Chiplet technology enables the integration of an increasing number of transistors on a single accelerator with higher yield in the post-Moore era, addressing the immense computational demands arising from rapid AI advancements. However, it also introduces more expensive packaging and costly Die-to-Die (D2D) interfaces, which require more area, consume more power, and offer lower bandwidth than on-chip interconnects. Maximizing the benefits and minimizing the drawbacks of chiplet technology is crucial for developing large-scale DNN chiplet accelerators, which poses challenges to both architecture and mapping. Despite its importance in the post-Moore era, methods to address these challenges remain scarce.

To bridge the gap, we first propose a layer-centric encoding method to encode Layer-Pipeline (LP) spatial mapping for large-scale DNN inference accelerators and depict its optimization space. Based on this encoding, we analyze the unexplored optimization opportunities within this space, which play an even more crucial role in chiplet scenarios. Building on the encoding method and a highly configurable, universal hardware template, we propose Gemini, an architecture and mapping co-exploration framework that explores the design and mapping space of large-scale DNN chiplet accelerators while taking monetary cost (MC), performance, and energy efficiency into account. Compared to the state-of-the-art (SOTA) Simba architecture with SOTA Tangram LP mapping, Gemini's co-optimized architecture and mapping achieve, on average, 1.98× performance improvement and 1.41× energy efficiency improvement simultaneously across various DNNs and batch sizes, with only a 14.3% increase in monetary cost. Moreover, we leverage Gemini to uncover intriguing insights into how to utilize chiplet technology in architecture design and how to map DNN workloads under chiplet scenarios. The Gemini framework is open-sourced at <https://github.com/SET-Scheduling-Project/GEMINI-HPCA2024>.

§ INTRODUCTION

As Deep Neural Networks (DNNs) tackle increasingly complex problems, their size and complexity grow rapidly, resulting in increased computing and storage demands <cit.>. While applying more advanced technology and enlarging single-chip sizes have led to many large-scale monolithic accelerators with tens of billions of transistors <cit.>, the end of Moore's Law <cit.> and the limited photomask size pose significant challenges to further transistor integration.

Chiplet technology, which uses advanced packaging to combine small functional dies, offers a promising solution to overcome these limitations and enable continued transistor integration. Chiplet-based DNN inference accelerators, such as Simba with 36 dies <cit.>, have emerged. However, this technology introduces new challenges for architectural design and DNN mapping for large-scale chiplet accelerators, which are outlined below:

For architecture design, the main challenge is determining the optimal chiplet granularity. While chiplet technology improves area limits and yield, it introduces higher packaging expenses and D2D interconnection costs. D2D interconnections are more energy- and area-intensive, yet provide lower bandwidth than on-chip lines. All the above adverse effects are collectively referred to as Chiplet Costs.
Consequently, a trade-off arises between using more, smaller chiplets for better yield and fewer, larger chiplets for lower Chiplet Costs. Balancing this trade-off remains an unsolved challenge.

For DNN mapping, the main challenges stem from the larger scale enabled by chiplet technology and from the costly D2D links. For the first challenge, maintaining high utilization and energy efficiency becomes increasingly difficult as accelerators grow. LP mapping, in which multiple layers are spatially mapped onto the accelerator, is widely employed by large-scale accelerators in both academia <cit.> and industry <cit.> to relieve this challenge. The core of LP mapping is spatial mapping (SPM), which determines which part of which layer is allocated to which core and significantly impacts the performance and energy efficiency of large-scale accelerators. Despite the importance of LP SPM, most current strategies are still heuristic <cit.>. The problem and optimization space of LP SPM have not been clearly defined, exhaustively explored, or fully understood. This limitation constrains the ability to fully leverage optimization opportunities in LP mapping, which become increasingly important as the scale of accelerators and the structural complexity of DNNs increase. The second challenge lies in the fact that D2D links tend to consume more energy and provide lower bandwidth than on-chip lines. Therefore, devising spatial mapping strategies that automatically reduce D2D communication is vital for enhancing chiplet accelerator performance and efficiency and for fully harnessing the benefits of chiplet technology. However, such approaches are noticeably absent from the existing literature.

The above analysis reveals that the employment of chiplet technology introduces intricate trade-offs and challenges. Thus, maximizing the benefits of chiplet technology and minimizing its disadvantages is crucial for developing DNN chiplet accelerators. This goal not only poses challenges for architectural design but also necessitates more efficient mapping schemes that effectively utilize larger accelerators while reducing the expensive communication costs between chiplets. To address these challenges, we make the following contributions:

(1) We propose a layer-centric encoding method for representing LP SPM schemes in many-core chiplet DNN inference accelerators. Leveraging this encoding method, we delineate the optimization space of LP mapping, calculate its immense size, which significantly outstrips that covered by existing heuristic strategies, and analyze the latent optimization opportunities concealed within this space. These opportunities become particularly crucial in the post-Moore chiplet era. To the best of our knowledge, this is the first work that clearly and systematically defines the optimization space of LP SPM for DNN inference accelerators.

(2) Based on the aforementioned encoding and a highly configurable and universal hardware template, we develop Gemini, a mapping and architecture co-exploration framework for large-scale DNN chiplet accelerators. Gemini includes two main engines: the Mapping Engine and the Monetary Cost Evaluator. In the Mapping Engine, a Simulated Annealing (SA) algorithm with five specifically-designed operators is developed to explore the extensive space defined by our encoding method and to automatically minimize costly D2D communication. The Monetary Cost Evaluator assesses the MC of accelerators with varying architectural parameters.
To the best of our knowledge, Gemini is the first framework to jointly explore the optimization space of mapping and architecture for large-scale DNN chiplet accelerators, considering not only energy consumption and performance but also MC. The Gemini framework is open-sourced at <https://github.com/SET-Scheduling-Project/GEMINI-HPCA2024>.

(3) We utilize Gemini to reveal several intriguing insights about utilizing chiplet technology in architecture design and mapping DNN workloads under chiplet scenarios: (i) With the support of advanced mapping, moderately partitioning an accelerator into chiplets can reduce MC with nearly no loss of performance and energy efficiency; however, overly fine-grained chiplet partitions worsen MC, performance, and energy efficiency simultaneously. (ii) Regarding the granularity of computing cores, both energy efficiency and performance initially increase (albeit at a decelerating pace) as the granularity becomes finer (i.e., more cores), but experience a slight decline thereafter. Moreover, MC tends to rise as core granularity becomes finer, corresponding to an increase in the number of cores. (iii) With the right design considerations and aided by the Gemini framework, employing a single chiplet for multiple accelerators becomes an effective approach for DNN inference accelerators. However, we also demonstrate the limitations of a "one-size-fits-all" method, particularly the impracticality of designing a small-scale chiplet intended to cater to a very wide spectrum of computational power requirements across diverse scenarios. (iv) Studying the properties of spatial mapping, we find that, rather than allocating the cores of each layer in a gathered manner, gathering the clusters with heavy data transfer together is more important and beneficial.

(4) Compared to the SOTA Simba architecture with Tangram SPM, Gemini's co-optimized architecture and mapping achieve, on average, 1.98× performance and 1.41× energy efficiency simultaneously across various DNNs and batch sizes, with only a 14.3% increase in MC.

§ BACKGROUND AND MOTIVATION

§.§ Trade-offs Introduced by Chiplet

Chiplet technology is fundamentally an advanced packaging solution. It integrates multiple dies (chiplets) onto a substrate, which could be an organic substrate <cit.> or a silicon interposer <cit.>. This integration is achieved through high-density interconnections, allowing the dies to function collectively as a single chip. This technology offers numerous benefits, but it also incurs additional costs. In this section, we discuss the trade-offs of employing chiplet technology.

As shown in Fig. <ref>, the advantages of introducing chiplets are fourfold. First, it increases the overall yield of a chip. By partitioning a large-scale monolithic chip into smaller chiplets, the overall yield can be significantly improved. For example, at the 7nm technology node, the yields of an 800 mm^2 chip and a 200 mm^2 chip are approximately 18% and 75%, respectively <cit.>. Second, it extends the area limit of a chip, as advanced substrates (e.g., organic substrates <cit.> or interposers <cit.>) have a larger area limit than monolithic chips (the 858 mm^2 reticle limit <cit.>). Third, it enables heterogeneous integration. Unlike logic circuits, analog circuit IPs do not significantly benefit from the performance and density improvements brought about by technological advancements <cit.>.
Therefore, manufacturing logic circuits with advanced technologies while producing various IO-functional analog IPs with more mature processes can save on the expensive manufacturing, design, and IP-related costs associated with advanced technologies <cit.>. Fourth, the ability to repurpose a single chiplet for developing multiple computational chips, each differing in scale or targeted application, is a significant advantage of chiplet technology. This approach can substantially reduce the enormous Non-Recurring Engineering (NRE) costs and the time traditionally associated with developing different chips for each scale and scenario. A notable success story in this regard is AMD's Zen series CPUs <cit.>. While the advantages of chiplet technology are not the main focus of our research, given our primary emphasis on the architecture of an accelerator designed for a specific scenario, we do include a brief case study on this topic in Sec. <ref> for comprehensive understanding.

Fig. <ref> also depicts the fourfold disadvantages of chiplets, all resulting from D2D interfaces. First, they increase energy consumption, as D2D links consume several to dozens of times more energy than the less than 0.1 pJ/bit data transfer cost of on-chip lines <cit.>. Second, limited inter-chiplet communication bandwidth may decrease performance: compared to the ample on-chip interconnect resources, interconnects between chiplets have limited bandwidth due to the limited number of IO pins available around each chiplet. Third, D2D interfaces require more area, as they need a specific analog PHY and controller, unlike on-chip lines, which occupy almost no silicon area. Fourth, the need for massive interconnections between chiplets increases packaging substrate costs, as advanced packaging scenarios require organic substrates with dozens of layers or silicon interposers, compared to the basic fan-out substrates sufficient for monolithic chips.

All the trade-offs above influence three facets of a chip: energy consumption, performance, and MC. It is particularly important to emphasize that in the chiplet era, considering the silicon area alone is inadequate for evaluating a chip's MC; factors such as yield and packaging costs must also be taken into account. Consequently, designing an efficient DNN chiplet accelerator requires carefully balancing these trade-offs. However, existing works fall short of considering all these trade-offs simultaneously. Some works <cit.> study the MC of chiplet-based chips but do not consider their architectural details and the corresponding influence on performance and energy consumption. NN-baton <cit.> focuses on optimizing energy consumption and performance for small-scale DNN chiplet accelerators but does not consider the trade-offs on MC. Furthermore, NN-baton's ring-based template and layer-sequential mapping strategies limit its applicability to small-scale accelerators and restrict its scalability.

The significant potential and intricate trade-offs of applying chiplet technology to DNN inference accelerators, alongside the current research void, inspire us to create a framework that co-explores architecture and mapping for these accelerators.
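To make the MC facet of these trade-offs concrete, consider a back-of-envelope comparison based on the 7nm yield figures quoted above (a sketch with an arbitrary unit silicon price, which cancels out of the ratio; it deliberately ignores the packaging cost and D2D area overheads that chiplets add):

    #include <cstdio>

    int main() {
        // 7nm yields quoted above: 800 mm^2 ~ 18%, 200 mm^2 ~ 75%.
        const double cost_per_mm2 = 1.0;   // placeholder unit price
        const double mono_mm2 = 800.0, mono_yield    = 0.18;
        const double chip_mm2 = 200.0, chiplet_yield = 0.75;

        // Cost of *good* silicon = raw silicon cost / yield.
        double mono = mono_mm2 * cost_per_mm2 / mono_yield;
        double quad = 4 * chip_mm2 * cost_per_mm2 / chiplet_yield; // 4 x 200 mm^2
        printf("monolithic %.0f vs 4 chiplets %.0f (%.2fx)\n",
               mono, quad, mono / quad);
        return 0;
    }

Before any Chiplet Costs are charged back, the four-chiplet split buys roughly a 4.2× reduction in the cost of good silicon; how much of that margin survives packaging, D2D area, and mapping losses is what the rest of this paper quantifies.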
§.§ Mapping Challenges

While chiplet technology facilitates the construction of larger-scale accelerators, translating theoretical computing power into actual computing performance presents an escalating challenge. A significant portion of existing research focuses on optimizing layer-sequential (LS) mapping for small-scale accelerators <cit.>. However, as accelerator scale increases, LS mapping demonstrates limited scalability <cit.>, whereas LP mapping exhibits higher potential. For instance, Simba employs a naive LP mapping for some blocks of layers in ResNet-50, achieving approximately a 1.7× throughput improvement over its original LS approach.

DNNs can be viewed as a Directed Acyclic Graph (DAG), where each layer is a node. In LP mapping, a graph (or subgraph) is mapped onto a core array simultaneously, with different core groups computing different layers. The on-chip interconnect is responsible for transmitting the feature maps of layers that share dependencies. SPM determines which core computes which part of which layer. As shown in Fig. <ref>, the top-left corner shows a DNN DAG containing two layers. In LS mapping, all six cores compute each layer one by one, whereas in LP mapping, some of the six cores compute the first layer while the remaining cores compute the second layer. The feature maps between the two layers can be transferred via the on-chip network without accessing DRAM.

While LP mapping presents significant potential, it is less extensively researched than LS mapping. Most existing studies employ techniques such as fine-grained pipelining <cit.> and temporal merging of some layers or workloads <cit.> to reduce the imbalance across different pipeline stages and mitigate filling and draining overheads in LP. However, with regard to the SPM part, these studies do not offer specific optimizations and directly adopt heuristic stripe-based SPM strategies, which assign each layer to a consecutive, rectangle-shaped group of cores. The problem and the corresponding optimization space of LP SPM have not been clearly defined, let alone thoroughly explored or understood. It is worth noting that since most existing works also have an SPM optimization stage, the optimization method proposed in this work can easily integrate with existing methodologies.

Another significant challenge posed to SPM is mitigating the adverse effects on energy consumption and performance introduced by chiplet technology (Fig. <ref>). Fundamentally, these negative influences stem from the integration of high-energy-cost, lower-bandwidth D2D links into the chip. Consequently, this challenge primarily revolves around automatically reducing D2D communication for various workloads, an aspect that is not considered or optimized by existing heuristic LP SPM strategies <cit.>.

§ SCALABLE HARDWARE TEMPLATE

In this section, we introduce the architecture of our universal and configurable hardware template, which is created by extracting common features from existing chiplet accelerators <cit.> and large-scale accelerators <cit.>.

Overall Architecture: As shown in Fig. <ref>(a), the proposed template comprises two distinct types of chiplets: IO chiplets and Computing chiplets. A mesh NoC interconnects all computing cores in all Computing chiplets and the controllers within IO chiplets, allowing for arbitrary core-to-core, core-to-DRAM, and DRAM-to-core communication. For inter-chiplet communication, the D2D transmitter (TX) within a chiplet independently encodes the data and forwards it to the corresponding D2D receiver (RX) in another chiplet. The D2D RX decodes the data and proceeds with NoC transmission.
This inter-chiplet communication is fully automatic and transparent to both the source and the destination. This heterogeneous architecture allows for arbitrary numbers of IO and computing chiplets, ensuring the scalability of the entire template. Otherwise, equipping each computing chiplet with an IO-related PHY and controller would occupy the chip's edge area and IO pins, which could affect the routing of D2D links and, consequently, the system's scalability. Our hardware template and the corresponding hardware model in the Gemini framework (see Sec. <ref>) are not limited to supporting only the mesh topology; they can easily be modified to support various topologies (demonstrated in Sec. <ref>). However, to ensure the signal integrity of the D2D links, there are limitations on interconnect distance and cross-linking, which restrict certain topologies like torus and butterfly. It is therefore preferable to maintain a point-to-point parallel interconnect, as shown in Fig. <ref>. Considering these factors and the fact that most existing tiled accelerators <cit.> adopt mesh interconnect networks, we default to the mesh topology in this paper.

Computing Chiplet Architecture: Each Computing Chiplet contains an arbitrary number of computing cores interconnected by a mesh NoC. To enhance the scalability of this template, D2D interfaces are placed around the chiplet, their number equal to the number of computing cores on each side. This arrangement enables the Computing Chiplet to form a larger-scale mesh with other chiplets. In the example illustrated in Fig. <ref>(a), there are four cores on each side, so we place four D2D interfaces on each side of the Computing Chiplet.

The computing cores are the key components responsible for performing calculations in the entire accelerator, and their architecture is shown in Fig. <ref>(b). The communication unit comprises a DMA and a router, facilitating communication with other cores and DRAM. The control unit is primarily responsible for managing computation tasks based on statically-compiled instructions and task progress information, as well as managing the reception and transmission of data or messages to and from external cores or DRAMs. The global buffer (GLB) of each core is globally visible in the entire accelerator: every core can read data from or write data to the GLBs of other cores, provided the data is valid or the address is writable. The PE array and the vector unit are responsible for computing General Matrix Multiply (GEMM)/Convolution (Conv) and vector/scalar operators, respectively. Each time, the PE array reads a workload tile's weights and input feature maps (ifmaps) from the GLB. The resulting output feature maps (ofmaps) or partial sums (psums) can be written directly back to the GLB or post-processed within the vector unit (e.g., Batch Normalization and ReLU operations). The vector unit can also be invoked independently to compute vector/scalar operators.

IO Chiplet Architecture: The IO Chiplet is equipped with an array of IO functionalities, enabling interactions with DRAM, host systems, or other input sources (e.g., cameras). All input data from host systems or alternative input sources are first loaded into DRAM, and can then be loaded and processed by the computing cores.
The DRAM controller is also connected to multiple routers within the mesh NoC to match the bandwidth of the DRAM and the network, ensuring full utilization of DRAM bandwidth.

Configurable Parameters: Our template features excellent configurability, offering a range of adjustable architectural parameters. These include the NoC bandwidth, D2D bandwidth, total DRAM bandwidth, the total number of cores in the X-direction (e.g., 8 in Fig. <ref>), the total number of cores in the Y-direction (e.g., 8 in Fig. <ref>), the number of chiplet divisions in the X-direction (X_Cut, e.g., 2 in Fig. <ref>), the number of chiplet divisions in the Y-direction (Y_Cut, e.g., 2 in Fig. <ref>), the number of MACs in the PE array within a single core, and the size of the GLB per core. It is worth mentioning that the microarchitecture of the PE array and its corresponding dataflow have been extensively studied in existing works <cit.>. Therefore, in this work, the PE array adopts the classic NVDLA architecture <cit.> and corresponding dataflow to maintain a fair comparison with the baseline, Simba. Of course, NVDLA can be replaced by other microarchitectures with different dataflows, which our template supports. Based on this highly configurable template, we have developed matching delay, energy consumption, and MC evaluators, as introduced in Sections <ref> and <ref>, which enable our comprehensive design space exploration.

§ LP SPATIAL MAPPING ENCODING

§.§ Encoding Format and Parsing Methods

In this section, we introduce a layer-centric encoding method for describing LP SPM schemes. At a high level, the encoded LP SPM scheme encapsulates two key aspects of information: (1) the allocation and partition of each layer to specific cores for computation, and (2) the data sources and destinations for the workload on each core. This encoding method exhibits great generality and can adapt seamlessly to diverse NoC topologies and core microarchitectures (demonstrated in Sec. <ref>). Below, we provide a comprehensive description of our encoding format and the corresponding parsing method.

Consider an N-layer DNN DAG that needs to be spatially mapped onto an accelerator in an LP manner. The accelerator features a core group CG comprising M cores and D DRAMs. The layers in the DAG form a Layer Group (LG). For the example in Fig. <ref>, the DAG consists of two Convs (Layer_1 and Layer_2), forming a layer group with these two layers, and the accelerator has a core group containing six cores and two DRAMs. In our encoding format, an LP Spatial Mapping Scheme (LMS) of a layer group consists of the Mapping Scheme (MS) of each layer within it. The MS of layer i has three attributes: Partition (Part_i = (H_i, W_i, B_i, K_i)), Core Group (CG_i = (Cid_i,1, Cid_i,2, ..., Cid_i,nc_i), where nc_i is the number of cores in CG_i), and Flow of Data (FD_i = (IF_i, WGT_i, OF_i), with -1 ≤ IF_i, WGT_i, OF_i ≤ D). The left side of Fig. <ref> shows the LMS of a layer group containing two layers and the MS of each layer.

Part_i partitions layer_i along each dimension of the four-dimensional output cube into approximately equal parts, one for each of the nc_i cores in CG_i. As shown in Fig. <ref>, the four dimensions are Batch (B), the number of samples processed at a pipeline stage; ofmaps channel, which also indexes the weight kernels (K); ofmaps Height (H); and ofmaps Width (W). B_i, K_i, H_i, and W_i denote the number of partitions along the corresponding dimension.
Based on this ofmaps partition scheme, the partition schemes for ifmaps and weights can be uniquely determined according to the features of each layer type. For the example in Fig. <ref>, layer_1 is a Conv with B and K equal to 2 and 2, respectively (the other dimensions are not partitioned in this example). The example shows how Part_1 partitions layer_1's ofmaps (on the left of the arrow) and how the ofmaps partition scheme deduces the corresponding ifmaps and weight partition schemes (on the right of the arrow).

CG_i contains the cores dedicated to computing layer_i. CG_i is ordered ((C_1, C_2) ≠ (C_2, C_1)), and each of its elements can be an arbitrary core in CG. As shown in Fig. <ref>, CG_1 is (2,1,5,4).

We then establish a Correspondence Rule to map each partitioned workload (PW) of layer_i to a corresponding core in CG_i. First, we assign each PW a unique four-dimensional ID based on its location in the partitioned ofmaps cube: PW = (h,w,b,k), h ∈ [0,H_i), w ∈ [0,W_i), b ∈ [0,B_i), k ∈ [0,K_i). We then transform the four-dimensional ID into a numerical ID (NID), which equals h× W_i× B_i× K_i + w× B_i× K_i + b× K_i + k. Each partitioned workload is assigned to the (NID+1)-th core in CG_i. For example, in Fig. <ref>, the top-left subfigure of the four is layer_1's first part along both the batch and ofmaps channel dimensions, and H and W of layer_1 are not partitioned; thus, its four-dimensional ID and NID are (0, 0, 0, 0) and 0, respectively, and it is mapped to the first core (C_2) in CG_1.

FD_i represents the data sources of layer_i's ifmaps (IF_i) and weights (WGT_i), and the destination of layer_i's ofmaps (OF_i). The flow of data falls into two types: flows that require explicit management (non-negative values) and flows that do not require explicit management or are simply absent (-1). The scenarios that require explicit management are as follows: (1) For ofmaps, when the subsequent layer is not in the same layer group as the current layer, or when the output of the current layer is the output of the entire DNN, the temporary storage of this layer's output to a specific DRAM must be managed explicitly. (2) For ifmaps, explicit management is required only when the input of the current layer is the input of the entire DNN; otherwise, the data can be fetched from the DRAM where the previous layer's ofmaps were stored. (3) For weights, explicit management is required whenever a layer has weights. Hence, the primary concern of explicit data-flow management is determining which DRAM to store data to or fetch data from. A value greater than 0 is the ID of that DRAM, while 0 represents a special case, interleaving, where data is evenly distributed across all DRAMs to fully and evenly utilize the available bandwidth. For example, as depicted in Fig. <ref>, because layer_1 takes the input of the whole DNN and has weights, IF_1 and WGT_1 must be non-negative; FD_1 = (1,1,-1) indicates that the ifmaps and weights of layer_1 originate from DRAM 1. If two layers sharing a dependency are in the same layer group, the ofmaps of the previous layer and the ifmaps of the subsequent layer need not be explicitly managed, because the destination of each part of the previous layer and the source of each part of the subsequent layer can be inferred directly from the partition schemes and CGs of these layers. Moreover, for layers without weights, WGT_i is -1. For example, in Fig. <ref>, based on Part_1, CG_1, Part_2, and CG_2, the data communication dependency between the cores of layer_1 and layer_2 can be deduced directly; thus, OF_1 and IF_2 are -1.
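The whole encoding fits in a few lines of code. The following C++ sketch uses our own hypothetical type and field names (not those of the open-source Gemini code); it captures the three attributes of an MS and the Correspondence Rule:

    #include <vector>

    struct Partition { int H, W, B, K; };        // Part_i over the ofmaps cube

    struct MappingScheme {                       // MS of one layer
        Partition part;                          // Part_i
        std::vector<int> core_group;             // CG_i: ordered core IDs
        int if_src, wgt_src, of_dst;             // FD_i: -1 implicit/absent,
    };                                           //   0 interleaved, >0 DRAM ID

    using LMS = std::vector<MappingScheme>;      // one MS per layer in the group

    // Correspondence Rule: partitioned workload (h,w,b,k) -> core in CG_i.
    int assigned_core(const MappingScheme& ms, int h, int w, int b, int k) {
        const Partition& p = ms.part;
        // NID = h*W*B*K + w*B*K + b*K + k
        int nid = ((h * p.W + w) * p.B + b) * p.K + k;
        return ms.core_group[nid];               // the (NID+1)-th core, 0-indexed here
    }

For the example of Fig. <ref>, Part_1 = (1,1,2,2) and CG_1 = (2,1,5,4): the workload (0,0,0,0) has NID 0 and is assigned to core 2, matching the figure.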
§.§ Space Calculation

As shown in Fig. <ref>, each LMS is a point in the optimization space defined by our encoding method. The optimization space of mapping N layers onto an accelerator with a core group containing M cores and D DRAMs is considerably large and extremely complex to calculate. We therefore conservatively approximate a lower bound on its size as $M!\sum_{i=0}^{N-1}\binom{N}{i}\binom{M-N-1}{N-i-1}4^{N-i}$ schemes, where $\binom{x}{y}$ is the binomial coefficient, equal to $\frac{x!}{y!(x-y)!}$. As a comparison, an upper bound on the optimization space of the SOTA heuristic Tangram is N· part(M), where part(M) represents the total number of possible factorizations of the integer M. The optimization space defined by our encoding method is thus significantly larger than that of the Tangram heuristic. The detailed calculation procedure and tables of the optimization space under different M and N for our method and Tangram can be found at this link <cit.>.

§.§ Unveiling Hidden Optimization Opportunities

In this section, we introduce the general optimization opportunities for multi-core DNN accelerators hidden within this space, and explain how these opportunities become even more significant in chiplet scenarios.

First, the Part attribute of each layer impacts two aspects: (1) NoC communication volume: different partition schemes lead to different data requirements for each core, causing disparities in NoC communication volume, even with multicast capabilities. For instance, in Fig. <ref>, under Part_1, each core requires half of the ifmaps and half of the weights. However, if the Part is changed to (1,1,1,4), each core needs the entire ifmaps and only 1/4 of the weights. (2) Intra-core optimization space: the dimensions of the partitioned workloads generated by distinct partition schemes differ, which in turn affects the optimal intra-core dataflow scheme.

Second, the number and positions of cores can vary in each CG attribute. The number of cores affects the computation time of each layer, which in turn influences the overall pipeline computation time, as the slowest stage can stall the pipeline. Core positions can significantly impact the data transmission volume and congestion level of the NoC.

Third, different FD attributes affect the bandwidth utilization and access patterns of the different DRAMs, as well as NoC communication. As shown in Fig. <ref>, the ifmaps and weights of layer_1 are read from DRAM1, while the weights and ofmaps of layer_2 are read from and written into DRAM2, respectively. In this case, the bandwidth demand and access patterns of the DRAMs are not balanced, but the total NoC hop count is relatively low. If all positive values in FD_1 and FD_2 change to 0 (interleaved), DRAM bandwidth usage becomes more balanced, but the total NoC hop count increases, since some data must travel to more remote DRAMs.
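The first opportunity above is easy to quantify. The sketch below (ours; it assumes unit stride and ignores halo overlap) computes the per-core ifmaps and weight volumes of a Conv layer under two Part choices:

    #include <cstdio>

    struct ConvShape { int B, C_in, H, W, K, R, S; };  // R, S: kernel height/width

    // Per-core data requirement under an ofmaps partition Part = (Hp, Wp, Bp, Kp).
    void per_core_volumes(ConvShape l, int Hp, int Wp, int Bp, int Kp) {
        // Each core computes a (B/Bp, K/Kp, H/Hp, W/Wp) slab of ofmaps, so it
        // needs the matching ifmaps slab (all C_in channels) and K/Kp kernels.
        double ifmap  = double(l.B) / Bp * l.C_in * double(l.H) / Hp * double(l.W) / Wp;
        double weight = double(l.K) / Kp * l.C_in * l.R * l.S;
        printf("Part=(%d,%d,%d,%d): ifmaps %.0f, weights %.0f per core\n",
               Hp, Wp, Bp, Kp, ifmap, weight);
    }

    int main() {
        ConvShape l{2, 64, 56, 56, 4, 3, 3};
        per_core_volumes(l, 1, 1, 2, 2);  // half the ifmaps, half the weights
        per_core_volumes(l, 1, 1, 1, 4);  // full ifmaps, a quarter of the weights
        return 0;
    }

Both Parts use four cores, yet their NoC loads differ: the (1,1,2,2) split halves both operands per core, while the (1,1,1,4) split forces every core to fetch the entire ifmaps.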
§ GEMINI FRAMEWORK

§.§ Gemini Overview

Gemini is a mapping and architecture co-exploration framework for DNN inference chiplet accelerators. As depicted on the left of Fig. <ref>, the inputs to Gemini consist of (1) architecture parameter candidates: each configurable architecture parameter (introduced in Sec. <ref>) is assigned a list of candidate values; (2) framework settings: the optimization goals and constraints, hyperparameters, and other relevant settings; and (3) DNN models: considering that an accelerator is often used to accelerate different DNNs under different scenarios, Gemini supports conducting DSE for n DNNs.

All architectural candidates are exhaustively explored with the optimization objective MC^α× E^β× D^γ, where MC, E, and D denote Monetary Cost, Energy Consumption, and Delay, and α, β, and γ denote the respective importance of MC, E, and D (performance). Note that the delay at small and large batch sizes can be used to characterize performance in latency-sensitive and throughput-sensitive scenarios, respectively. E and D are influenced not only by the architecture but also by the specific DNN workloads and their corresponding mapping strategies. Thus, the Mapping Engine employs a dynamic-programming-based graph partition algorithm and a Simulated-Annealing-based (SA-based) LP SPM exploration algorithm to optimize the mapping of the i-th DNN onto the architectural candidate. SA is a widely-used optimization algorithm <cit.>. This optimization process uses E_i^β× D_i^γ as the optimization goal and yields the evaluation of E_i and D_i for processing the i-th DNN on the given architecture parameters. The overall Energy Consumption and Delay of the architectural candidate are then determined as (∏_i=1^n E_i)^1/n and (∏_i=1^n D_i)^1/n, respectively. In contrast, the Monetary Cost is unaffected by workloads and mapping strategy, enabling the MC Evaluator to assess it directly from the architecture parameters.

§.§ Mapping Engine

As shown in the right part of Fig. <ref>, each DNN description file is first processed by the model parser, which generates a DNN topology graph and extracts the features of each layer. This information is then sent to the graph partition engine, which partitions the DNN graph into layer groups. Based on the explored graph partition scheme, the LP SPM exploration engine employs an SA-based algorithm with five specifically-designed operators to explore the LP SPM optimization space defined in Sec. <ref> for each layer group. Since the graph partitioning problem has been extensively studied <cit.>, we employ the same DP-based graph partition algorithm as Tangram <cit.>, which is also our baseline, to ensure a fair comparison in our experiments. This algorithm not only generates layer groups efficiently but also determines the number of samples (batch unit) processed in each pipeline stage <cit.>.

§.§.§ LP SPM Exploration Engine

In this engine, Gemini employs an SA-based algorithm with five specifically-designed operators to explore the broad space defined in Sec. <ref>, based on the graph partition scheme explored before. For each layer group, the initial LP SPM scheme is obtained using a widely adopted heuristic stripe-based strategy <cit.>. Then the SA iteration starts; the iteration loop is represented by the red arrow in Fig. <ref>. In each iteration, the SA controller randomly selects a layer group, with a probability distribution proportional to each group's optimization space size as calculated in Sec. <ref>. It then randomly selects one of the five operators to apply a transformation to the chosen layer group. Following this, the LP SPM Analyzer analyzes the modified scheme as introduced in Sec. <ref>. After the partition attribute of each layer has been analyzed, the partitioned workloads are scheduled by the intra-core exploration engine, which performs exhaustive-search optimization of tiling and loop reordering, as in many existing works <cit.>. Once the optimal intra-core dataflow solutions for all partitioned layers are found, they are sent, along with the other analyzed information, to the Evaluator for overall assessment. If the overall cost is lower, the change is accepted; otherwise, it is accepted with a probability that decreases as the number of iterations increases.
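The loop just described condenses into a standard simulated-annealing skeleton. The sketch below is ours: apply_op stands for one of the five operators introduced next, evaluate for the Analyzer-plus-Evaluator pipeline returning E^β× D^γ, and the linear cooling schedule is an assumption (Gemini's actual schedule may differ):

    #include <cmath>
    #include <random>

    template <class Scheme, class ApplyOp, class Evaluate>
    void anneal(Scheme& s, ApplyOp apply_op, Evaluate evaluate,
                int iters, double t0) {
        std::mt19937 rng{42};
        std::uniform_real_distribution<double> u(0.0, 1.0);
        double cost = evaluate(s);
        for (int i = 0; i < iters; ++i) {
            Scheme cand = s;
            apply_op(cand, rng);                        // random layer group + operator
            double c = evaluate(cand);
            double t = t0 * (1.0 - double(i) / iters);  // temperature decays over time
            // Accept improvements always; regressions with a decaying probability.
            if (c < cost || u(rng) < std::exp((cost - c) / t)) {
                s = std::move(cand);
                cost = c;
            }
        }
    }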
SA Operators: We develop five SA operators to facilitate the exploration process:

OP1: Randomly select a layer and change the values in its Part, while still satisfying the constraints on Part_i.

OP2: Randomly select a layer and swap two random cores within its CG, which is equivalent to exchanging the workloads of these two cores for a single layer.

OP3: Randomly select two layers and swap two cores between their CGs, which is equivalent to exchanging the workloads of these two cores across two layers.

OP4: Randomly select two layers, remove a random core from the CG of one layer, and add it to the CG of the other layer. After the operation, randomly update the Parts of both layers to match their new CG sizes.

OP5: Randomly select a layer, choose a random non-negative item in its FD, and update its value randomly within the range of 0 to the number of DRAMs.

Utilizing these operators allows each attribute to transition into any other state that fulfills the corresponding constraints through a sequence of transformations. For instance, the size of CG_1 in Fig. <ref> can be modified to any number from 1 to 5 through a series of OP4 operations. Crucially, this ensures that any point within the LP SPM optimization space can be reached from any other (demonstration link <cit.>), thereby guaranteeing comprehensive exploration by the SA algorithm and yielding near-optimal solutions.

Through our SA-based algorithm combined with the specially designed operators, Gemini not only explores the optimization space to balance the trade-offs introduced in Sec. <ref>, but also automatically optimizes D2D link communication. Since D2D links tend to have lower bandwidth and higher energy consumption, an SA operation that increases D2D link usage is likely to significantly reduce performance and energy efficiency, making it less likely to be accepted, whereas operations that reduce D2D link usage are more likely to be accepted. The entire search process therefore inherently optimizes D2D communication, as demonstrated in Sec. <ref>. Furthermore, this mapping technique can aid architecture design by allowing accelerators to be equipped with lower D2D bandwidth, reducing area overhead and reaping the yield benefits of chiplets while incurring only minimal performance and energy efficiency losses.

§.§.§ Evaluator

The Evaluator in the Gemini framework, which is modified from SET <cit.>, has two interfaces: one for intra-core evaluation and the other for global evaluation.
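In outline, the two interfaces look roughly as follows (hypothetical signatures, ours; the cost models mirror the descriptions in the following paragraphs):

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Energy = sum over components of (operation count x unit energy).
    struct OpCounts { std::vector<double> count, unit_energy; };

    double energy(const OpCounts& ops) {
        double e = 0;
        for (std::size_t i = 0; i < ops.count.size(); ++i)
            e += ops.count[i] * ops.unit_energy[i];
        return e;
    }

    // Intra-core time = max of the MAC time and, per memory level, bytes / bandwidth.
    double intra_core_time(double mac_ops, double macs_per_cycle,
                           const std::vector<double>& bytes,
                           const std::vector<double>& bw) {
        double t = mac_ops / macs_per_cycle;       // compute-bound term
        for (std::size_t i = 0; i < bytes.size(); ++i)
            t = std::max(t, bytes[i] / bw[i]);     // memory-bound terms
        return t;
    }

    // The global interface additionally tallies NoC/D2D link volumes and DRAM
    // access patterns, then drives a simulator for delay (elided here).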
When called by the intra-core exploration engine, the Evaluator calculates the number of operations of each kind, such as the number of memory accesses to each buffer level and the number of multiply-accumulate (MAC) operations at each precision. From this information, the total energy consumption is obtained by summing, over all components, the operation count multiplied by the corresponding unit energy. The overall computation time of the workload on a core is the maximum of the MAC computation time and, for each memory level, the data access volume divided by the respective access bandwidth.

Based on the LP SPM scheme produced by the Analyzer and the explored intra-core scheduling scheme of each partitioned layer, the Evaluator conducts the global evaluation by analyzing the data communication volume on each on-chip network link and D2D link, the access patterns of the DRAMs, the number of memory accesses to each buffer level within each core, and the operation counts of the various-precision computation units. A simulator then assesses the delay of the DNN with a specific batch size on this accelerator. The energy consumption is calculated by summing the operation counts of each component in the accelerator, multiplied by their respective unit energy. It is worth mentioning that the energy consumption of NoC routers is predominantly attributed to the input buffer and crossbar components; therefore, the per-flit energy of NoC routers does not vary significantly across traffic patterns and can be treated as a constant <cit.>. The energy consumption of D2D links falls into two categories. The first is clock-embedded D2D, where the clock signal has no dedicated channel and must be recovered from the data; a typical example is SerDes <cit.>. This type consumes almost the same power whether or not data is being transmitted, so its energy is calculated as Number of D2Ds × Power per D2D × Latency. The second is clock-forwarding D2D, which has a separate clock channel, such as GRS <cit.> and UCIe <cit.>. This type can enter a low-power state when not transmitting data, so its energy is calculated as D2D Communication Volume × Unit D2D Communication Energy, similar to on-chip networks. Since the baseline of this work is Simba with GRS links, the second model is the default in our experiments to guarantee a fair comparison.
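The two D2D energy models can be stated compactly (a sketch; names ours):

    // Clock-embedded (e.g., SerDes): near-constant power whether or not data
    // moves, so energy = n_links * power_per_link * latency.
    // Clock-forwarding (e.g., GRS, UCIe): idles when silent, so energy =
    // volume * energy_per_bit, as for on-chip links (the default model here).
    enum class D2DType { ClockEmbedded, ClockForwarding };

    double d2d_energy(D2DType t, double n_links, double power_per_link,
                      double latency, double volume_bits, double energy_per_bit) {
        return t == D2DType::ClockEmbedded
                   ? n_links * power_per_link * latency
                   : volume_bits * energy_per_bit;
    }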
§.§ Monetary Cost Evaluator

The MC Evaluator assesses the production cost of different architecture candidates, which mainly comprises chiplet silicon cost, DRAM cost, and packaging cost. The evaluation of the total silicon area of all chiplets (Area_tot = ∑_i=1^n Area_Die_i) serves as the fundamental cornerstone of the MC Evaluator: both the chip silicon cost and the packaging cost are directly driven by this area evaluation, as shown in the following paragraphs. Each chiplet's area is the sum of the areas of its constituent modules. In this work, the area of analog IPs, such as the PCIe PHY, DDR PHY, and D2D PHY, is obtained directly from the corresponding datasheets. For the logic modules, the area estimation is based on the Verilog code and evaluation process employed during the development of our own chip.

The silicon cost of a die (chiplet) is Area_Die_i / Yield_Die_i · C_silicon, where Yield_Die_i = Yield_unit^(Area_Die_i / Area_Die_unit) <cit.>. Yield_unit represents the yield of a unit of area (Area_Die_unit); in this paper, Yield_unit at 12nm is assumed to be 0.9, and Area_Die_unit is set to 40 mm^2.

The DRAM cost is ⌈ DRAM_bw / Unit_bw ⌉ · C_DRAM_Die, where Unit_bw and C_DRAM_Die represent the bandwidth provided by each DRAM die and the MC of each DRAM die, respectively. The Unit_bw and C_DRAM_Die used in this work (GDDR6) are 32 GB/s and $3.5 <cit.>.

The packaging cost equals (Area_tot · f_scale) / Yield_package · C_package. Because the substrate requires a larger area than the total area of all chiplets to accommodate IO fan-out and interconnect wiring, an empirical scaling factor f_scale is used to derive the substrate area from the total silicon area <cit.>. C_package represents the monetary cost per unit area of the substrate and varies across substrate types. For instance, for an organic substrate without chiplet technology, a standard fan-out substrate is relatively inexpensive ($0.005/mm^2), whereas with chiplet technology the price increases due to the need to support high-density interconnects. C_package also varies across area ranges: larger substrate areas require more intricate manufacturing processes, resulting in higher C_package.
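Putting the three terms together, a condensed sketch of the MC model (helper names ours; constants as quoted above for the 12nm, GDDR6, organic-substrate case):

    #include <cmath>

    double die_yield(double area_mm2) {           // Yield_unit^(Area / Area_unit)
        return std::pow(0.9, area_mm2 / 40.0);    // 0.9 per 40 mm^2 at 12nm
    }

    double silicon_cost(double area_mm2, double c_silicon_per_mm2) {
        return area_mm2 / die_yield(area_mm2) * c_silicon_per_mm2;  // good-die cost
    }

    double dram_cost(double dram_bw_gbs) {
        return std::ceil(dram_bw_gbs / 32.0) * 3.5;  // GDDR6: 32 GB/s, $3.5 per die
    }

    double package_cost(double area_tot_mm2, double f_scale,
                        double yield_pkg, double c_pkg_per_mm2) {
        return area_tot_mm2 * f_scale / yield_pkg * c_pkg_per_mm2;
    }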
§.§ Future Work Grounded on Gemini

The field of DNN chiplet accelerators is still emerging, with limited research available. Gemini offers fertile ground for further exploration. In this section, we outline examples of promising future work in mapping and architectural design that can be pursued using Gemini, both to underscore Gemini's value and to foster community development.

At the mapping level, a promising direction is to co-explore the SPM optimization dimension with other optimization dimensions, such as the graph-level composite spatial-temporal dimension defined by SET. Jointly exploring these dimensions could yield better solutions, and it would also be valuable to investigate the interplay between them. On the architectural front, chiplet heterogeneity presents another compelling area for research. Questions around scheduling LP mapping on heterogeneous chiplets and, reciprocally, exploring architectural designs for heterogeneous accelerators in the context of LP mapping are of particular interest.

§ EVALUATION

§.§ Experiment Setup

§.§.§ DSE Configurations

In this work, we conduct DSE on DNN accelerators with 72 TOPs, 128 TOPs, and 512 TOPs to show Gemini's advantages and to gain a deeper understanding of the design space of DNN chiplet accelerators and the use of chiplet technology. We choose 72 TOPs instead of 64 TOPs to enable comparison with the SOTA Simba architecture with 72 TOPs <cit.>. Table <ref> presents the DSE parameters. Keeping the total computing power constant, we determine the number of cores from the MAC/Core configurations. To keep the core array's length and width as close as possible, we arrange the cores accordingly: for instance, with 36 cores, we configure them in a 6×6 arrangement, while for 18 cores, we adopt a 6×3 configuration. The values of X_Cut and Y_Cut must each be a factor of the number of cores along the corresponding edge; otherwise, the candidate is deemed invalid. We choose TSMC 12nm as the default process, which is widely used by commercial accelerators <cit.>, and an organic substrate as the default packaging substrate, as used in many successful commercial <cit.> and academic designs <cit.>. The default operating frequency is 1 GHz. For D2D, we use GRS <cit.> by default to guarantee a fair comparison with Simba. For brevity, explored architectural parameters are presented as (Chiplet Number, Core Number, DRAM_BW, NoC_BW, D2D_BW, GBUF/Core, MAC/Core).

The DSE adopts a batch size of 64, targeting a throughput-driven scenario as defined by the MLPerf benchmark <cit.>. Although the exploration process solely utilizes these two DNN models with a batch size of 64, we also evaluate additional DNNs with batch size 1 (latency-centric scenarios) on the explored architecture, demonstrating its broad applicability to other scenarios. We set the default DSE optimization objective to MC· E· D (as introduced in Sec. <ref>). The default workload is the Transformer <cit.>, since it is prevalent and widely used in scenarios such as image <cit.> and language processing <cit.>.

§.§.§ DSE Time

The DSE time increases with the target computing power. For example, the DSEs for 72 TOPs and 512 TOPs use 80 threads and 100 threads, respectively, and run for 2280 s and 23907 s on an Intel Xeon Platinum 8260 server.

§.§.§ Workloads

To fully demonstrate the advantages of our architecture and mapping co-exploration, we compare against the baseline over a wide range of DNNs, including ResNet-50 (RN-50) <cit.>, ResNeXt (RNX) <cit.>, Inception-ResNet (IRes) <cit.>, PNASNet (PNas) <cit.>, and Transformer (TF) <cit.>. ResNet-50 and ResNeXt are selected for their classic residual structures, which are prevalent in many DNNs. Inception-ResNet-v1 and PNASNet represent DNNs with more intricate dependencies. For the reasons mentioned above, we mainly use the Transformer as the representative workload for the various observations and insights, in addition to the overall comparisons.

§.§.§ Baseline

The baseline comprises two components: a baseline architecture and a baseline mapping. For the architecture baseline, we adopt Simba <cit.> and optimize it accordingly. Since the Simba accelerator is a test chip lacking DRAM, the configuration of its on-chip buffer is also impacted. Consequently, we equip it with IO dies and furnish a total DRAM bandwidth of 2 GB/s per TOPs, while the on-chip buffer configuration is determined according to the Simba-series paper <cit.>, which explores the architecture of an NVDLA-style core; in particular, the GBUF is allocated 1024 KB per core. The remaining configurations align with the specifications in the original Simba paper <cit.>. In the experiments, the baseline architecture is abbreviated as S-Arch, while the architecture explored by our Gemini framework is denoted as G-Arch. Additionally, to demonstrate the universality of Gemini, we conduct an exploration with a hardware template modified to adopt a folded-torus NoC topology, and compare the explored architecture and mapping scheme with an accelerator that employs the Tenstorrent Grayskull architecture parameters <cit.> (T-Arch) and utilizes Tangram mapping. This comparison serves to validate the effectiveness of Gemini (Sec. <ref>).
Although Simba has its own layer-sequential mapping <cit.>, this mapping performs poorly on such a high-computing-power chip <cit.>. Therefore, we choose the SOTA Tangram LP mapping strategy as the baseline mapping strategy (abbr. T-Map). The mapping scheme explored by the Gemini framework is abbreviated as G-Map.

§.§ Overall Comparison

§.§.§ Compared to S-Arch with T-Map

As illustrated in Fig. <ref>, G-Arch mapped with G-Map achieves a 46.8% reduction in delay and a 28.8% reduction in energy consumption across five DNNs and two batch sizes when compared to S-Arch mapped with T-Map. This improvement is attained with only 14.3% additional MC, demonstrating the value of Gemini's co-exploration. Moreover, merely employing G-Map on S-Arch already yields significant reductions in delay and energy consumption compared to T-Map. The improvement brought by Gemini mapping demonstrates that exploring the vast space defined by our encoding method can indeed achieve a better balance among the trade-offs introduced in Sec. <ref>.

The explored G-Arch is (2, 36, 144 GB/s, 32 GB/s, 16 GB/s, 2 MB, 1024). Compared to S-Arch, the number of chiplets in G-Arch is significantly reduced, while the NoC and D2D bandwidths are increased and the GBUF capacity is doubled. The reduction in the number of chiplets markedly decreases the proportion of D2D links among all links, resulting in a substantial reduction in network energy. With improved interconnect bandwidth and on-chip storage resources, performance and energy efficiency improvements follow naturally. Interestingly, these resource improvements cost only 14.3% more MC. This is mainly because S-Arch has too many chiplets, with an excessive share of chip area (nearly 40%) spent on D2D interfaces; in G-Arch, this area is converted into interconnect bandwidth and on-chip storage resources. It is worth noting that Simba is an academic test chip: as a pioneering work introducing chiplets into DNN accelerators, it has already showcased many features of chiplet-based inference accelerators, and given its limited chiplet area and scale (6 mm^2 <cit.>), its suboptimal chiplet granularity is understandable. Further insights on the optimal choice of chiplet granularity are discussed in Section <ref>.

§.§.§ Compared to T-Arch with T-Map

We compare the architecture and mapping scheme explored by Gemini with a 120-core monolithic accelerator that utilizes the same architectural parameters as Tenstorrent Grayskull (core array size, MAC and GBUF per core, and DRAM bandwidth) <cit.> and employs Tangram mapping. Gemini's explored architecture is (6, 60, 480 GB/s, 64 GB/s, 32 GB/s, 2 MB, 2048). Compared with T-Arch with T-Map, G-Arch with G-Map achieves 1.74× performance and 1.13× energy efficiency while reducing MC by 40.1%, which demonstrates the effectiveness and universality of Gemini.

§ DISCUSSION

§.§ Design Space Exploration

§.§.§ Chiplet Granularity

As illustrated in Fig. <ref>(a), the optimal number of chiplets for 128 TOPs and 512 TOPs, under the four different objectives, ranges from 1 to 4 and from 2 to 4, respectively. Moreover, it is evident from both DSEs that an excessively fine-grained chiplet granularity negatively impacts MC, energy consumption, and performance.
Combining this with the comparison between G-Arch and S-Arch in Sec. <ref>, we arrive at an insight: partitioning DNN accelerators into chiplets at moderate granularity can effectively strike a balance among MC, energy consumption, and performance, whereas an excessively fine-grained chiplet partition negatively impacts all three metrics (performance, energy consumption, and MC) simultaneously.

§.§.§ Core Granularity

In addition to the granularity of chiplets, the granularity of cores is an important topic in many-core architectures. Particularly in the context of LP mapping, the trade-offs associated with varying core granularities have yet to be explored; this section aims to address that gap. As shown in Fig. <ref>(b), there is a clear trend of increasing MC as the number of cores increases. This is primarily because improving overall performance and energy efficiency requires increasing the network bandwidth or buffer capacity per core; the more cores there are, the more these resources multiply, leading to greater area overhead. The energy-delay product (EDP), however, initially decreases with an increasing number of cores and rises again later. For example, in the 128 TOPs scenario, the EDP of the yellow and green points is better than that of the red and blue points. This intriguing phenomenon warrants further analysis of the underlying trade-offs.

A main advantage of LP mapping is reducing DRAM accesses, which is a key contributor to improvements in both energy consumption and performance. Under LP mapping, a larger number of cores facilitates longer pipelines, allowing more layers to be processed simultaneously. This, in turn, enables more dependency data to be transferred and consumed on-chip, reducing costly DRAM accesses. However, not all dependencies carry the same amount of data; dependencies with larger data volumes are prioritized for simultaneous processing. Therefore, as the number of simultaneously processed layers grows, the marginal reduction in DRAM accesses gradually diminishes. Moreover, as the pipeline depth increases, so does the utilization loss due to the filling and draining phases <cit.>. In summary, a longer pipeline is not necessarily better: the diminishing benefits of extending the pipeline and the escalating costs eventually reach an equilibrium, which represents the optimal pipeline length. As evidence, in Fig. <ref>, the numbers of cores of the optimal solutions under the four different objectives are, from left to right, 16, 8, 32, and 32, and the corresponding average numbers of layers processed simultaneously are 5.4, 4.1, 10.2, and 8.1. As shown in Fig. <ref> (left), although DRAM access continuously decreases as the number of cores increases from 8 to 32, the rate of reduction is much faster from 8 to 16 cores (48.0%) than from 16 to 32 cores (19.4% on average). Moreover, the average number of layers processed simultaneously by the optimal 64-core architecture (the best among all 64-core configurations under the MC· E· D objective) is 9, which does not continue to grow with the number of cores, indicating that the balance has been struck.

§.§ Reuse a Single Chiplet for Multiple Accelerators

One significant advantage of utilizing chiplets in industry lies in their reusability across accelerators of varying scales. This approach has already been successfully employed in CPUs <cit.>.
Despite its promise, this potential benefit has yet to be explored within the realm of DNN inference accelerators. This section aims to bridge this gap using the Gemini framework.

Figure <ref> reveals that the performance of accelerators constructed from Simba chiplets is poor for both the 128 TOPs and 512 TOPs configurations, with the latter faring even worse. This illustrates the failure of a "one-size-fits-all" approach, highlighting the impracticality of designing a small-scale chiplet to cover an extensively wide range of scenarios. Additionally, although using chiplets from the best 128 TOPs accelerator in the 512 TOPs accelerator yields better results than Simba, the overall performance is still unsatisfactory, and vice versa. Together with the Simba case, this illustrates that directly repurposing chiplets from one computing platform to build another is ill-advised and frequently results in less-than-ideal outcomes.

To fully harness the potential of chiplet reuse across multiple accelerators, we enhance Gemini to enable DSE for multiple computational power levels concurrently. In this approach, Gemini strategically organizes the chiplets of each lowest-computational-power architecture candidate into accelerators designed for the higher computational power requirements. The product MC· E· D over all accelerators is then calculated, and the architecture yielding the minimum product is chosen as the optimal solution (Joint Optimal in Fig. <ref>).

Comparing the Joint Optimal with the individual Optimal, we find that although the Joint Optimal still leaves a certain gap (MC· E· D is on average 34% higher), this gap is much smaller than those of the previous two approaches. This modest overhead is acceptable considering that chiplet reuse can significantly reduce NRE costs, including one-time expenses such as design, verification, IP acquisition, and tape-out. The benefits become increasingly significant in advanced process nodes or fragmented markets, since NRE costs grow non-linearly with process advancement, while a fragmented market, with its smaller volume in each segment, makes it challenging to amortize expensive NRE costs. We thus conclude that, with proper design considerations facilitated by Gemini, employing a single chiplet across multiple accelerators can be applied effectively to DNN inference accelerators.

§.§ Learn from an Actual SPM Example

In this section, we demonstrate the advantages of Gemini mapping through a practical example (Fig. <ref>) and analyze how to allocate computing resources to each layer in a multi-core chiplet accelerator. As can be observed at the bottom left of Fig. <ref>, the data volumes from layer_1 to layer_2 and from layer_2 to layer_3 are significantly higher than those of the other dependencies. The links with the most data (the reddest) in the Tangram SPM scheme are also caused by these two dependencies. For instance, since the data calculated by layer_2 needs to be sent to layer_3, under XY-routing the data calculated by the left cores of layer_2 is first sent to the right and then downwards, as illustrated in the upper part of the mesh in Fig. <ref>. The accumulation of ofmaps data computed by multiple cores of layer_2 leads to a large amount of communication data on the upper on-chip links, which is why those links are red.
During the downward transmission process, each core of layer_3 consumes a portion of the ofmaps data calculated by layer_2, causing the actual data volume to decrease gradually. This is reflected by the color of the downward links becoming lighter (except for the D2D links). However, because the D2D links have lower bandwidth than the on-chip links, the bandwidth pressure on them is higher. By automatically exploring the vast optimization space we have defined, Gemini mapping can discover solutions that are difficult to design manually. In detail, as shown in the Gemini scheme in Fig. <ref>, the red and orange links have completely disappeared. This is mainly due to two factors: (1) the total hop count decreases by 34.2%, with a 74% reduction in hop count on the intermediate D2D links; and (2) Gemini utilizes the originally relatively idle links (blue) more effectively, which is reflected in the decreased number of blue links in the figure. As a result, the overall network traffic is more evenly distributed, significantly improving network performance and, in turn, overall performance. From the actual example in Fig. <ref> and other similar instances that are not shown, we can derive an insight: the widely used clustered core allocation strategy <cit.>, where a layer is assigned to a consecutive, rectangle-like core group, may not always be a good choice. This is because the dependencies between certain layers may have particularly high data transfer requirements, and clustering the cores of each layer together cannot disperse this transfer demand, leading to network congestion.
§ CONCLUSION In this work, we introduce Gemini, a mapping and architecture co-exploration framework for large-scale DNN chiplet accelerators, which considers MC, performance, and energy efficiency. For mapping, Gemini employs a novel encoding method to define LP spatial mapping schemes and the corresponding parsing method. This approach enables us to identify the optimization space of LP SPM and unearth hidden optimization opportunities. Additionally, we devise a specially tailored SA algorithm to efficiently navigate this space. Regarding architecture, we provide a highly configurable hardware template and precise evaluators for performance, energy, and MC. Our experiments demonstrate that Gemini's explored architecture and mapping scheme significantly outperform the SOTA Simba architecture with Tangram mapping, with only a slight increase in monetary cost. We also present insightful observations about the use of chiplet technology in architecture design and DNN workload mapping within chiplet contexts.
§ ACKNOWLEDGMENT This research was partially supported by National Key R&D Program of China (2022YFB2804103), Tsinghua University Dushi Program, and Tsinghua University Talent Program, National Natural Science Foundation of China (62072262).
§ ARTIFACT APPENDIX §.§ Abstract This appendix provides guidance on accessing and using the Gemini framework (introduced in Sec. <ref>) to replicate the key results shown in Fig. <ref>. The process is divided into two steps: (1) Conducting DSE for a 72 TOPs setup (Table <ref>) to identify the optimal architecture, referred to as G-ARCH; (2) Comparing the efficacy of G-ARCH with G-MAP against the baseline architecture S-ARCH and baseline mapping T-MAP, across varying batch sizes (introduced in Sec. <ref>) and five different networks (introduced in Sec. <ref>).
The remaining experiments, which also involve similar DSE and analysis, are omitted here for the sake of brevity. §.§ Artifact check-list (meta-information) * Algorithm: Simulated Annealing, Exhaustive Search, and Dynamic Programming * Program: C++, Shell, Python (only for data collection) * Compilation: by Makefile * Hardware: It is best to have a server with more than 80 threads. * Metrics: Cost function MC^α× E^β× D^γ (MC× E× D is employed in DSE; E× D is employed in the comparisons with the baseline.) * Experiments: reproduce Fig. <ref>, including the 72 TOPs DSE and comparisons with the baseline. * How much disk space required (approximately)?: 1GB * How much time is needed to prepare the workflow (approximately)?: Several minutes at most. * How much time is needed to complete experiments (approximately)?: For DSE, about 38 minutes with 80 threads; for the comparison, about 14 minutes using 10 threads, both on an Intel Xeon Platinum 8260 server. * Publicly available?: Yes * Code licenses (if publicly available)?: BSD 3-Clause License * Archived (provide DOI)?: 10.5281/zenodo.10207613
§.§ Description §.§.§ How to access The artifact is uploaded to Zenodo: 10.5281/zenodo.10207613 §.§.§ Software dependencies A C++ compilation environment with support for the C++17 standard is required. Linux is recommended. It is recommended to use GNU make to build the program. Additionally, two Python packages, pandas and csv, are needed.
§.§ Installation For artifact evaluation, start by downloading the artifact from Zenodo:
wget -O GEMINI_AE.zip https://zenodo.org/records/10207613/files/GEMINI_AE.zip?download=1
unzip GEMINI_AE.zip
Our Gemini exploration framework is in the "GEMINI" directory. We use a Makefile to build the GEMINI framework:
cd GEMINI
make
Then the executable target will be generated at "./build/stschedule". You can install the needed Python packages using pip with the following command:
pip install -r requirements.txt
Or you can use conda to install them with the following command:
conda install --file requirements.txt
§.§ Experiment workflow §.§.§ 72 TOPs DSE Once the GEMINI framework is built, you can execute the DSE experiment and search for the best architecture with the command below:
./dse.sh
The DSE script uses 80 threads and takes around 38 minutes to run on an Intel Xeon Platinum 8260 server. While it runs, you can see the state of the current DSE. Once the process is completed, the command-line window will output the optimal architecture. You can compare this optimal architecture with the one mentioned in Sec. <ref>, which is (2, 36, 144GB/s, 32GB/s, 16GB/s, 2MB, 1024). In the "dse_log" folder, there will be a folder named with a timestamp, containing the outputs for each architecture candidate, as well as the summarized "result.csv" file. You can compare the generated "result.csv" with the "DSE_result.csv" provided in the "expected_results" folder, which is employed in our paper.
§.§.§ Comparison with Baselines Once you obtain the optimal architecture, you can proceed with the comparison of G-ARCH+G-MAP, S-ARCH+T-MAP, and S-ARCH+G-MAP for the five networks and two batch sizes to reproduce Fig. <ref> with the following command:
./compare.sh <output_dir>/best_arch.txt
Please note that "output_dir" refers to the path where the "best_arch.txt" file is stored, which is the folder named with a timestamp within the "dse_log" directory, e.g., ./dse_log/2023_11_08_10_56_52/best_arch.txt. The script uses 10 threads and takes around 14 minutes to run on an Intel Xeon Platinum 8260 server.
Then you can see the output information in the terminal window about the optimization rate of G-ARCH+G-MAP over S-ARCH+T-MAP, which will be identical to the results reported in the Abstract and Sec. <ref> (1.98× performance improvement, 1.41× energy efficiency improvement, and 14.3% MC increase). Then you can run the following command to reproduce the data behind Fig. <ref>. The Fig_5.csv file, generated in the current directory, can be validated through comparison with the Fig_5.csv file located in the expected_results folder, which contains the data used in Fig. <ref>:
python3 Fig5_reproduce.py <output_dir>/compare.csv
Please note that "output_dir" refers to the path where the "compare.csv" file is stored, which is the folder named with a timestamp within the "compare_log" directory, e.g., ./compare_log/2023_11_08_10_56_52/compare.csv.
§.§ Evaluation and expected results There are two key data results: the optimal architecture discovered by DSE (2, 36, 144GB/s, 32GB/s, 16GB/s, 2MB, 1024) and the improvement ratio of G-ARCH+G-MAP compared to S-ARCH+T-MAP (1.98× performance improvement, 1.41× energy efficiency improvement, and 14.3% MC increase). The optimal architecture and improvement ratio will be displayed in the terminal after executing the dse.sh and compare.sh scripts, respectively, and can be verified against the mentioned results. Moreover, the breakdown of energy consumption and delay data for Fig. <ref> can be verified by comparing the "Fig_5.csv" file generated in the GEMINI folder with the "Fig_5.csv" located in the "expected_results" directory.
§.§ Methodology Submission, reviewing and badging methodology: * <https://www.acm.org/publications/policies/artifact-review-badging> * <http://cTuning.org/ae/submission-20201122.html> * <http://cTuning.org/ae/reviewing-20201122.html>
"authors": [
"Jingwei Cai",
"Zuotong Wu",
"Sen Peng",
"Yuchen Wei",
"Zhanhong Tan",
"Guiming Shi",
"Mingyu Gao",
"Kaisheng Ma"
],
"categories": [
"cs.AR"
],
"primary_category": "cs.AR",
"published": "20231227064506",
"title": "Gemini: Mapping and Architecture Co-exploration for Large-scale DNN Chiplet Accelerators"
} |
Multinomial Link Models Tianmeng Wang^1, Liping Tong^2, and Jie Yang^1 ^1University of Illinois at Chicago and ^2Advocate Aurora Health January 14, 2024 ========================================================================= We propose a unified multinomial link model for analyzing categorical responses. It not only covers the existing multinomial logistic models and their extensions as a special class, but also allows the observations with NA or Unknown responses to be incorporated as a special category in the data analysis. We provide explicit formulae for computing the likelihood gradient and Fisher information matrix, as well as detailed algorithms for finding the maximum likelihood estimates of the model parameters. Our algorithms solve the infeasibility issue of existing statistical software on estimating parameters of cumulative link models. The applications to real datasets show that the proposed multinomial link models can fit the data significantly better, and the corresponding data analysis may correct the misleading conclusions due to missing data. Key words and phrases: Categorical data analysis, Cumulative link model, Feasible parameter space, Missing not at random, Multinomial logistic model, Partial proportional odds
§ INTRODUCTION We consider experiments or observational studies with categorical responses, which naturally arise in many different scientific disciplines <cit.>. When the response is binary, generalized linear models have been commonly used <cit.> to analyze the data. When responses have three or more categories, multinomial logistic models have been widely used in the literature <cit.>, which cover four kinds of logit models: baseline-category, cumulative, adjacent-categories, and continuation-ratio logit models. Following the notations of <cit.>, there are d covariates and m≥ 2 distinct covariate settings 𝐱_i = (x_i1, …, x_id)^T, i=1, …, m. At the ith setting, n_i>0 categorical responses are collected i.i.d. from a discrete distribution with J categories, which are summarized into a multinomial response 𝐘_i=(Y_i1,⋯,Y_iJ)^T ∼ Multinomial(n_i; π_i1,⋯,π_iJ), where π_ij is the probability that the response falls into the jth category at the ith setting. Throughout this paper, we assume π_ij∈ (0,1) for all i=1, …, m and j=1, …, J. The four logit models with partial proportional odds (ppo, see <cit.>) can be written as follows: log(π_ij/π_iJ) = 𝐡_j^T(𝐱_i)β_j+𝐡_c^T(𝐱_i)ζ, log((π_i1+⋯+π_ij)/(π_i,j+1+⋯+π_iJ)) = 𝐡_j^T(𝐱_i)β_j+𝐡_c^T(𝐱_i)ζ, log(π_ij/π_i,j+1) = 𝐡_j^T(𝐱_i)β_j+𝐡_c^T(𝐱_i)ζ, log(π_ij/(π_i,j+1+⋯+π_iJ)) = 𝐡_j^T(𝐱_i)β_j+𝐡_c^T(𝐱_i)ζ, where i=1, …, m, j=1, …, J-1, 𝐡_j^T(·) = (h_j1(·), …, h_jp_j(·)) are known predictor functions associated with the parameters β_j = (β_j1, …, β_jp_j)^T for the jth response category, and 𝐡_c^T(·) = (h_1(·), …, h_p_c(·)) are known predictor functions associated with the parameters ζ = (ζ_1, …, ζ_p_c)^T that are common for all categories. As special cases, 𝐡_j^T(𝐱_i) ≡ 1 leads to proportional odds (po) models assuming the same parameters for different categories <cit.>, and 𝐡_c^T(𝐱_i) ≡ 0 leads to nonproportional odds (npo) models allowing all parameters to change across categories <cit.>. The corresponding expressions for po and npo models could be found in the Supplementary Materials (Sections S.7 and S.8) of <cit.>.
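To make the four model structures concrete, the following short numpy sketch, which is our own illustration rather than code from this paper, evaluates the left-hand sides of the four logit transforms for a single probability vector; the function and variable names are ours.

import numpy as np

def four_logits(pi):
    """Left-hand sides of the four logit models for one probability
    vector pi = (pi_1, ..., pi_J) with positive entries summing to 1."""
    pi = np.asarray(pi, dtype=float)
    baseline = np.log(pi[:-1] / pi[-1])         # log(pi_j / pi_J)
    cum = np.cumsum(pi)[:-1]                    # pi_1 + ... + pi_j
    cumulative = np.log(cum / (1.0 - cum))      # log(head sum / tail sum)
    adjacent = np.log(pi[:-1] / pi[1:])         # log(pi_j / pi_{j+1})
    tail = np.cumsum(pi[::-1])[::-1]            # pi_j + ... + pi_J
    continuation = np.log(pi[:-1] / tail[1:])   # log(pi_j / (pi_{j+1}+...+pi_J))
    return baseline, cumulative, adjacent, continuation

print(four_logits([0.2, 0.3, 0.5]))

For J = 3, the four transforms differ only in which sums of category probabilities enter the numerator and denominator, which is exactly the distinction the ρ_ij notation below formalizes.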
In the literature, the baseline-category logit model (<ref>) is also known as the (multiclass) logistic regression model <cit.>, which is commonly used for nominal responses that do not have a natural ordering <cit.>. Models (<ref>), (<ref>), and (<ref>) are typically used for ordinal or hierarchical responses with either a natural ordering or a hierarchical structure. According to <cit.>, however, even for nominal responses, one can use the Akaike Information Criterion (AIC, <cit.>) or Bayesian information criterion (BIC, <cit.>) to choose a working order of the response categories, treat the responses as ordinal ones, and apply models (<ref>), (<ref>), and (<ref>), which may significantly improve the prediction accuracy. The four logit models (<ref>), (<ref>), (<ref>), (<ref>) can be rewritten into a unified form <cit.> 𝐂^Tlog(𝐋π̅_i)=𝐗_iθ, i=1,⋯,m, where 𝐂 is a J×(2J-1) constant matrix, 𝐋 is a (2J-1)× J constant matrix depending on (<ref>), (<ref>), (<ref>), and (<ref>), π̅_i = (π_i1, …, π_iJ)^T, 𝐗_i is a J× p matrix depending on 𝐡_j^T(𝐱_i), j=1, …, J-1 and 𝐡_c^T(𝐱_i), p=p_1 + ⋯ + p_J-1 + p_c, and θ = (β_1^T, …, β_J-1^T, ζ^T)^T. Along another line in the literature, cumulative logit models (<ref>) have been extended to cumulative link models or ordinal regression models <cit.>. In our notations, the cumulative link models can be written as g(π_i1+⋯ + π_ij) = 𝐡_j^T(𝐱_i)β_j+𝐡_c^T(𝐱_i)ζ, where the link function g(ρ) could be logit, probit, log-log, complementary log-log, or cauchit (see Table <ref>). Note that the cumulative link model (<ref>) with the logit link is the same as the cumulative logit model (<ref>). The baseline-category logit model (<ref>) has been extended with the probit link, known as the multinomial probit model <cit.>. In our notations, g(π_ij/(π_ij+π_iJ)) = 𝐡_j^T(𝐱_i)β_j+𝐡_c^T(𝐱_i)ζ, where the link function g(ρ) could be logit or probit. Model (<ref>) with the logit link is the same as the baseline-category logit model (<ref>). Examples could be found in <cit.>. The continuation-ratio logit model (<ref>) has been extended with the complementary log-log link by <cit.> for data analysis. In our notations, g(π_ij/(π_ij+⋯ + π_iJ)) = 𝐡_j^T(𝐱_i)β_j+𝐡_c^T(𝐱_i)ζ, where the link function g(ρ) could be logit or complementary log-log. Note that model (<ref>) with the logit link is the same as the continuation-ratio logit model (<ref>). Although so many models have been proposed or extended for categorical responses, they are still not flexible enough for many real applications (see Section <ref>). In this paper, we propose a unified categorical model, called the multinomial link model, which not only covers all the models mentioned above, but also allows more flexible model structures, mixed link functions, and response categories in multiple groups. More specifically, it allows the observations with NA or unknown responses to be incorporated as a special category and corrects misleading conclusions due to missing data.
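Before defining the general model, it may help to see how a cumulative link model such as (<ref>) determines the category probabilities for one setting. The sketch below is a minimal illustration of ours, assuming a common strictly increasing link (here probit via scipy); the function name and inputs are our own and not part of any package described in this paper.

import numpy as np
from scipy.stats import norm

def cum_link_probs(eta, inv_link=norm.cdf):
    """Category probabilities implied by g(pi_1 + ... + pi_j) = eta_j,
    j = 1, ..., J-1, at one covariate setting; eta must be increasing
    (the feasibility requirement) when inv_link is increasing."""
    gamma = np.asarray([inv_link(e) for e in eta])   # cumulative probabilities
    if np.any(np.diff(gamma) <= 0):
        raise ValueError("infeasible eta: cumulative probabilities not increasing")
    return np.diff(np.concatenate(([0.0], gamma, [1.0])))  # pi_1, ..., pi_J

print(cum_link_probs([-1.0, 0.0, 1.2]))   # probit link, J = 4

The increasing-eta requirement in this sketch previews the feasibility issue formalized in Section <ref>.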
§ MULTINOMIAL LINK MODELS Inspired by the unified form (<ref>) of multinomial link models, in this section, we propose a more flexible multinomial model, called the multinomial link model, as well as its three special classes: mixed-link models allowing separate link functions for different categories, two-group models allowing multi-grouped response categories, and po-npo mixture models allowing more flexible model structures.
§.§ Multinomial link models in a unified form In matrix form, the multinomial link model can be written as 𝐠((𝐋π_i)/(𝐑π_i + π_iJ𝐛))=η_i=𝐗_iθ, where 𝐠 = (g_1, …, g_J-1)^T is a vector of J-1 link functions, 𝐋 and 𝐑 are (J-1)× (J-1) constant matrices, 𝐛∈ℝ^J-1 is a constant vector, π_i = (π_i1, …, π_i,J-1)^T ∈ℝ^J-1, π_iJ = 1 - ∑_j=1^J-1π_ij, η_i = (η_i1, …, η_i,J-1)^T ∈ℝ^J-1, 𝐗_i = (𝐟_1(𝐱_i), …, 𝐟_J-1(𝐱_i))^T ∈ℝ^(J-1)× p with 𝐟_j(𝐱_i) = (f_j1(𝐱_i), …, f_jp(𝐱_i))^T, and the regression parameter vector θ=(θ_1, …, θ_p)^T consists of p unknown parameters in total. Note that the vector 𝐠 of link functions applies to the ratio of two vectors component-wise, which can be denoted as 𝐠((𝐋π_i) ⊘ (𝐑π_i + π_iJ𝐛)) with the notation of element-wise division "⊘" (also known as Hadamard division). That is, if we denote 𝐋= (𝐋_1, …, 𝐋_J-1)^T, 𝐑 = (𝐑_1, …, 𝐑_J-1)^T and 𝐛 = (b_1, …, b_J-1)^T, then the multinomial link model (<ref>) can be written in its equation form g_j(𝐋_j^T π_i/(𝐑_j^T π_i + π_iJ b_j)) = η_ij = 𝐟_j^T(𝐱_i)θ, j=1, …, J-1. To simplify the notations, we define ρ_ij = 𝐋_j^T π_i/(𝐑_j^T π_i + π_iJ b_j), j=1, …, J-1, and thus ρ_i = (ρ_i1, …, ρ_i, J-1)^T = (𝐋π_i)⊘(𝐑π_i + π_iJ𝐛). Note that the notations of π_i, η_i, 𝐗_i are different from those in <cit.>. Special classes of the multinomial link models (<ref>) or (<ref>) with explicit 𝐋, 𝐑, and 𝐛 can be found in Sections <ref> and <ref> in the Supplementary Materials.
§.§ Link functions and mixed-link models For multinomial link models (<ref>) or (<ref>), we need that (i) the link functions g_1, …, g_J-1 are well defined from ρ∈ (0,1) to η∈ (-∞, ∞), which are part of the model assumptions. In this paper, we also require that (ii) g_1^-1, …, g_J-1^-1 exist and are differentiable from η∈ (-∞, ∞) to ρ∈ (0,1); and (iii) (g_j^-1)'(η) > 0 for all η∈ (-∞, ∞) and j=1, …, J-1. In the literature, many link functions have been proposed. For example, logit, probit, log-log, and complementary log-log links were used by <cit.> for binary responses; the cauchit link can be traced back to <cit.> for distributions with many extreme values; the t link was first suggested by <cit.> and is also connected to robit regression <cit.> and the Gosset link family <cit.>; the Pregibon link family <cit.> was introduced as a two-parameter generalization of the logit link. We skip the Pregibon link in this paper since its image usually does not cover the whole real line. In Table <ref> we list possible link functions considered for multinomial link models. It should be noted that the t link family g(ρ) = F_ν^-1(ρ) incorporates logit, which could be approximated by F_7 according to <cit.>, and probit as a limit when ν goes to ∞. Note that F_ν and f_ν are the cumulative distribution function (cdf) and probability density function (pdf) of the t-distribution with degrees of freedom ν, respectively. In this section, we introduce a special class of the multinomial link models (<ref>), which allows separate links for different categories. We will show later in Section <ref> that a multinomial model with mixed links can fit some real data much better.
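In practice, each candidate link only needs to supply g_j^-1 and its derivative (g_j^-1)', which is all the likelihood computations below require. The following sketch, under the standard textbook definitions of these links and with our own naming, collects four of them; it is an illustration of ours, not code attached to this paper.

import numpy as np
from scipy.stats import norm
from scipy.special import expit

# (inverse link, derivative of inverse link) for some candidate links;
# each inverse maps (-inf, inf) into (0, 1) with strictly positive
# derivative, matching requirements (ii) and (iii) above.
LINKS = {
    "logit":   (expit,                           lambda e: expit(e) * (1 - expit(e))),
    "probit":  (norm.cdf,                        norm.pdf),
    "loglog":  (lambda e: np.exp(-np.exp(-e)),   lambda e: np.exp(-e - np.exp(-e))),
    "cloglog": (lambda e: -np.expm1(-np.exp(e)), lambda e: np.exp(e - np.exp(e))),
    "cauchit": (lambda e: 0.5 + np.arctan(e) / np.pi,
                lambda e: 1.0 / (np.pi * (1.0 + e ** 2))),
}

eta = np.linspace(-2, 2, 5)
for name, (ginv, dginv) in LINKS.items():
    print(name, np.round(ginv(eta), 3), np.round(dginv(eta), 3))

A mixed-link model simply assigns one such pair to each of the J-1 category equations.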
Mixed-link models with ppo Inspired by the extended models (<ref>), (<ref>), (<ref>), we can extend models (<ref>), (<ref>), (<ref>), and (<ref>) by allowing mixed links, leading to the following mixed-link models with ppo: g_j(ρ_ij) = 𝐡_j^T(𝐱_i)β_j+𝐡_c^T(𝐱_i)ζ, where i=1, …, m, j=1, …, J-1, g_1, …, g_J-1 are given link functions, and ρ_ij = {[ π_ij/(π_ij + π_iJ) (baseline-category); π_i1 + ⋯ + π_ij (cumulative); π_ij/(π_ij + π_i,j+1) (adjacent-categories); π_ij/(π_ij + ⋯ + π_iJ) (continuation-ratio) ]. The mixed-link model (<ref>)+(<ref>) actually covers all models mentioned in Section <ref>. It can also be verified that it is a special class of the multinomial link models (<ref>) or (<ref>) (see Section <ref> of the Supplementary Materials).
§.§ NA category and two-group models In practice, it is fairly common to encounter observations with NA or Unknown responses. If the missing mechanism is not at random, the analysis after removing those observations could be misleading (see, for example, <cit.>). Inspired by <cit.>, one may treat NA as a special category and use AIC or BIC to choose a working order of the response categories including NA. Nevertheless, for some real applications, one working order of all response categories may not fit the data well (see Section <ref>). In this section, we introduce a special class of the multinomial link models (<ref>) or (<ref>), which allows the response categories to consist of two overlapping groups, called a two-group model. One group of k + 1 ≥ 2 categories is controlled by a baseline-category mixed-link model and the other group of J-k ≥ 3 categories is controlled by a cumulative, adjacent-categories, or continuation-ratio mixed-link model. The two groups are assumed to share the same category so that all categories are connected. A special case of two-group models is that the two groups share the same baseline category J (see Example <ref> in the Supplementary Materials). A general two-group model can be described as follows. Two-group models with ppo In this model, we assume that the response categories consist of two groups. The first group {1, …, k, s} is controlled by a baseline-category mixed-link model with the baseline category s, where 1≤ k ≤ J-3 and k+1 ≤ s ≤ J, while the other group {k+1, …, J} is controlled by a cumulative, adjacent-categories, or continuation-ratio mixed-link model with J as the baseline category. The two groups share the category s to connect all the J categories. The two-group model with ppo is defined by equation (<ref>) plus ρ_ij = {[ π_ij/(π_ij + π_is), j=1, …, k; π_i,k+1 + ⋯ + π_ij/(π_i,k+1 + ⋯ + π_iJ), j=k+1, …, J-1 (cumulative); π_ij/(π_ij+π_i,j+1), j=k+1, …, J-1 (adjacent-categories); π_ij/(π_ij + ⋯ + π_iJ), j=k+1, …, J-1 (continuation-ratio) ]. The two-group model (<ref>)+(<ref>) consists of three classes, baseline-cumulative, baseline-adjacent, and baseline-continuation mixed-link models with ppo, which are all special cases of the multinomial link models (<ref>) or (<ref>) (see Section <ref> of the Supplementary Materials for justifications and explicit expressions of 𝐋, 𝐑, and 𝐛). Other structures, such as cumulative-continuation (two-group), three-group or multi-group models, can be defined similarly, and they still belong to the multinomial link models.
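To illustrate how the two groups interlock, the sketch below computes ρ_i1, …, ρ_i,J-1 for one of the three classes, the baseline-continuation variant, from a probability vector. It is a sketch of ours under the definitions above (the paper's indices are 1-based, the code's are 0-based); the function name and the example numbers are hypothetical.

import numpy as np

def two_group_rho(pi, k, s):
    """rho_1, ..., rho_{J-1} for a baseline-continuation two-group model:
    categories {1, ..., k, s} follow a baseline-category structure with
    baseline s; categories {k+1, ..., J} follow a continuation-ratio
    structure with baseline J.  pi is (pi_1, ..., pi_J); k, s are 1-based."""
    pi = np.asarray(pi, dtype=float)
    J = pi.size
    rho = np.empty(J - 1)
    rho[:k] = pi[:k] / (pi[:k] + pi[s - 1])      # baseline-category part
    tail = np.cumsum(pi[::-1])[::-1]             # pi_j + ... + pi_J
    rho[k:] = pi[k:J - 1] / tail[k:J - 1]        # continuation-ratio part
    return rho

# J = 5, first group {1, 2, 4}, second group {3, 4, 5}: k = 2, s = 4
print(two_group_rho([0.1, 0.2, 0.25, 0.15, 0.3], k=2, s=4))

Here the shared category s = 4 appears in both groups, which is what connects all five categories in a single likelihood.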
§.§ Partially equal coefficients and po-npo mixture models If we check the right-hand sides of models (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), ppo <cit.> is currently the most flexible model structure, which allows the parameters of some predictors to be the same across different categories (the po component 𝐡_c^T(𝐱_i)ζ), while the parameters of some other predictors are different across the categories (the npo component 𝐡_j^T(𝐱_i)β_j). For some applications (see Section <ref>), nevertheless, it could be significantly better if we allow some (but not all) categories to share the same regression coefficients of some particular predictors. For example, the first and second categories may share the same regression coefficients of x_i1 and x_i2 (that is, follow a po model), while the third and fourth categories have their own coefficients of x_i1 and x_i2 (that is, follow an npo model). The corresponding model matrix can be written as 𝐗_i= [ 1 0 0 0 0 0 0 0 x_i1 x_i2; 0 1 0 0 0 0 0 0 x_i1 x_i2; 0 0 1 x_i1 x_i2 0 0 0 0 0; 0 0 0 0 0 1 x_i1 x_i2 0 0 ] with the regression parameters θ=(β_11,β_21,β_31,β_32,β_33,β_41,β_42,β_43,ζ_1,ζ_2)^T. Such an 𝐗_i does not belong to ppo models. In this section, we introduce a special class of the multinomial link models (<ref>) or (<ref>), called po-npo mixture models, which allows the regression coefficients/parameters for a certain predictor to be partially equal, that is, equal across some, but not all, categories. Po-npo mixture model We assume model (<ref>) with the model matrix 𝐗_i= [ 𝐡_1^T(𝐱_i) 𝐡_c1^T(𝐱_i); ⋱ ⋮; 𝐡_J-1^T(𝐱_i) 𝐡_c,J-1^T(𝐱_i) ]∈ℝ^(J-1)× p, where 𝐡_cj (𝐱_i) = (h_cj1(𝐱_i), …, h_cjp_c(𝐱_i))^T are known functions to determine the p_c predictors associated with the jth category. If we write θ = (β_1^T, …, β_J-1^T, ζ^T)^T, the po-npo mixture model can be written as g_j(ρ_ij) = 𝐡_j^T(𝐱_i)β_j+𝐡_cj^T(𝐱_i)ζ, where ρ_ij is given by (<ref>). A special case with 𝐡_c1(𝐱_i) =⋯ = 𝐡_c,J-1(𝐱_i) ≡𝐡_c (𝐱_i) leads to the classical ppo model (see (<ref>) in the Supplementary Materials).
§ PARAMETER, INFORMATION MATRIX, AND SELECTION §.§ Feasible parameter space It is known that the parameter estimates θ̂ found by R or SAS for cumulative logit models (<ref>) could be infeasible. That is, π_ij(θ̂)<0 or π_ij(θ̂) ∉ (0,1) for some i=1, …, m and j=1, …, J. For example, <cit.> reported in their simulation studies that 44 out of 1000 fitted parameters by the SAS PROC LOGISTIC command for cumulative logit models were not feasible. In this section, we provide explicit formulae for π_ij's as functions of parameters θ and 𝐱_i's under the multinomial link models (<ref>) or (<ref>), and characterize the space of feasible parameters for searching parameter estimates. Given the parameters θ∈ℝ^p and a setting 𝐱_i ∈ℝ^d, η_ij = 𝐟_j^T(𝐱_i) θ∈ (-∞, ∞) according to (<ref>), and ρ_ij = g_j^-1 (η_ij) ∈ (0,1) as defined in (<ref>), j=1, …, J-1. To generate multinomial responses under given link functions, we require π_ij∈ (0, 1), j=1, …, J. To solve π_i = (π_i1, …, π_i,J-1)^T and π_iJ from ρ_i = (ρ_i1, …, ρ_i,J-1)^T, we denote 𝐃_i = diag(ρ_i^-1) 𝐋 - 𝐑∈ℝ^(J-1)× (J-1), where diag(ρ_i^-1) = diag{ρ_i1^-1, …, ρ_i,J-1^-1}∈ℝ^(J-1)× (J-1). The explicit formulae are provided as follows. Suppose ρ_ij∈ (0, 1), j=1, …, J-1; 𝐃_i^-1 exists; and all the J-1 coordinates of 𝐃_i^-1𝐛 are positive.
Then model (<ref>) implies a unique π_i as a function of ρ_i: π_i = 𝐃_i^-1𝐛/(1 + 1_J-1^T𝐃_i^-1𝐛), as well as π_iJ = (1 + 1_J-1^T𝐃_i^-1𝐛)^-1, such that π_ij∈ (0,1) for all j=1, …, J, where 1_J-1 is a vector consisting of J-1 ones. The proof of Lemma <ref>, as well as other proofs, is relegated to Section <ref> of the Supplementary Materials. According to Lemma <ref>, it is sufficient for π_ij∈ (0, 1) to let 𝐃_i^-1 exist and all the J-1 coordinates of 𝐃_i^-1𝐛 be positive. Given the data with the observed set of distinct settings {𝐱_1, …, 𝐱_m}, we define the feasible parameter space of model (<ref>) or (<ref>) as Θ = {θ∈ℝ^p | 𝐃_i^-1 exists and all J-1 coordinates of 𝐃_i^-1𝐛 are positive, i=1, …, m }. Note that Θ itself is either ℝ^p or an open subset of ℝ^p. Nevertheless, for typical applications, we may use a bounded subset of Θ, which is expected to cover the true θ as an interior point, as the working parameter space to achieve desired theoretical properties. Consider the mixed-link models (<ref>)+(<ref>) with the observed set of distinct settings {𝐱_1, …, 𝐱_m} and parameters θ = (β_1^T, …, β_J-1^T, ζ^T)^T ∈ℝ^p. (i) For baseline-category, adjacent-categories, and continuation-ratio mixed-link models, the feasible parameter space Θ = ℝ^p. (ii) For cumulative mixed-link models, Θ = {θ∈ℝ^p |ρ_i1 < ⋯ < ρ_i,J-1, i=1, …, m}, where ρ_ij = g_j^-1(𝐡_j^T(𝐱_i)β_j+𝐡_c^T(𝐱_i)ζ). (iii) For cumulative mixed-link models, if g_1 = ⋯ = g_J-1 = g and g is strictly increasing, then Θ = {θ∈ℝ^p |𝐡_j^T(𝐱_i) β_j < 𝐡_j+1^T (𝐱_i) β_j+1, j=1, …, J-2, i=1, …, m}. Theorem <ref> shows that the feasible parameter space defined in (<ref>) for the multinomial link models (<ref>) or (<ref>) is consistent with the one for the classical cumulative link models (<ref>), which require all π_ij∈ (0,1). Note that if g is strictly decreasing in Theorem <ref>(iii), then Θ = {θ∈ℝ^p |𝐡_j^T(𝐱_i) β_j > 𝐡_j+1^T (𝐱_i) β_j+1, j=1, …, J-2, i=1, …, m}. For general multinomial link models, especially with a cumulative component involved, for example, in a two-group model (<ref>)+(<ref>), one can always use (<ref>) to validate the feasibility of θ.
§.§ Fisher information matrix There are many reasons to calculate the Fisher information matrix 𝐅(θ), for example, when finding the maximum likelihood estimate (MLE) θ̂ of θ using the Fisher scoring method (see Section <ref>), constructing confidence intervals of θ (see Section <ref>), or finding optimal designs <cit.>. Inspired by Theorem 2.1 in <cit.> for multinomial logistic models (<ref>), in this section, we provide explicit formulae for calculating 𝐅(θ), θ∈Θ for general multinomial link models (<ref>) or (<ref>). Suppose for distinct 𝐱_i, i=1,⋯,m, we have independent multinomial responses 𝐘_i=(Y_i1,⋯,Y_iJ)^T ∼ Multinomial(n_i; π_i1,⋯,π_iJ), where n_i=∑_j=1^J Y_ij. Then the log-likelihood for the multinomial model is l(θ) = log∏_i=1^m n_i!/(Y_i1!⋯Y_iJ!)·π_i1^Y_i1⋯π_iJ^Y_iJ = ∑_i=1^m 𝐘_i^Tlogπ̅_i + ∑_i=1^m log (n_i!) - ∑_i=1^m∑_j=1^J log (Y_ij!), where π̅_i = (π_i1, …, π_iJ)^T = (π_i^T, π_iJ)^T and logπ̅_i=(logπ_i1, ⋯,logπ_iJ)^T. Using matrix differentiation formulae (see, for example, Seber (2008, Chapter 17)), we obtain the score vector ∂ l/∂θ^T and the Fisher information matrix for general model (<ref>) as follows (please see Section <ref> of the Supplementary Materials for more details). Consider the multinomial link model (<ref>) with distinct settings 𝐱_1, …, 𝐱_m and independent response observations. Suppose θ∈Θ.
Then the score vector ∂ l/∂θ^T = ∑_i=1^m𝐘_i^T diag(π̅_i)^-1∂π̅_i/∂θ^T satisfies E(∂ l/∂θ^T)=0, and the Fisher information matrix 𝐅=∑_i=1^m n_i𝐅_i, where 𝐅_i= (∂π̅_i/∂θ^T)^T diag(π̅_i)^-1∂π̅_i/∂θ^T, ∂π̅_i/∂θ^T = 𝐄_i 𝐃_i^-1· diag(𝐋π_i) · diag(ρ_i^-2) · diag((𝐠^-1)'(η_i)) ·𝐗_i, 𝐄_i= [[ I_J-1; 0_J-1^T ]] - π̅_i 1_J-1^T∈ℝ^J× (J-1), diag((𝐠^-1)'(η_i)) = diag{(g_1^-1)'(η_i1), …, (g_J-1^-1)'(η_i, J-1)}∈ℝ^(J-1)× (J-1), I_J-1 is the identity matrix of order J-1, 0_J-1 is a vector of zeros in ℝ^J-1, and 1_J-1 is a vector of ones in ℝ^J-1. Theorem <ref> covers the conclusion of Theorem 2.1 in <cit.> as a special case.
§.§ Positive definiteness of Fisher information matrix In this section, we explore when the Fisher information matrix 𝐅 is positive definite, which is critical not only for the existence of 𝐅^-1, but also for the existence of unbiased estimates of a feasible parameter θ with finite variance <cit.> and relevant optimal design problems <cit.>. To investigate the rank of 𝐅, we denote a J× (J-1) matrix 𝐂_i =(𝐜_i1, …, 𝐜_i,J-1) = 𝐄_i 𝐃_i^-1· diag(𝐋π_i) · diag(ρ_i^-2) · diag((𝐠^-1)'(η_i)), where 𝐜_i1, …, 𝐜_i,J-1∈ℝ^J are its column vectors. Then ∂π̅_i/∂θ^T = 𝐂_i 𝐗_i according to (<ref>). We further denote a (J-1)× (J-1) matrix 𝐔_i =(u_st(π_i))_s,t=1,…, J-1 = 𝐂_i^T diag (π̅_i)^-1𝐂_i, whose (s,t)th entry u_st(π_i) = 𝐜_is^T diag (π̅_i)^-1𝐜_it. Then 𝐅_i = 𝐗_i^T 𝐔_i 𝐗_i according to Theorem <ref>. Suppose θ∈Θ. Then rank(𝐔_i) = J-1 and rank(𝐅_i) = rank(𝐗_i). We further define an m(J-1)× m(J-1) matrix 𝐔 = (𝐔_st)_s,t=1,…, J-1 with 𝐔_st=diag{n_1 u_st(π_1), …, n_m u_st(π_m)}. Recall that the model matrix for general multinomial link models (<ref>) is 𝐗_i = (𝐟_1(𝐱_i), …, 𝐟_J-1 (𝐱_i))^T = (f_jl(𝐱_i))_j=1, …, J-1; l=1, …, p. To explore the positive definiteness of 𝐅, we define a p× m(J-1) matrix 𝐇 = (𝐟_1(𝐱_1), …, 𝐟_1(𝐱_m), …, 𝐟_J-1(𝐱_1), …, 𝐟_J-1(𝐱_m)) =[[𝐅_11^T ⋯ 𝐅_J-1,1^T; ⋮ ⋯ ⋮;𝐅_1p^T ⋯ 𝐅_J-1,p^T ]], where 𝐅_jl = (f_jl(𝐱_1), …, f_jl(𝐱_m))^T ∈ℝ^m. General ppo model For ppo models including Examples <ref> and <ref>, 𝐇= ([ 𝐇_1; ⋱; 𝐇_J-1; 𝐇_c ⋯ 𝐇_c ]) ∈ℝ^p× m(J-1), where 𝐇_j= (𝐡_j(𝐱_1), ⋯, 𝐡_j(𝐱_m)) ∈ℝ^p_j × m, and 𝐇_c = (𝐡_c(𝐱_1), ⋯, 𝐡_c(𝐱_m)) ∈ℝ^p_c × m. Example <ref> (continued) For po-npo mixture models, 𝐇= ([ 𝐇_1; ⋱; 𝐇_J-1; 𝐇_c1 ⋯ 𝐇_c,J-1 ]) ∈ℝ^p× m(J-1), where 𝐇_j= (𝐡_j(𝐱_1), …, 𝐡_j(𝐱_m)) ∈ℝ^p_j × m, and 𝐇_cj = (𝐡_cj(𝐱_1), …, 𝐡_cj(𝐱_m)) ∈ℝ^p_c × m. For the multinomial link model (<ref>) with n_i>0 independent observations at distinct setting 𝐱_i, i=1, …, m, its Fisher information matrix 𝐅 = 𝐇𝐔𝐇^T. Since (g_j^-1)'(η_ij) ≠ 0 for all i=1, …, m and j=1, …, J-1, 𝐅 at a feasible parameter θ is positive definite if and only if 𝐇 is of full row rank. According to Theorem <ref>, the positive definiteness of 𝐅 at a feasible θ depends only on the predictor functions and the distinct settings 𝐱_1, …, 𝐱_m. From a design point of view, one needs to collect observations from a large enough set {𝐱_1, …, 𝐱_m} of distinct experimental settings. From a data analysis point of view, given the data with the set {𝐱_1, …, 𝐱_m} of distinct settings, there is a limit of model complexity beyond which not all parameters are estimable.
§.§ Confidence intervals and hypothesis tests for θ In this paper, we use maximum likelihood for estimating θ.
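Computationally, evaluating the log-likelihood only requires the map from θ to π_i given by the lemma above. The following Python sketch, with our own function and argument names rather than code from this paper, makes that pipeline explicit; the Fisher-scoring algorithms in the next section build the search for the MLE on top of exactly this evaluation.

import numpy as np

def log_likelihood(theta, X, Y, L, R, b, inv_links):
    """Multinomial log-likelihood (up to its additive constant): for each
    setting i, eta_i = X_i theta, rho_ij = g_j^{-1}(eta_ij), and pi_i is
    recovered from rho_i via the lemma above.  X is a list of (J-1) x p
    matrices, Y an m x J count matrix; returns -inf for infeasible theta."""
    ll = 0.0
    for Xi, yi in zip(X, Y):
        eta = Xi @ theta
        rho = np.array([g(e) for g, e in zip(inv_links, eta)])
        D = np.diag(1.0 / rho) @ L - R
        try:
            u = np.linalg.solve(D, b)              # D_i^{-1} b
        except np.linalg.LinAlgError:
            return -np.inf                         # D_i^{-1} does not exist
        if np.any(u <= 0):                         # infeasible theta
            return -np.inf
        pi = np.append(u, 1.0) / (1.0 + u.sum())   # (pi_i, pi_iJ)
        ll += yi @ np.log(pi)
    return ll

Returning -inf for infeasible θ mirrors the role of the feasible parameter space Θ: a line search along the scoring direction simply rejects steps that leave Θ.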
The MLE θ̂ maximizes the likelihood function L(θ), or equivalently the log-likelihood function l(θ) (see, for example, Section 1.3.1 in <cit.> for justifications on adopting the MLE). Under regularity conditions (see, for example, Section 5f in <cit.> or Chapter 18 in <cit.>), asymptotically θ̂ is unbiased and normally distributed with Cov(θ̂) = 𝐅(θ)^-1. Denoting 𝐅(θ̂)^-1 = (σ̂_ij)_i,j=1,…, p, we construct approximate confidence intervals θ_i ∈ (θ̂_i - z_α/2√(σ̂_ii), θ̂_i + z_α/2√(σ̂_ii)) with θ = (θ_1, …, θ_p)^T and θ̂ = (θ̂_1, …, θ̂_p)^T, where z_α/2 is the upper (α/2)th quantile of N(0,1). To test H_0: θ = θ_0, we may use the Wald statistic <cit.>, which asymptotically is W = (θ̂ - θ_0)^T 𝐅(θ̂) (θ̂ - θ_0) H_0∼χ^2_p. Suppose θ = (θ_0^T, θ_1^T)^T with θ_0 ∈ℝ^r and θ_1 ∈ℝ^p-r. To test H_0: θ_0 = 0, we may use the likelihood-ratio test <cit.> Λ = -2 log(max_θ_1 L(θ_1)/max_θ L(θ)) H_0∼χ^2_r asymptotically, which can be used before removing more than one predictor simultaneously for variable selection purposes.
§.§ Model selection Suppose we are given data (𝐱_i, 𝐲_i), i=1,⋯,m, where 𝐲_i=(y_i1,⋯,y_iJ)^T satisfies n_i=∑_j=1^J y_ij, and the MLE θ̂∈ℝ^p has been obtained. Following Lemma <ref>, we obtain π̂_ij = π_ij(θ̂), i=1, …, m, j=1, …, J. Then the maximized log-likelihood l(θ̂) = ∑_i=1^m log (n_i!) + ∑_i=1^m 𝐲_i^Tlogπ̂̅̂_i - ∑_i=1^m∑_j=1^J log (y_ij!), where logπ̂̅̂_i=(logπ̂_i1, ⋯,logπ̂_iJ)^T. We may use AIC or BIC to choose the most appropriate model (see, for example, <cit.>, for a good review). More specifically, AIC = -2/n· l(θ̂) + 2·p/n and BIC = -2· l(θ̂) + (log n)· p, where n = ∑_i=1^m n_i. Note that a smaller AIC or BIC value indicates a better model.
§ FORMULAE AND ALGORITHMS To facilitate the readers, we provide a summary of notations for specifying a multinomial link model in Section <ref> of the Supplementary Materials. In this section, we provide detailed formulae and algorithms for finding the MLE of θ for a general multinomial link model (<ref>) or (<ref>), given a dataset in its summarized form {(𝐱_i, 𝐲_i) | i=1, …, m}, where 𝐱_i ∈ℝ^d, i=1, …, m are distinct settings, and 𝐲_i = (y_i1, …, y_iJ)^T, i=1, …, m are vectors of nonnegative integers with ∑_j=1^J y_ij = n_i >0, i=1, …, m. We also provide a backward selection algorithm for finding the most appropriate po-npo mixture model.
§.§ Fisher scoring method for estimating θ For numerically finding the MLE θ̂, we adopt the Fisher scoring method described, for example, in <cit.> or Chapter 14 in <cit.>. That is, if we have θ^(t) at the tth iteration, we obtain θ^(t+1) = θ^(t) + δ(𝐅(θ^(t)))^-1∂ l/∂θ^T(θ^(t)) at the (t+1)th iteration, where δ∈ (0,1] is a step length chosen to let l(θ^(t+1)) > l(θ^(t)) and π_ij(θ^(t+1)) ∈ (0,1) for all i=1, …, m and j=1, …, J, 𝐅(θ^(t)) is the Fisher information matrix at θ = θ^(t), and ∂ l/∂θ^T(θ^(t)) is expression (<ref>) evaluated at θ = θ^(t). Theoretical justifications and more discussions on the Fisher scoring method could be found in <cit.>, <cit.>, and references therein. It can be verified that Algorithm <ref> is valid. First of all, according to, for example, Section 14.3 of <cit.>, l(θ^*) > l(θ^(t)) for large enough s or small enough δ^s if 𝐅(θ^(t)) is positive definite.
Secondly, since Θ is open (see Section <ref>), θ^(t)∈Θ must be an interior point, and thus θ^* ∈Θ for large enough s. It should also be noted that it could be tricky to calculate 𝐅(θ^(t))^-1 numerically while keeping its positive definiteness, especially when some eigenvalues of 𝐅(θ^(t)) are tiny, which should be continuously monitored.
§.§ Finding initial parameter values Step 1^∘ of Algorithm <ref> is critical and nontrivial for multinomial link models when a cumulative component is involved. In this section we first provide Algorithm <ref> for finding a possible initial estimate θ^(0) of θ, which is expected to be not far from the MLE θ̂. One needs to use (<ref>) to check whether the θ^(0) obtained by Algorithm <ref> is feasible. If not, we provide Algorithm <ref> to pull θ^(0) back to the feasible domain Θ. Essentially, Algorithm <ref> finds an initial estimate θ^(0) of θ, which approximately leads to π̂_ij = (y_ij+1)/(n_i+J). Such a θ^(0) is computationally convenient but may not be feasible. For typical applications, model (<ref>) has an intercept for each j, that is, f_jl_j(𝐱_i)≡ 1 for some l_j ∈{1, …, p}, which indicates θ_l_j to be the intercept of the jth category. Note that l_1, …, l_J-1 are typically distinct (otherwise, two categories share the same intercept). In that case, we recommend the following algorithm to find a feasible initial estimate of θ. Under some circumstances, model (<ref>) may not have an intercept for some or all categories, for example, for j∈ I_0 with ∅⊂ I_0 ⊆{1, …, J-1}. In that case, we define η_j^(0)=0, j∈ I_0 and thus ρ_j^(0) = g_j^-1(0), j∈ I_0. More specifically, we use the following four steps to replace Step 1^∘ of Algorithm <ref> to find consistent π_j^(0)'s. 1a^∘ Calculate π_j^(00) = (∑_l=1^m Y_lj + m)/(n + mJ) ∈ (0,1), j=1, …, J, and let π^(00) = (π_1^(00), …, π_J-1^(00))^T; 1b^∘ Calculate ρ_j^(00) = 𝐋_j^Tπ^(00)/(𝐑_j^T π^(00) + π_J^(00) b_j), j=1, …, J-1; 1c^∘ Define ρ_j^(0) = g_j^-1(0) for j∈ I_0, and ρ_j^(0) = ρ_j^(00) for j ∉ I_0; 1d^∘ Calculate π_j^(0), j=1, …, J from ρ_1^(0), …, ρ_J-1^(0) based on Lemma <ref>, given that 𝐃_i^-1 exists and all coordinates of 𝐃_i^-1𝐛 are positive for each i=1, …, m. Based on our experience, the provided algorithms in this section work well for all the examples explored in this paper.
§.§ Calculating gradient and Fisher information matrix at θ In this section, we provide detailed formulae for Step 2^∘ of Algorithm <ref>.
§.§ Most appropriate po-npo mixture model Inspired by the backward selection strategy for selecting a subset of covariates <cit.>, in this section we provide a backward selection algorithm for finding the most appropriate po-npo mixture model (Example <ref>) for a given dataset.
It aims to identify a good (if not the best) po-npo mixture model by iteratively merging the closest pair of parameters according to AIC.
§ APPLICATIONS In this section, we use real data examples to show that the proposed multinomial link models can be significantly better than existing models and may correct misleading conclusions due to missing data.
§.§ Metabolic syndrome dataset with NA responses In this section, we use a metabolic syndrome dataset discussed by <cit.> to illustrate that the proposed two-group models (see Example <ref>) can be used for analyzing data with missing categorical responses. For this metabolic syndrome dataset, the goal is to explore the association between FBS (fasting blood sugar) and the covariates hpt (hypertension status, yes or no), cholesterol (total cholesterol, rounded to 0,1,…,23 in mmol/L), and weight (body weight, rounded to 30, 40,…,190 in kilograms). In <cit.>, the response FBS was treated as a categorical variable with categories Normal (less than 6.1 mmol/L), IFG (Impaired Fasting Glucose, between 6.1 mmol/L and 6.9 mmol/L), and DM (Diabetes Mellitus, 7.00 mmol/L or higher), as well as 251 NA's among the 4,282 observations. After removing the observations with NA responses, a main-effects baseline-category logit model (<ref>) with npo was used in <cit.> as an illustration. Without the NA category, we adopt by AIC a main-effects continuation-ratio logit model (<ref>) with npo and its natural order {Normal, IFG, DM}, called the Model without NA for this dataset. To check whether the conclusions are consistent with or without the NA category, we first follow <cit.> and use AIC to choose the most appropriate order for the four categories including NA, called a working order. The best model chosen by AIC is a main-effects continuation-ratio npo model with the working order {Normal, IFG, DM, NA}, whose AIC value is 930.40 with cross-entropy loss 3539.40 based on a five-fold cross-validation <cit.>. For illustration purposes, we then explore the two-group models with npo and logit link (that is, g_1 = g_2 = g_3 = logit). The best two-group model with npo and logit link that we find assumes a baseline-category sub-model (<ref>) on one group {DM, IFG} and a continuation-ratio sub-model (<ref>) on the other group {Normal, IFG, NA}, which has AIC value 927.56 and cross-entropy loss 3537.97. According to <cit.>, the chosen two-group model is significantly better than <cit.>'s model with a single group {Normal, IFG, DM, NA}. We call the selected two-group model the Model with NA for this dataset. Figure <ref> shows how logπ̂_ij changes against weight based on the fitted Model with or without the NA category. When weight increases, the probability of the Normal or IFG category changes with a similar pattern with or without NA. However, the patterns of the DM category are quite different. If we remove the NA category, the conclusion would be that the risk of DM increases significantly along with weight; while with the NA category included, the risk of DM is fairly flat and seems largely unrelated to weight. A similar story occurs for the risk of DM against cholesterol with or without NA (see Figure <ref>). In other words, if we remove all observations with NA responses, we may conclude that both cholesterol and weight heavily affect the risk of DM; while with the complete data, the effects of cholesterol and weight are actually not that important.
One possible explanation is that, according to the log probability of NA against weight (see Figure <ref>, left panel), the chance of an NA response decreases significantly as weight increases. In other words, the responses were not missing at random.
§.§ Trauma clinical trial with mixed-link models In this section, we use a trauma clinical trial dataset discussed by <cit.> to show that the proposed mixed-link models (see Example <ref>) can be significantly better than the traditional logit models. <cit.> studied a trauma clinical trial with n=802 trauma patients and five ordered response categories, Death, Vegetative state, Major disability, Minor disability, and Good recovery, known as the Glasgow Outcome Scale (GOS) in the literature of critical care <cit.>. An extended dataset (Table V in <cit.>) consists of 802 observations with two covariates, trauma severity (x_1=0,1) and dose level (x_2=1,2,3,4). In the literature, a main-effects cumulative logit model (<ref>) with po was applied to this dataset <cit.>, where the logit link was assumed for all categories. In this study, we allow separate links for different categories and use a main-effects mixed-link model with po. For illustration purposes, we use logit, probit, log-log, and complementary log-log as our candidate set of link functions. The best model that we find for this dataset applies log-log, probit, log-log, and logit links to j=1,2,3,4, respectively. This multi-link model achieves BIC 198.43, while the original logit model has a BIC value of 252.74. To further show that the improvement is significant, we use five-fold cross-validations with cross-entropy loss <cit.> and 200 randomly generated partitions. As shown in Figure <ref>, our multi-link model is significantly better than the original logit model in terms of prediction accuracy.
§.§ Police data with po-npo mixture model In this section, we use a police dataset discussed by <cit.> to show that the po-npo mixture model (see Example <ref>) can be significantly better than the traditional ppo models. The police data <cit.> consists of summarized information of n=12,483 suspects' Armed status (gun, other, or unarmed), Gender (0 or 1), Flee (0 or 1), Mental illness (0 or 1), as well as the responses of police with four categories, Tasered, Shot, Shot & Tasered, and Other, which do not have a natural ordering. According to <cit.>, a continuation-ratio logit model (<ref>) with npo and the working order {Tasered, Shot, Other, Shot & Tasered} chosen by AIC is significantly better than the baseline-category models (<ref>) or the multiclass logistic model. The best model reported in <cit.> has the AIC value 192.01. To find the most appropriate po-npo mixture model, we run Algorithm <ref> with iterations t=1, …, 6. The corresponding AIC values after the six iterations are 190.36, 188.47, 187.96, 187.04, 185.04, and 186.07, respectively. Since the 6th iteration leads to an increased AIC value, we report the fitted po-npo mixture model obtained after the 5th iteration.
The fitted parameters are listed in Table <ref> with equal parameters per column in bold. Compared with the AIC value 192.01 of <cit.>'s npo model, the AIC value 185.04 of the reported po-npo mixture model is significantly better according to <cit.>.
§.§ House flies experiment with predictor selection In this section, we use a house flies dataset discussed by <cit.> to show that the confidence intervals and hypothesis tests described in Section <ref> can be used for predictor or variable selection. Reported by <cit.>, the emergence of house flies data consists of summarized responses from n=3,500 pupae under a radiation experiment with a single covariate x_i, the dose of radiation. There are J=3 possible response categories, Unopened, Opened but died (before completing emergence), and Completed emergence, which have a nested or hierarchical structure. <cit.> proposed a continuation-ratio logit model for the emergence of house flies data as follows (see also <cit.>): log(π_i1/(π_i2 + π_i3)) = β_11 + β_12 x_i + β_13 x_i^2, log(π_i2/π_i3) = β_21 + β_22 x_i, where x_i ∈ [80, 200] is the radiation level in Gy. By utilizing Algorithm <ref>, we obtain the fitted parameters β̂ = (β̂_11, β̂_12, β̂_13, β̂_21, β̂_22)^T = (-1.935, -0.02642, 0.0003174, -9.159, 0.06386)^T, which are consistent with the values reported in <cit.>. As described in Section <ref>, we further compute the Fisher information matrix 𝐅(θ̂) and the confidence intervals of θ, and find that only the 95% confidence interval for β_12, (-0.0541, 0.0013), contains 0, which implies a reduced continuation-ratio model as follows: log(π_i1/(π_i2 + π_i3)) = β_11 + β_13 x_i^2, log(π_i2/π_i3) = β_21 + β_22 x_i, which is significantly better than Model (<ref>). Actually, in terms of BIC values (see Section <ref>), Model (<ref>)'s 108.17 is also better than Model (<ref>)'s 112.91.
§ CONCLUSION The proposed multinomial link model is much more flexible than existing multinomial models. It allows separate link functions for different categories (see Example <ref>) and more flexible model structures (see Example <ref>) than ppo models. More importantly, by arranging the response categories into two or more groups and finding working orders for each group, the proposed model can incorporate the observations with missing categorical responses into the data analysis; such missingness is often not at random. As shown in Section <ref>, the proposed two-group model can not only predict the probability or risk of a regular category, but also check whether the NA category is due to randomness. In the metabolic syndrome example, the analysis clearly shows that the chance of an NA response decreases as weight increases; that is, the responses are not missing at random. The algorithms for finding the MLE and the formulae for computing the gradient and information matrix are developed for the unified form (<ref>) or (<ref>), and are therefore applicable to all multinomial link models, including the mixed-link models (Example <ref>), two-group models (Example <ref>), po-npo mixture models (Example <ref>), and others. The proposed algorithms, especially Algorithms <ref> and <ref>, solve the infeasibility issue of cumulative link models in existing statistical software. We also provide easy-to-use conditions (<ref>) for general multinomial link models, which cover the classical cumulative link models as special cases.
SUPPLEMENTARY MATERIAL S.1 More on mixed-link models: Technical details that make mixed-link models a special class of multinomial link models; S.2 More on two-group models: Technical details that make two-group models a special class of multinomial link models; S.3 More on Fisher information matrix: Technical details on deriving the Fisher information matrix; S.4 Summary of notations for multinomial link models: Summary of notations for specifying a multinomial link model; S.5 Proofs: Proofs for lemmas and theorems.
Acknowledgement The authors gratefully acknowledge the support from the U.S. NSF grant DMS-1924859.
[Agresti(2010)]agresti2010 Agresti, A., 2010: Analysis of Ordinal Categorical Data. Wiley, 2nd ed.[Agresti(2013)]agresti2013 —, 2013: Categorical Data Analysis. Wiley, 3rd ed.[Agresti(2018)]agresti2018introduction —, 2018: An Introduction to Categorical Data Analysis. John Wiley & Sons, 3rd ed.[Aitchison and Bennett(1970)]aitchison1970 Aitchison, J. and J. Bennett, 1970: Polychotomous quantal response by maximum indicant. Biometrika, 57(2), 253–262.[Albert and Chib(1993)]albert1993bayesian Albert, J. and S. Chib, 1993: Bayesian analysis of binary and polychotomous response data. Journal of the American Statistical Association, 88(422), 669–679.[Atkinson et al.(2007)Atkinson, Donev, and Tobias]atkinson2007 Atkinson, A., A. Donev, and R. Tobias, 2007: Optimum Experimental Designs, with SAS. Oxford University Press.[Bland(2015)]bland2015introduction Bland, M., 2015: An Introduction to Medical Statistics. Oxford University Press, 4th ed.[Bu et al.(2020)Bu, Majumdar, and Yang]bu2020 Bu, X., D. Majumdar, and J. Yang, 2020: D-optimal designs for multinomial logistic models. Annals of Statistics, 48(2), 983–1000.[Burnham and Anderson(2004)]burnham2004aic Burnham, K. P. and D. R. Anderson, 2004: Understanding AIC and BIC in model selection. Sociological Methods & Research, 33(2), 261–304.[Chuang-Stein and Agresti(1997)]chuang1997 Chuang-Stein, C. and A. Agresti, 1997: Tutorial in biostatistics: a review of tests for detecting a monotone dose-response relationship with ordinal response data. Statistics in Medicine, 16, 2599–2618.[Dobson and Barnett(2018)]dobson2018 Dobson, A. and A. Barnett, 2018: An Introduction to Generalized Linear Models. Chapman & Hall/CRC, 4th ed.[Dousti Mousavi et al.(2023)Dousti Mousavi, Yang, and Aldirawi]dousti2023variable Dousti Mousavi, N., J. Yang, and H. Aldirawi, 2023: Variable selection for sparse data with applications to vaginal microbiome and gene expression data. Genes, 14(2), 403.[Ferguson(1996)]ferguson1996course Ferguson, T., 1996: A Course in Large Sample Theory. Chapman & Hall.[Glonek and McCullagh(1995)]pmcc1995 Glonek, G. and P. McCullagh, 1995: Multivariate logistic models. Journal of the Royal Statistical Society, Series B, 57, 533–546.[Golub and Loan(2013)]golub2013 Golub, G. and C. V. Loan, 2013: Matrix Computations. Johns Hopkins University Press, 4th ed.[Greene(2018)]greene2018econometric Greene, W., 2018: Econometric Analysis. Pearson Education.[Hastie et al.(2009)Hastie, Tibshirani, and Friedman]hastie2009elements Hastie, T., R. Tibshirani, and J. Friedman, 2009: The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer, 2nd ed.[Hirotsugu(1973)]hirotsugu1973information Hirotsugu, A., 1973: Information theory and an extension of the maximum likelihood principle.
In 2nd International Symposium on Information Theory.[Itepan(1995)]itepan1995 Itepan, N., 1995: Aumento do periodo de aceitabilidade de pupas de Musca domestica L., 1758 (Diptera: Muscidae), irradiadas com raios gama, como hospedeiras de parasitoides (Hymenoptera: Pteromalidae). Master's thesis, Centro de Energia Nuclear na Agricultura/USP, Piracicaba, SP, Brazil.[Jennett and Bond(1975)]jennett1975 Jennett, B. and M. Bond, 1975: Assessment of outcome after severe brain damage. Lancet, 305, 480–484.[Koenker and Yoon(2009)]koenker2009parametric Koenker, R. and J. Yoon, 2009: Parametric links for binary choice models: A Fisherian-Bayesian colloquy. Journal of Econometrics, 152(2), 120–130.[Lall et al.(2002)Lall, Campbell, Walters, and Morgan]lall2002 Lall, R., M. Campbell, S. Walters, and K. Morgan, 2002: A review of ordinal regression models applied on health-related quality of life assessments. Statistical Methods in Medical Research, 11, 49–67.[Lange(2010)]lange2010numerical Lange, K., 2010: Numerical Analysis for Statisticians. Springer, 2nd ed.[Liu(2004)]liu2004robit Liu, C., 2004: Robit regression: a simple robust alternative to logistic and probit regression. Applied Bayesian Modeling and Causal Inference from Incomplete-Data Perspectives, 227–238.[McCullagh(1980)]pmcc1980 McCullagh, P., 1980: Regression models for ordinal data. Journal of the Royal Statistical Society, Series B, 42, 109–142.[McCullagh and Nelder(1989)]pmcc1989 McCullagh, P. and J. Nelder, 1989: Generalized Linear Models. Chapman and Hall/CRC, 2nd ed.[Morgan and Smith(1992)]morgan1992note Morgan, B. and D. Smith, 1992: A note on Wadley's problem with overdispersion. Journal of the Royal Statistical Society: Series C (Applied Statistics), 41(2), 349–354.[Musa et al.(2023)Musa, Mansor, and Hanis]musa2023data Musa, K., W. Mansor, and T. Hanis, 2023: Data Analysis in Medicine and Health using R. Chapman and Hall/CRC.[O'Connell(2006)]oconnell2006 O'Connell, A., 2006: Logistic Regression Models for Ordinal Response Variables. Sage.[Osborne(1992)]osborne1992fisher Osborne, M. R., 1992: Fisher's method of scoring. International Statistical Review, 99–117.[Pregibon(1980)]pregibon1980goodness Pregibon, D., 1980: Goodness of link tests for generalized linear models. Journal of the Royal Statistical Society: Series C (Applied Statistics), 29(1), 15–24.[Rao(1973)]rao1973linear Rao, C., 1973: Linear Statistical Inference and Its Applications. John Wiley & Sons.[Schervish(1995)]schervish1995 Schervish, M., 1995: Theory of Statistics. Springer.[Seber(2008)]seber2008 Seber, G., 2008: A Matrix Handbook for Statisticians. Wiley.[Smith et al.(2020)Smith, Walker, and McKenna]smith2020exploration Smith, T., D. Walker, and C. McKenna, 2020: An exploration of link functions used in ordinal regression. Journal of Modern Applied Statistical Methods, 18(1), 20.[Stoica and Marzetta(2001)]stoica2001 Stoica, P. and T. Marzetta, 2001: Parameter estimation problems with singular information matrices. IEEE Transactions on Signal Processing, 49, 87–90.[Wald(1943)]wald1943tests Wald, A., 1943: Tests of statistical hypotheses concerning several parameters when the number of observations is large. Transactions of the American Mathematical Society, 54(3), 426–482.[Wang and Yang(2023)]wang2023identifying Wang, T. and J. Yang, 2023: Identifying the most appropriate order for categorical responses.
Statistica Sinica, to appear, available via <https://www3.stat.sinica.edu.tw/ss_newpaper/SS-2022-0322_na.pdf>.[Wilks(1935)]wilks1935likelihood Wilks, S., 1935: The likelihood test of independence in contingency tables. The Annals of Mathematical Statistics, 6(4), 190–196.[Wilks(1938)]wilks1938large —, 1938: The large-sample distribution of the likelihood ratio for testing composite hypotheses. The Annals of Mathematical Statistics, 9(1), 60–62.[Yang et al.(2017)Yang, Tong, and Mandal]ytm2016 Yang, J., L. Tong, and A. Mandal, 2017: D-optimal designs with ordered categorical data. Statistica Sinica, 27, 1879–1902.[Zocchi and Atkinson(1999)]atkinson1999 Zocchi, S. and A. Atkinson, 1999: Optimum experimental designs for multinomial logistic models. Biometrics, 55, 437–444.
Multinomial Link Models Tianmeng Wang^1, Liping Tong^2, and Jie Yang^1 ^1University of Illinois at Chicago and ^2Advocate Aurora Health Supplementary Materials
S.1 More on mixed-link models: Technical details that make mixed-link models a special class of multinomial link models; S.2 More on two-group models: Technical details that make two-group models a special class of multinomial link models; S.3 More on Fisher information matrix: Technical details on deriving the Fisher information matrix; S.4 Summary of notations for multinomial link models: Summary of notations for specifying a multinomial link model; S.5 Proofs: Proofs for lemmas and theorems.
§ MORE ON MIXED-LINK MODELS The mixed-link models (<ref>)+(<ref>) introduced in Example <ref> include four classes of models: baseline-category mixed-link models, cumulative mixed-link models, adjacent-categories mixed-link models, and continuation-ratio mixed-link models. In this section, we show the technical details that make the mixed-link models (<ref>)+(<ref>) a special class of the multinomial link models (<ref>) or (<ref>). By letting the model matrix 𝐗_i in (<ref>) or (<ref>) take the following specific form 𝐗_i= [ 𝐡_1^T(𝐱_i) 𝐡_c^T(𝐱_i); ⋱ ⋮; 𝐡_J-1^T(𝐱_i) 𝐡_c^T(𝐱_i) ]∈ℝ^(J-1) × p, where the regression parameter vector θ=(β_1^T,⋯,β_J-1^T,ζ^T)^T consists of p=p_1+⋯+p_J-1+p_c unknown parameters in total, model (<ref>) with ppo can be written as g_j(𝐋_j^T π_i/(𝐑_j^T π_i + π_iJ b_j)) = η_ij = 𝐡_j^T(𝐱_i)β_j+𝐡_c^T(𝐱_i)ζ, j=1, …, J-1. In the rest of this section, we specify the (J-1)× (J-1) matrices 𝐋, 𝐑 and the vector 𝐛 in model (<ref>) (or equivalently the vectors 𝐋_j, 𝐑_j, and the numbers b_j in model (<ref>)) for each of the four classes of mixed-link models. To facilitate the readers (see Step 3^∘ of Algorithm <ref>), we also provide the explicit formulae for 𝐃_i^-1 and 𝐃_i^-1𝐛, which are critical for computing the Fisher information matrix and the fitted categorical probabilities, where 𝐃_i = diag(ρ_i^-1) 𝐋 - 𝐑.
§.§ Baseline-category mixed-link models In this case, 𝐋 = 𝐑 = I_J-1, the identity matrix of order J-1, and 𝐛 = 1_J-1, the vector of all ones with length J-1. A special case is g_1 = ⋯ = g_J-1 = g. Then 𝐃_i^-1 = diag(ρ_i/(1-ρ_i)) = diag{ρ_i1/(1-ρ_i1), …, ρ_i,J-1/(1-ρ_i,J-1)} and 𝐃_i^-1𝐛 = ρ_i/(1-ρ_i) = (ρ_i1/(1-ρ_i1), …, ρ_i,J-1/(1-ρ_i,J-1))^T. A special case is when J=2, ρ_i = π_i = π_i1∈ℝ.
§.§ Cumulative mixed-link models In this case, 𝐋 = [[ 1; 1 1; ⋮ ⋮ ⋱; 1 1 ⋯ 1 ]] ∈ℝ^(J-1)× (J-1), 𝐑 = 1_J-11_J-1^T, 𝐛 = 1_J-1. A special case is g_1 = g_2 = ⋯ = g_J-1 = g.
Then

𝐃_i^-1 = [[ ρ_i1 0 0 ⋯ 0 ρ_i,J-1ρ_i1/(1-ρ_i,J-1); -ρ_i1 ρ_i2 0 ⋯ 0 ρ_i,J-1(ρ_i2 - ρ_i1)/(1-ρ_i,J-1); 0 -ρ_i2 ρ_i3 ⋯ 0 ρ_i,J-1(ρ_i3 - ρ_i2)/(1-ρ_i,J-1); ⋮ ⋮ ⋮ ⋱ ⋮ ⋮; 0 0 0 ⋯ ρ_i,J-2 ρ_i,J-1(ρ_i,J-2 - ρ_i,J-3)/(1-ρ_i,J-1); 0 0 0 ⋯ -ρ_i,J-2 ρ_i,J-1(1 - ρ_i,J-2)/(1-ρ_i,J-1) ]] ∈ℝ^(J-1)× (J-1)

exists, 𝐃_i^-1𝐛 = (1-ρ_i,J-1)^-1 (ρ_i1, ρ_i2 - ρ_i1, …, ρ_i,J-2 - ρ_i,J-3, ρ_i,J-1 - ρ_i,J-2)^T, and 1_J-1^T 𝐃_i^-1𝐛 = ρ_i,J-1/(1-ρ_i,J-1). One special case is when J=2, ρ_i = π_i = π_i1 ∈ℝ. Another special case is when J=3,

𝐃_i^-1 = [[ ρ_i1 ρ_i2ρ_i1/(1-ρ_i2); -ρ_i1 ρ_i2(1 - ρ_i1)/(1-ρ_i2) ]] ∈ℝ^2× 2

exists, with 𝐃_i^-1𝐛 = (1-ρ_i2)^-1 (ρ_i1, ρ_i2 - ρ_i1)^T.

§.§ Adjacent-categories mixed-link models

In this case, 𝐋 = I_J-1,

𝐑 = [[ 1 1; 1 1; 1 ⋱; ⋱ 1; 1 ]] ∈ℝ^(J-1)× (J-1), 𝐛 = [[ 0; 0; ⋮; 0; 1 ]] ∈ℝ^J-1

A special case is g_1 = ⋯ = g_J-1 = g. Then 𝐃_i^-1 = (a_st)_s,t=1,…, J-1 exists with

a_st = {[ ∏_l=s^t ρ_il/(1-ρ_il) s ≤ t; 0 s > t ].

All elements of

𝐃_i^-1𝐛 = (∏_l=1^J-1 ρ_il/(1-ρ_il), ∏_l=2^J-1 ρ_il/(1-ρ_il), …, ∏_l=J-1^J-1 ρ_il/(1-ρ_il))^T ∈ℝ^J-1

are positive. One special case is when J=2, 𝐋 = 𝐑 = 𝐛 = 1 and then ρ_i = π_i = π_i1 ∈ℝ. For adjacent-categories logit models, the vglm function in the R package VGAM calculates log(π_i,j+1/π_ij) instead of log(π_ij/π_i,j+1). As a consequence, the θ discussed in our paper differs from the θ^vglm calculated by the vglm function, but π̂_i and the log-likelihood are the same for both θ and θ^vglm.

§.§ Continuation-ratio mixed-link models

In this case, 𝐋 = I_J-1,

𝐑 = [[ 1 1 ⋯ 1; 1 ⋯ 1; ⋱ ⋮; 1 ]] ∈ℝ^(J-1)× (J-1)

𝐛 = 1_J-1. A special case is g_1 = ⋯ = g_J-1 = g. Then 𝐃_i^-1 = (a_st)_s,t=1,…, J-1 exists with

a_st = {[ ρ_isρ_it ∏_l=s^t (1-ρ_il)^-1 s < t; ρ_is (1-ρ_is)^-1 s = t; 0 s > t ].

All elements of

𝐃_i^-1𝐛 = (ρ_i1/∏_l=1^J-1 (1-ρ_il), ρ_i2/∏_l=2^J-1 (1-ρ_il), …, ρ_i,J-1/∏_l=J-1^J-1 (1-ρ_il))^T ∈ℝ^J-1

are positive. It can be verified that 1_J-1^T 𝐃_i^-1𝐛 = ∏_l=1^J-1 (1-ρ_il)^-1 - 1. One special case is when J=2, ρ_i = π_i = π_i1 ∈ℝ.

§ MORE ON TWO-GROUP MODELS

In this section, we show the technical details that make the two-group models (<ref>)+(<ref>) introduced in Example <ref> a special class of the multinomial link models (<ref>) or (<ref>). As in Section <ref> for Example <ref>, the model matrix 𝐗_i in (<ref>) or (<ref>) takes the form of (<ref>), and the regression parameter vector θ=(β_1^T,⋯,β_J-1^T,ζ^T)^T consists of p=p_1+⋯+p_J-1+p_c unknown parameters in total. Then model (<ref>) can be written as (<ref>). In the rest of this section, we specify the (J-1)× (J-1) matrices 𝐋, 𝐑 and the vector 𝐛 in model (<ref>) (or equivalently the vectors 𝐋_j, 𝐑_j, and the numbers b_j in model (<ref>)) for each of the three classes of two-group models. As in Section <ref> for Example <ref>, we also provide the explicit formulae for 𝐃_i^-1 and 𝐃_i^-1𝐛. First, we focus on a special class of two-group models whose two groups share the same baseline category J. That is, s=J in this case, which leads to simplified notations.
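Before proceeding, the probability-recovery formula underlying these expressions can be checked numerically. The following minimal Python sketch (our illustration only, with assumed names and toy values; it is not taken from any accompanying software) recovers the category probabilities π_i from ρ_i for the cumulative model via π_i = 𝐃_i^-1𝐛/(1 + 1_J-1^T 𝐃_i^-1𝐛) from Lemma <ref>, using generic linear algebra instead of the closed-form 𝐃_i^-1 above:

```python
import numpy as np

def pi_from_rho_cumulative(rho):
    """Recover (pi_1, ..., pi_{J-1}) from rho for the cumulative model:
    D = diag(1/rho) L - R with L lower-triangular ones, R = 1 1', b = 1,
    then pi = D^{-1} b / (1 + 1' D^{-1} b)."""
    Jm1 = rho.size
    L = np.tril(np.ones((Jm1, Jm1)))   # lower-triangular matrix of ones
    R = np.ones((Jm1, Jm1))            # R = 1 1'
    b = np.ones(Jm1)
    D = np.diag(1.0 / rho) @ L - R
    Dinv_b = np.linalg.solve(D, b)
    return Dinv_b / (1.0 + Dinv_b.sum())

rho = np.array([0.2, 0.5, 0.8])     # strictly increasing, as required for cumulative models
pi = pi_from_rho_cumulative(rho)    # -> [0.2, 0.3, 0.3], so pi_J = 0.2
# for the cumulative model, rho_j is exactly the cumulative sum pi_1 + ... + pi_j
assert np.allclose(np.cumsum(pi), rho)
```

The same routine applies to the other classes after swapping in the corresponding 𝐋, 𝐑, and 𝐛.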
Two-group models with shared baseline category. Under (<ref>), the same form as in the mixed-link models (Example <ref>), we further assume that there exists an integer k, such that 1≤ k ≤ J-3 and

ρ_ij = {[ π_ij/(π_ij + π_iJ) j=1, …, k; (π_i,k+1 + ⋯ + π_ij)/(π_i,k+1 + ⋯ + π_iJ) j=k+1, …, J-1 (cumulative); π_ij/(π_ij + π_i,j+1) j=k+1, …, J-1 (adjacent-categories); π_ij/(π_ij +⋯ + π_iJ) j=k+1, …, J-1 (continuation-ratio) ].

where the three alternatives for j=k+1, …, J-1 correspond to the three classes of two-group models below. It indicates that the response categories form two groups, {1, …, k, J} and {k+1, …, J-1, J}, which share the same baseline category J. The two-group models (<ref>)+(<ref>) also consist of three classes, baseline-cumulative, baseline-adjacent, and baseline-continuation mixed-link models with shared baseline category, which are all special cases of the multinomial link models (<ref>) or (<ref>) (see Sections <ref>, <ref>, and <ref>).

§.§ Baseline-cumulative mixed-link models with shared baseline category

There are two groups of response categories in this model. One group of k + 1 ≥ 2 categories is controlled by a baseline-category mixed-link model and the other group of J-k ≥ 3 categories is controlled by a cumulative mixed-link model. The two groups share the same baseline category J. More specifically, let 1≤ k ≤ J-3 and

ρ_ij = {[ π_ij/(π_ij + π_iJ) j=1, …, k; (π_i,k+1 + ⋯ + π_ij)/(π_i,k+1 + ⋯ + π_iJ) j=k+1, …, J-1 ].

As for link functions, a special case is g_1 = ⋯ = g_k = g_a and g_k+1 = ⋯ = g_J-1 = g_b. Then

𝐋 = [[ I_k; 1; 1 1; ⋮ ⋮ ⋱; 1 1 ⋯ 1 ]], 𝐑 = [[ I_k; 1 1 ⋯ 1; 1 1 ⋯ 1; ⋮ ⋮ ⋱ ⋮; 1 1 ⋯ 1 ]]

𝐛 = 1_J-1. Then

𝐃_i^-1 = [[ ρ_i1/(1-ρ_i1); ⋱; ρ_ik/(1-ρ_ik); ρ_i,k+1 0 0 ⋯ 0 ρ_i,J-1ρ_i,k+1/(1-ρ_i,J-1); -ρ_i,k+1 ρ_i,k+2 0 ⋯ 0 ρ_i,J-1(ρ_i,k+2 - ρ_i,k+1)/(1-ρ_i,J-1); 0 -ρ_i,k+2 ρ_i,k+3 ⋯ 0 ρ_i,J-1(ρ_i,k+3 - ρ_i,k+2)/(1-ρ_i,J-1); ⋮ ⋮ ⋮ ⋱ ⋮ ⋮; 0 0 0 ⋯ ρ_i,J-2 ρ_i,J-1(ρ_i,J-2 - ρ_i,J-3)/(1-ρ_i,J-1); 0 0 0 ⋯ -ρ_i,J-2 ρ_i,J-1(1 - ρ_i,J-2)/(1-ρ_i,J-1) ]]

𝐃_i^-1𝐛 = (ρ_i1/(1-ρ_i1), …, ρ_ik/(1-ρ_ik), ρ_i,k+1/(1-ρ_i,J-1), (ρ_i,k+2-ρ_i,k+1)/(1-ρ_i,J-1), …, (ρ_i,J-1 - ρ_i,J-2)/(1-ρ_i,J-1))^T

1_J-1^T 𝐃_i^-1𝐛 = ∑_l=1^k ρ_il/(1-ρ_il) + ρ_i,J-1/(1-ρ_i,J-1)

One special case is when J=4 and k=1,

𝐃_i^-1 = [[ ρ_i1/(1-ρ_i1); ρ_i2 ρ_i3ρ_i2/(1-ρ_i3); -ρ_i2 ρ_i3(1 - ρ_i2)/(1-ρ_i3) ]]

§.§ Baseline-adjacent mixed-link models with shared baseline category

There are two groups of response categories in this model. One group of k + 1 ≥ 2 categories is controlled by a baseline-category mixed-link model and the other group of J-k ≥ 3 categories is controlled by an adjacent-categories mixed-link model. The two groups share the same baseline category J. More specifically, let 1≤ k ≤ J-3 and

ρ_ij = {[ π_ij/(π_ij + π_iJ) j=1, …, k; π_ij/(π_ij + π_i,j+1) j=k+1, …, J-1 ].

As for link functions, a special case is g_1 = ⋯ = g_k = g_a and g_k+1 = ⋯ = g_J-1 = g_b. Then 𝐋 = I_J-1,

𝐑 = [[ I_k; 1 1; 1 1; 1 ⋱; ⋱ 1; 1 ]] ∈ℝ^(J-1)× (J-1), 𝐛 = [[ 1_k; 0; ⋮; 0; 1 ]] ∈ℝ^J-1

Then

𝐃_i^-1 = [[ ρ_i1/(1-ρ_i1); ⋱; ρ_ik/(1-ρ_ik); ∏_l=k+1^k+1 ρ_il/(1-ρ_il) ∏_l=k+1^k+2 ρ_il/(1-ρ_il) ⋯ ∏_l=k+1^J-2 ρ_il/(1-ρ_il) ∏_l=k+1^J-1 ρ_il/(1-ρ_il); ∏_l=k+2^k+2 ρ_il/(1-ρ_il) ⋯ ∏_l=k+2^J-2 ρ_il/(1-ρ_il) ∏_l=k+2^J-1 ρ_il/(1-ρ_il); ⋱ ⋮ ⋮; ∏_l=J-2^J-2 ρ_il/(1-ρ_il) ∏_l=J-2^J-1 ρ_il/(1-ρ_il); ∏_l=J-1^J-1 ρ_il/(1-ρ_il) ]]

All elements of

𝐃_i^-1𝐛 = (ρ_i1/(1-ρ_i1), ⋯, ρ_ik/(1-ρ_ik), ∏_l=k+1^J-1 ρ_il/(1-ρ_il), …, ∏_l=J-1^J-1 ρ_il/(1-ρ_il))^T ∈ℝ^J-1

are positive.

§.§ Baseline-continuation mixed-link models with shared baseline category

There are two groups of response categories in this model. One group of k + 1 ≥ 2 categories is controlled by a baseline-category mixed-link model and the other group of J-k ≥ 3 categories is controlled by a continuation-ratio mixed-link model. The two groups share the same baseline category J.
More specifically, let 1≤ k ≤ J-3 and

ρ_ij = {[ π_ij/(π_ij + π_iJ) j=1, …, k; π_ij/(π_ij +⋯ + π_iJ) j=k+1, …, J-1 ].

As for link functions, a special case is g_1 = ⋯ = g_k = g_a and g_k+1 = ⋯ = g_J-1 = g_b. Then 𝐋 = I_J-1,

𝐑 = [[ I_k; 1 1 ⋯ 1; 1 ⋯ 1; ⋱ ⋮; 1 ]] ∈ℝ^(J-1)× (J-1)

𝐛 = 1_J-1. Then

𝐃_i^-1 = [[ ρ_i1/(1-ρ_i1); ⋱; ρ_ik/(1-ρ_ik); ρ_i,k+1/∏_l=k+1^k+1 (1-ρ_il) ρ_i,k+1ρ_i,k+2/∏_l=k+1^k+2 (1-ρ_il) ⋯ ρ_i,k+1ρ_i,J-2/∏_l=k+1^J-2 (1-ρ_il) ρ_i,k+1ρ_i,J-1/∏_l=k+1^J-1 (1-ρ_il); ρ_i,k+2/∏_l=k+2^k+2 (1-ρ_il) ⋯ ρ_i,k+2ρ_i,J-2/∏_l=k+2^J-2 (1-ρ_il) ρ_i,k+2ρ_i,J-1/∏_l=k+2^J-1 (1-ρ_il); ⋱ ⋮ ⋮; ρ_i,J-2/∏_l=J-2^J-2 (1-ρ_il) ρ_i,J-2ρ_i,J-1/∏_l=J-2^J-1 (1-ρ_il); ρ_i,J-1/∏_l=J-1^J-1 (1-ρ_il) ]]

All elements of

𝐃_i^-1𝐛 = (ρ_i1/(1-ρ_i1), …, ρ_ik/(1-ρ_ik), ρ_i,k+1/∏_l=k+1^J-1 (1-ρ_il), …, ρ_i,J-1/∏_l=J-1^J-1 (1-ρ_il))^T ∈ℝ^J-1

are positive. It can be verified that

1_J-1^T 𝐃_i^-1𝐛 = ∑_l=1^k ρ_il/(1-ρ_il) + ∏_l=k+1^J-1 (1-ρ_il)^-1 - 1

§.§ More on two-group models in Example <ref>

For the two-group model introduced in Example <ref>, the 𝐋 matrix is exactly the same as in Sections <ref>, <ref> and <ref> for the corresponding link models. The 𝐑 matrix can be obtained by adding more 1's to the corresponding 𝐑 matrix in Section <ref>, <ref> or <ref>. More specifically, we only need to change the 0's at the (1,s), (2, s), …, (k,s)th entries of 𝐑 to 1's. The vector 𝐛 = (b_1, …, b_J-1)^T is quite different though. Actually,

𝐛 = {[ (0_k^T, 1_J-k-1^T)^T for baseline-cumulative models; (0_J-2^T, 1)^T for baseline-adjacent models; (0_k^T, 1_J-k-1^T)^T for baseline-continuation models ].

where 0_k = (0, …, 0)^T ∈ℝ^k and 1_k = (1, …, 1)^T ∈ℝ^k. As an illustrative example, a baseline-cumulative mixed-link model with J=5, k=1, and s=3 has two groups of categories {1,3} (with baseline 3) and {2, 3, 4, 5} (with baseline 5), as well as

𝐋 = [[ 1; 1; 1 1; 1 1 1 ]], 𝐑 = [[ 1 1; 1 1 1; 1 1 1; 1 1 1 ]], 𝐛 = ([ 0; 1; 1; 1 ])

§ MORE ON FISHER INFORMATION MATRIX

In this section, we provide more technical details about Section <ref>. Recall that given distinct 𝐱_i, i=1,⋯,m, we have independent multinomial responses

𝐘_i=(Y_i1,⋯,Y_iJ)^T ∼ Multinomial(n_i; π_i1,⋯,π_iJ)

where n_i=∑_j=1^J Y_ij. The log-likelihood for the multinomial model is

l(θ) = log∏_i=1^m n_i!/(Y_i1!⋯Y_iJ!) π_i1^Y_i1⋯π_iJ^Y_iJ = ∑_i=1^m 𝐘_i^T logπ̅_i + ∑_i=1^m log (n_i!) - ∑_i=1^m∑_j=1^J log (Y_ij!)

where π̅_i = (π_i1, …, π_iJ)^T = (π_i^T, π_iJ)^T and logπ̅_i=(logπ_i1, ⋯, logπ_iJ)^T. Recall that g_1^-1, …, g_J-1^-1 are all differentiable. Then the score vector is

∂ l/∂θ^T = ∑_i=1^m 𝐘_i^T diag(π̅_i)^-1 ∂π̅_i/∂θ^T

with

∂π̅_i/∂θ^T = ∂π̅_i/∂ρ_i^T · ∂ρ_i/∂η_i^T · ∂η_i/∂θ^T = ∂π̅_i/∂ρ_i^T · diag((𝐠^-1)'(η_i)) · 𝐗_i

where ρ_i = (ρ_i1, …, ρ_i,J-1)^T ∈ℝ^J-1, η_i = (η_i1, …, η_i,J-1)^T ∈ℝ^J-1, and diag((𝐠^-1)'(η_i)) = diag{(g_1^-1)'(η_i1), …, (g_J-1^-1)'(η_i,J-1)} ∈ℝ^(J-1)× (J-1). As for ∂π̅_i/∂ρ_i^T, Lemma <ref> provides a formula for ∂π_i/∂ρ_i^T.

Suppose θ∈Θ. Then ∂π_i/∂ρ_i^T = (I_J-1 - π_i 1_J-1^T) 𝐃_i^-1 · diag(𝐋π_i) · diag(ρ_i^-2)

Proof of Lemma <ref>: Applying the chain rule of vector differentiation and matrix differentials (see, for example, Chapter 17 in <cit.>) to (<ref>), we obtain

∂π_i/∂ρ_i^T = ∂π_i/∂ (𝐃_i^-1𝐛)^T · ∂ (𝐃_i^-1𝐛)/∂ρ_i^T = 1/(1 + 1^T_J-1𝐃_i^-1𝐛) (I_J-1 - π_i 1_J-1^T) · 𝐃_i^-1 diag(𝐋𝐃_i^-1𝐛) diag(ρ_i^-2) = (I_J-1 - π_i 1_J-1^T) · 𝐃_i^-1 diag(𝐋π_i) diag(ρ_i^-2)

Since π_iJ = 1 - 1_J-1^T π_i,

∂π_iJ/∂ρ_i^T = -1_J-1^T ∂π_i/∂ρ_i^T = (0_J-1^T - π_iJ 1_J-1^T) 𝐃_i^-1 · diag(𝐋π_i) · diag(ρ_i^-2)

By combining ∂π_i/∂ρ_i^T and ∂π_iJ/∂ρ_i^T, we obtain

∂π̅_i/∂ρ_i^T = 𝐄_i 𝐃_i^-1 · diag(𝐋π_i) · diag(ρ_i^-2)

where 𝐄_i is defined in (<ref>).
The following lemma is also needed: π̅_i^T diag(π̅_i)^-1 𝐄_i = 1_J^T 𝐄_i = 0.

Proof of Lemma <ref>: Since π_i1 + ⋯ + π_iJ =1, then π̅_i^T diag(π̅_i)^-1 𝐄_i = 1_J^T 𝐄_i = 1_J-1^T - 1_J-1^T = 0_J-1^T.

Since the product of two diagonal matrices commutes, we have

diag(𝐋π_i) · diag(ρ_i^-2) = diag((1-ρ_i)/ρ_i) · diag(𝐋π_i) · diag(ρ_i (1-ρ_i))^-1

where diag((1-ρ_i)/ρ_i) = diag{(1-ρ_i1)/ρ_i1, …, (1-ρ_i,J-1)/ρ_i,J-1} and diag(ρ_i (1-ρ_i))^-1 = diag{ρ_i1^-1 (1-ρ_i1)^-1, …, ρ_i,J-1^-1 (1-ρ_i,J-1)^-1}. Thus an equivalent formula of (<ref>) is

∂π̅_i/∂ρ_i^T = 𝐄_i 𝐃_i^-1 · diag((1-ρ_i)/ρ_i) · diag(𝐋π_i) · diag(ρ_i (1-ρ_i))^-1

It can be verified that 𝐄_i 𝐃_i^-1 · diag((1-ρ_i)/ρ_i) · diag(𝐋π_i) is consistent with the first (J-1) columns of (𝐂^T 𝐃_i^-1𝐋)^-1 in <cit.> for multinomial logistic models, although their notations 𝐃_i and 𝐋 are different from those used here. Therefore, Lemma S.5 in the Supplementary Materials of <cit.> is a direct conclusion of Lemma <ref> here. As another direct conclusion of Lemma <ref>,

E(∂ l/∂θ^T) = ∑_i=1^m n_i π̅_i^T diag(π̅_i)^-1 ∂π̅_i/∂θ^T = 0

§ SUMMARY OF NOTATIONS FOR MULTINOMIAL LINK MODELS

In this section, we summarize the notations for specifying a multinomial link model. A general multinomial link model takes its matrix form as in (<ref>) or its equation form as in (<ref>). It consists of two components. The left-hand side g_j(𝐋^T_j π_i/(𝐑^T_j π_i + π_iJ b_j)) of (<ref>) indicates whether the model is a baseline-category mixed-link model (see Section <ref>), a cumulative mixed-link model (see Section <ref>), an adjacent-categories mixed-link model (see Section <ref>), a continuation-ratio mixed-link model (see Section <ref>), a baseline-cumulative mixed-link model (see Section <ref>), a baseline-adjacent mixed-link model (see Section <ref>), a baseline-continuation mixed-link model (see Section <ref>), or others. The right-hand side 𝐟_j^T(𝐱_i) θ of (<ref>) indicates that the model is with proportional odds (po, with β_j+𝐡_c^T(𝐱_i)ζ, j=1, …, J-1), nonproportional odds (npo, with 𝐡_j^T(𝐱_i)β_j), partial proportional odds (ppo, with 𝐡_j^T(𝐱_i)β_j+𝐡_c^T(𝐱_i)ζ, see Example <ref>), po-npo mixture (with 𝐡_j^T(𝐱_i)β_j+𝐡_cj^T(𝐱_i)ζ, see Example <ref>), or other structures. Overall, the model is called, for example, a cumulative mixed-link model with proportional odds, a baseline-cumulative mixed-link model with po-npo mixture, etc. To specify such a model, we need to know

(i) Constant matrices 𝐋, 𝐑 ∈ℝ^(J-1)× (J-1) and constant vector 𝐛 ∈ℝ^J-1;
(ii) Link functions g_1, …, g_J-1, as well as their inverses g_1^-1, …, g_J-1^-1 and the corresponding first-order derivatives (g_1^-1)', …, (g_J-1^-1)' (see Table <ref> for relevant formulae);
(iii) Predictor functions 𝐟_j(𝐱_i) = (f_j1(𝐱_i), …, f_jp(𝐱_i))^T with θ = (θ_1, …, θ_p)^T in general; 𝐡_j(𝐱_i) = (h_j1(𝐱_i), …, h_jp_j(𝐱_i))^T, j=1, …, J-1 and 𝐡_c(𝐱_i) = (h_1(𝐱_i), …, h_p_c(𝐱_i))^T or 𝐡_cj(𝐱_i) = (h_cj1(𝐱_i), …, h_cjp_c(𝐱_i))^T with parameters θ = (β_1^T, …, β_J-1^T, ζ^T)^T ∈ℝ^p_1 + ⋯ + p_J-1 + p_c for ppo models or po-npo mixture models; 𝐡_j(𝐱_i) ≡ 1, j=1, …, J-1, p_1=⋯ = p_J-1=1 with θ = (β_1, …, β_J-1, ζ^T)^T ∈ℝ^J-1+p_c for po models; 𝐡_c(𝐱_i) ≡ 0, p_c=0 and θ = (β_1^T, …, β_J-1^T)^T ∈ℝ^p_1 + ⋯ + p_J-1 for npo models.

Once (i), (ii) and (iii) are given, the model is specified. We can further calculate

(iv) The model matrix 𝐗_i ∈ℝ^(J-1)× p according to (<ref>) for general models.
Special cases include (<ref>) for the po-npo mixture model (see Example <ref>), (<ref>) for the ppo model,

𝐗_i = [ 1 𝐱_i^T; ⋱ ⋮; 1 𝐱_i^T ] ∈ℝ^(J-1) × (d+J-1)

for main-effects po models (in this case, p_1 = ⋯ = p_J-1=1, p_c = d, p=d+J-1), and

𝐗_i = [ 1 𝐱_i^T; ⋱; 1 𝐱_i^T ] ∈ℝ^(J-1) × (d+1)(J-1)

for main-effects npo models (in this case, p_1 = ⋯ = p_J-1 = d+1, p_c=0, p=(d+1)(J-1)).

§ PROOFS

Proof of Lemma <ref>: Since ρ_i = (𝐋π_i)/(𝐑π_i + π_iJ𝐛) and π_iJ = 1 - π_i1 - ⋯ -π_i,J-1 = 1 - 1_J-1^T π_i, we have

𝐋π_i = diag(ρ_i) (𝐑π_i + π_iJ𝐛) ⟺ diag(ρ_i^-1) 𝐋π_i = 𝐑π_i + π_iJ𝐛 = 𝐑π_i - 𝐛1_J-1^T π_i + 𝐛 ⟺ [ diag(ρ_i^-1) 𝐋 - 𝐑 + 𝐛1_J-1^T ] π_i = 𝐛 ⟺ (𝐃_i + 𝐛1_J-1^T) π_i = 𝐛

According to the Sherman-Morrison-Woodbury formula (see, for example, Section 2.1.4 in <cit.>), (𝐃_i + 𝐛1_J-1^T)^-1 exists if 𝐃_i^-1 exists and 1 + 1_J-1^T 𝐃_i^-1𝐛 ≠ 0, which is guaranteed since all components of 𝐃_i^-1𝐛 are positive, and thus

π_i = (𝐃_i + 𝐛1_J-1^T)^-1𝐛 = [𝐃_i^-1 - 𝐃_i^-1𝐛(1 + 1_J-1^T 𝐃_i^-1𝐛)^-11_J-1^T 𝐃_i^-1] 𝐛 = 𝐃_i^-1𝐛 - 𝐃_i^-1𝐛·(1 + 1_J-1^T 𝐃_i^-1𝐛)^-11_J-1^T 𝐃_i^-1𝐛 = 𝐃_i^-1𝐛/(1 + 1_J-1^T 𝐃_i^-1𝐛)

The remaining conclusions are straightforward.

Proof of Theorem <ref>: For baseline-category, adjacent-categories, and continuation-ratio mixed-link models, 𝐃_i^-1 always exists and all coordinates of 𝐃_i^-1𝐛 are positive (see Section <ref>), which implies Θ = ℝ^p for these models. For cumulative mixed-link models (see Section <ref>), 𝐃_i^-1 always exists, and

𝐃_i^-1𝐛 = (1-ρ_i,J-1)^-1 (ρ_i1, ρ_i2 - ρ_i1, …, ρ_i,J-2 - ρ_i,J-3, ρ_i,J-1 - ρ_i,J-2)^T

Since ρ_ij = g_j^-1(𝐡_j^T(𝐱_i)β_j+𝐡_c^T(𝐱_i)ζ) ∈ (0,1), Θ = {θ∈ℝ^p | ρ_i1 < ⋯ < ρ_i,J-1, i=1, …, m}. If g_1 = ⋯ = g_J-1 = g is strictly increasing, then ρ_ij < ρ_i,j+1 is equivalent to 𝐡_j^T(𝐱_i)β_j < 𝐡_j+1^T(𝐱_i)β_j+1; if g is strictly decreasing, then ρ_ij < ρ_i,j+1 is equivalent to 𝐡_j^T(𝐱_i)β_j > 𝐡_j+1^T(𝐱_i)β_j+1. Then the conclusions follow.

Proof of Theorem <ref>: As described in Section <ref>, the score vector is

∂ l/∂θ^T = ∑_i=1^m 𝐘_i^T diag(π̅_i)^-1 ∂π̅_i/∂θ^T

with

∂π̅_i/∂θ^T = ∂π̅_i/∂ρ_i^T · ∂ρ_i/∂η_i^T · ∂η_i/∂θ^T = ∂π̅_i/∂ρ_i^T · diag((𝐠^-1)'(η_i)) · 𝐗_i

and (see Lemma <ref> in Section <ref>)

∂π̅_i/∂ρ_i^T = 𝐄_i 𝐃_i^-1 · diag(𝐋π_i) · diag(ρ_i^-2)

Then

∂π̅_i/∂θ^T = 𝐄_i 𝐃_i^-1 · diag(𝐋π_i) · diag(ρ_i^-2) · diag((𝐠^-1)'(η_i)) · 𝐗_i

As another direct conclusion of Lemma <ref> in Section <ref>,

E(∂ l/∂θ^T) = ∑_i=1^m n_i π̅_i^T diag(π̅_i)^-1 ∂π̅_i/∂θ^T = 0

Since E(∂ l/∂θ^T) = 0, the Fisher information matrix (see, for example, Schervish (1995, Section 2.3.1)) can be defined as

𝐅 = E(∂ l/∂θ · ∂ l/∂θ^T) = E(∑_i=1^m (∂π̅_i/∂θ^T)^T diag(π̅_i)^-1 𝐘_i · ∑_j=1^m 𝐘_j^T diag(π̅_j)^-1 ∂π̅_j/∂θ^T) = E(∑_i=1^m∑_j=1^m (∂π̅_i/∂θ^T)^T diag(π̅_i)^-1 𝐘_i𝐘_j^T diag(π̅_j)^-1 ∂π̅_j/∂θ^T) = ∑_i=1^m∑_j=1^m (∂π̅_i/∂θ^T)^T diag(π̅_i)^-1 E(𝐘_i𝐘_j^T) diag(π̅_j)^-1 ∂π̅_j/∂θ^T

where

E(𝐘_i𝐘_j^T) = {[ n_i(n_i-1)π̅_iπ̅_i^T+n_i diag(π̅_i) i = j; n_in_jπ̅_iπ̅_j^T i≠ j ].

Since, according to Lemma <ref>, π̅_i^T diag(π̅_i)^-1 · ∂π̅_i/∂θ^T = 0, the Fisher information matrix reduces to

𝐅 = ∑_i=1^m (∂π̅_i/∂θ^T)^T diag(π̅_i)^-1 · n_i diag(π̅_i) · diag(π̅_i)^-1 ∂π̅_i/∂θ^T = ∑_i=1^m n_i (∂π̅_i/∂θ^T)^T diag(π̅_i)^-1 ∂π̅_i/∂θ^T

Proof of Lemma <ref>: Since θ∈Θ, 𝐃_i^-1 exists and is nonsingular. Due to ρ_ij ∈ (0,1), j=1, …, J-1, all coordinates of 𝐋π_i are nonzero and both diag(𝐋π_i) and diag(ρ_i^-2) are nonsingular. Since (g_j^-1)'(η_ij) ≠ 0, j=1, …, J-1, diag((𝐠^-1)'(η_i)) is nonsingular as well. According to Theorem <ref>, the only thing left is to verify that 𝐄_i^T diag(π̅_i)^-1 𝐄_i is nonsingular.
Actually, it can be verified that

| 𝐄_i^T diag(π̅_i)^-1 𝐄_i | = | diag(π_i)^-1 - 1_J-1 1_J-1^T | = π_iJ ∏_l=1^J-1 π_il^-1 ≠ 0

A lemma needed for the proof of Theorem <ref>: Suppose θ∈Θ, (g_j^-1)'(η_ij) ≠ 0 and n_i > 0 for all i=1, …, m and j=1, …, J-1. Then 𝐔 is positive definite and |𝐔| = (∏_i=1^m n_i)^J-1 · ∏_i=1^m |𝐔_i|, where

|𝐔_i| = [∏_l=1^J-1 (g_l^-1)'(η_il)]^2 · ∏_l=1^J-1 ρ_il^-4 · |𝐃_i^-1|^2 · | diag(𝐋π_i)|^2 · π_iJ ∏_l=1^J-1 π_il^-1

Proof of Lemma <ref>: We denote an mJ× m(J-1) matrix 𝐂̃ = (diag{𝐜_11, …, 𝐜_m1}, …, diag{𝐜_1,J-1, …, 𝐜_m,J-1}) and an mJ× mJ matrix 𝐖̃ = diag{ n_1 diag(π̅_1)^-1, …, n_m diag(π̅_m)^-1}. Then 𝐔 = 𝐂̃^T 𝐖̃𝐂̃. If θ∈Θ and n_i > 0 for all i=1, …, m, then 𝐖̃ is positive definite. By rearranging columns, we can verify that rank(𝐂̃) = rank(diag{𝐂_1, …, 𝐂_m}) = m(J-1), given that (g_j^-1)'(η_ij) ≠ 0 for all i=1, …, m and j=1, …, J-1. That is, 𝐂̃ is of full rank, and thus 𝐔 is positive definite. As a direct conclusion of Theorem S.4 in the Supplementary Materials of <cit.>, |𝐔| = (∏_i=1^m n_i)^J-1 · ∏_i=1^m |𝐔_i|, where 𝐔_i = 𝐂_i^T diag(π̅_i)^-1 𝐂_i in our case. Then

|𝐔_i| = | diag((𝐠^-1)'(η_i)) |^2 · | diag(ρ_i^-2) |^2 · |𝐃_i^-1|^2 · | diag(𝐋π_i)|^2 · |𝐄_i^T diag(π̅_i)^-1 𝐄_i| = [∏_l=1^J-1 (g_l^-1)'(η_il)]^2 · ∏_l=1^J-1 ρ_il^-4 · |𝐃_i^-1|^2 · | diag(𝐋π_i)|^2 · π_iJ ∏_l=1^J-1 π_il^-1

Proof of Theorem <ref>: According to Theorem <ref>, 𝐅=∑_i=1^m n_i𝐅_i with

𝐅_i = (∂π̅_i/∂θ^T)^T diag(π̅_i)^-1 ∂π̅_i/∂θ^T = (𝐂_i 𝐗_i)^T diag(π̅_i)^-1 𝐂_i 𝐗_i = 𝐗_i^T 𝐔_i 𝐗_i

where 𝐗_i = (f_jl(𝐱_i)) ∈ℝ^(J-1)× p and 𝐔_i = (u_st(π_i))_s,t=1, …, J-1. Then 𝐅_i can be rewritten as

𝐅_i = [[ ∑_l=1^J-1∑_j=1^J-1 f_j1(𝐱_i) u_jl(π_i) f_l1(𝐱_i) ⋯ ∑_l=1^J-1∑_j=1^J-1 f_j1(𝐱_i) u_jl(π_i) f_lp(𝐱_i); ⋮ ⋯ ⋮; ∑_l=1^J-1∑_j=1^J-1 f_jp(𝐱_i) u_jl(π_i) f_l1(𝐱_i) ⋯ ∑_l=1^J-1∑_j=1^J-1 f_jp(𝐱_i) u_jl(π_i) f_lp(𝐱_i) ]]

Using (<ref>) and 𝐔 = (𝐔_st)_s,t=1, …, J-1 with 𝐔_st = diag{n_1 u_st(π_1), …, n_m u_st(π_m)}, it can be verified that

𝐇𝐔𝐇^T = [[ ∑_l=1^J-1∑_j=1^J-1 𝐅_j1^T 𝐔_jl𝐅_l1 ⋯ ∑_l=1^J-1∑_j=1^J-1 𝐅_j1^T 𝐔_jl𝐅_lp; ⋮ ⋯ ⋮; ∑_l=1^J-1∑_j=1^J-1 𝐅_jp^T 𝐔_jl𝐅_l1 ⋯ ∑_l=1^J-1∑_j=1^J-1 𝐅_jp^T 𝐔_jl𝐅_lp ]] = [[ ∑_i=1^m ∑_l=1^J-1∑_j=1^J-1 n_i f_j1(𝐱_i) u_jl(π_i) f_l1(𝐱_i) ⋯ ∑_i=1^m ∑_l=1^J-1∑_j=1^J-1 n_i f_j1(𝐱_i) u_jl(π_i) f_lp(𝐱_i); ⋮ ⋯ ⋮; ∑_i=1^m ∑_l=1^J-1∑_j=1^J-1 n_i f_jp(𝐱_i) u_jl(π_i) f_l1(𝐱_i) ⋯ ∑_i=1^m ∑_l=1^J-1∑_j=1^J-1 n_i f_jp(𝐱_i) u_jl(π_i) f_lp(𝐱_i) ]] = ∑_i=1^m n_i 𝐅_i = 𝐅

The remaining statement is a direct conclusion of Lemma <ref>. | http://arxiv.org/abs/2312.16260v1 | {
"authors": [
"Tianmeng Wang",
"Liping Tong",
"Jie Yang"
],
"categories": [
"stat.ME"
],
"primary_category": "stat.ME",
"published": "20231226055613",
"title": "Multinomial Link Models"
} |
BGK subgrid model for neutrino quantum kinetics

Masamichi Zaizen
January 14, 2024

LA-UR-23-34188
[email protected]
Division of Science, National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan
Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM 87545, USA
Faculty of Science and Engineering, Waseda University, Tokyo 169-8555, Japan

We present a new subgrid model for neutrino quantum kinetics, which is primarily designed to incorporate effects of collective neutrino oscillations into neutrino-radiation-hydrodynamic simulations for core-collapse supernovae and mergers of compact objects. We approximate the neutrino oscillation term in the quantum kinetic equation by a Bhatnagar–Gross–Krook (BGK) relaxation-time prescription, and the resulting transport equation is directly applicable to classical neutrino transport schemes. The BGK model is motivated by recent theoretical indications that non-linear phases of collective neutrino oscillations settle into quasi-steady structures. We explicitly provide the basic equations of the BGK subgrid model for both multi-angle and moment-based neutrino transport to facilitate the implementation of the subgrid model in existing neutrino transport schemes. We also show the capability of our BGK subgrid model by comparing to fully quantum kinetic simulations for fast neutrino-flavor conversion. We find that the overall properties can be well reproduced by the subgrid model; the error of the angular-averaged survival probability of neutrinos is within ∼ 20 %. By identifying the source of error, we also discuss perspectives to improve the accuracy of the subgrid model.

§ INTRODUCTION

Astrophysical phenomena usually involve intricately intertwined multiphysics. Direct numerical simulation is an effective tool to study the physical mechanisms behind these complex phenomena, and also to provide theoretical models for interpretations of observed data. Ofttimes, however, the temporal and spatial scales among different physical processes span many orders of magnitude, rendering first-principles simulations prohibitively computationally expensive. This exhibits the need for approximations or coarse-grained approaches.

It has been recognized for many years that neutrino quantum kinetics in core-collapse supernovae (CCSNe) and mergers of compact objects, represented by binary neutron star mergers (BNSMs), corresponds to such a problem requiring coarse-grained treatments (see reviews in <cit.>). Neutrino flavor conversion is a representative quantum feature, and various types of neutrino flavor conversions associated with neutrino self-interactions occur in CCSNe <cit.> and BNSMs <cit.>. On the other hand, the length scale of flavor conversions is far smaller than the astrophysical scale, making first-principles simulations intractable. Although neutrino-radiation-hydrodynamic simulations have matured significantly, one should keep in mind that large uncertainties still remain concerning impacts of neutrino flavor conversions, even in the current state-of-the-art numerical simulations. Since neutrino-matter interactions depend on neutrino flavors, flavor conversions change the feedback to the fluid dynamics <cit.> and also nucleosynthesis <cit.>.
We also note that the dynamics of flavor conversion and its asymptotic behavior hinge on global advection of neutrinos <cit.>, exhibiting that global neutrino-radiation-hydrodynamic simulations incorporating effects of flavor conversions are mandatory to study the astrophysical consequences of flavor conversions.

There are several respectable previous works that incorporate effects of neutrino flavor conversion in global neutrino-radiation-hydrodynamic simulations of CCSNe and BNSMs <cit.>. Although the details vary, they commonly add a neutrino-mixing prescription on top of their classical neutrino transport schemes, in which they shuffle neutrino flavors one way or another. It should be noted that all mixing schemes employ rather phenomenological treatments and, hence, these results need to be considered provisional. This is mainly because the current implementation of flavor conversion in their codes is rather schematic, which does not allow robust conclusions to be drawn about impacts of flavor conversions. Improving their neutrino mixing schemes is obviously needed, but it is very hard within the proposed approaches. More importantly, it is not clear how we can feed the results of fully quantum kinetic neutrino simulations back into these phenomenological models. This paper is meant to address this issue and to provide a new way to fill the gap between phenomenological and first-principles simulations.

In this paper, we propose another coarse-grained neutrino transport approach: subgrid-scale modeling for neutrino flavor conversions. We distinguish our method from other phenomenological approaches, since the method is designed so as to reproduce the spatially- and time-averaged features of neutrino flavor conversions obtained from quantum kinetic neutrino simulations. A noticeable advantage of our subgrid model is that its formulation for the dynamics of flavor conversions can be refined in various ways, including analytic methods <cit.> and artificial intelligence (AI) techniques <cit.>. In this paper, we also demonstrate classical neutrino transport simulations with the subgrid model, in which we employ a simple but physically motivated model for flavor conversions.

This paper is organized as follows. In Sec. <ref>, we start with explaining the philosophy of our proposed method. We then provide the quantum kinetic equation with our subgrid model. We also provide its two-moment formalism in Sec. <ref>. These transport equations are written in terms of the 3+1 general relativistic formulation, which would be helpful for those who work on CCSN and BNSM simulations. After we discuss some details of the method in Sec. <ref>, we highlight novelties of our subgrid model by comparing to other phenomenological approaches in Sec. <ref>. In Sec. <ref>, we also discuss the relevance to another coarse-grained approach: miscidynamics <cit.>. As shall be shown in that section, this formulation is closely associated with ours, indicating that both approaches are complementary to each other. To show the capability of our subgrid model, we demonstrate numerical simulations using both quantum kinetic neutrino transport and classical transport with the subgrid model, paying attention to fast neutrino-flavor conversion (FFC), in Sec. <ref>. By comparing their results, we can learn the source of error in the subgrid model. We then discuss strategies for improving it based on studies of quantum kinetic neutrino transport. Finally, we summarize our work in Sec. <ref>.
Unless otherwise stated, we work in units with c = ħ = 1, where c and ħ are the speed of light and the reduced Planck constant, respectively. In this paper, we describe all equations with the metric signature - + + +.

§ BASIC EQUATION FOR NEUTRINO TRANSPORT WITH BGK SUBGRID MODELING

It has been discussed that neutrino flavor conversions have quasi-steady and asymptotic behaviors in the non-linear phase <cit.>, or quasi-periodic properties represented as pendulum motions in flavor space <cit.>. We are interested in the time- and spatially averaged states in the late non-linear phase, since it is unlikely that fine structures with short-time or small-length variations affect astrophysical consequences. Motivated by these studies, we assume that flavor conversions make the radiation field settle into an asymptotic state, and the asymptotic density matrix of neutrinos is denoted by f^a. In general, the non-linear evolution of flavor conversions is very complex, and the details hinge on flavor instabilities, neutrino-matter interactions, and global geometries of radiation fields. On the other hand, there is always a characteristic timescale of flavor conversions or associated flavor instabilities, which is denoted by τ_a in the following discussion. We note that the timescale depends on neutrino energy, angle, and neutrino flavor. τ_a also provides a rough estimate of the timescale over which the density matrix of neutrinos settles into f^a.

The quantum kinetic equation (QKE) for neutrino transport can be written as

p^μ∂ f/∂ x^μ + dp^i/dτ ∂ f/∂ p^i = - p^μ u_μ S + i p^μ n_μ [H,f],

where f denotes the density matrix of neutrinos. In the expression, p^μ, x^μ, and τ denote the neutrino four-momentum, spacetime coordinates, and affine parameter along neutrino trajectories, respectively. u^μ, n^μ, S, and H appearing on the right-hand side of Eq. <ref> represent the four-velocity of the fluid, the unit vector normal to the spatial hypersurface in four-dimensional spacetime, the collision term, and the neutrino oscillation Hamiltonian, respectively. Below, we approximate Eq. <ref> by using f^a and τ_a.

Our subgrid model is developed based on the assumption that the neutrino distributions are relaxed to f^a by flavor conversions on the timescale τ_a. This corresponds to the relaxation-time approximation proposed by Bhatnagar–Gross–Krook (BGK) <cit.>, in which the approximation is applied to the collision term of the Boltzmann equation for gas dynamics. In our BGK subgrid model, we apply the prescription to the neutrino oscillation Hamiltonian (the second term on the right-hand side of Eq. <ref>),

p^μ∂ f/∂ x^μ + dp^i/dτ ∂ f/∂ p^i = - p^μ u_μ S + p^μ n_μ 1/τ_a ( f - f^a ).

We note that the relaxation time (τ_a) is measured in the laboratory (or n) frame, but it can be redefined with respect to the fluid rest frame (see also <cit.>), which may be useful for the frequently used two-moment formalism for neutrino transport (see Sec. <ref>). It should also be noted that f^a and τ_a are determined from f at each time step, implying that they are time-dependent quantities. It is worth noting that a similar approximation was used to obtain a temporally coarse-grained quantum kinetic equation for the production of sterile neutrinos (see Eqs. 4 and 5 of <cit.>). There it was proposed that the entire right-hand side, including both oscillation and collision terms, be treated using a BGK approximation. This ansatz showed excellent agreement with numerical results.
Here we adapt the relaxation-time approximation to the context of collective neutrino oscillations by proposing that it can be applied to oscillations alone, with subgrid relaxation being caused by collective modes rather than collisions.

From a practical point of view, we also provide a conservative form of Eq. <ref>, which is used in numerical simulations of both Boltzmann and quantum kinetic neutrino transport (see, e.g., <cit.>). Following <cit.>, we can rewrite the transport equation as

1/√(-g) . ∂/∂ x^α|_q_i [( n^α + ∑^3_i=1 ℓ_i e^α_(i)) √(-g) f ] - 1/ε^2 ∂/∂ε( ε^3 f ω_(0)) + 1/sinθ_ν ∂/∂θ_ν ( sinθ_ν f ω_(θ_ν) ) + 1/sin^2 θ_ν ∂/∂ϕ_ν (fω_(ϕ_ν)) = D S - 1/τ_a ( f - f^a ).

In the expression, ε and g are the neutrino energy measured by the e^α_(0)=n^α observer, i.e., ε≡ - p_α n^α, and the determinant of the four-dimensional metric, respectively. θ_ν and ϕ_ν denote the neutrino flight direction in the laboratory (or n) frame. e^α_(i) (i = 1, 2, 3) denote a set of (spatial) tetrad bases normal to n. D on the right-hand side of Eq. <ref> represents the effective Doppler factor, defined as D ≡ν / ε with ν≡ - p^μ u_μ, where ν denotes the neutrino energy measured in the fluid rest frame. ω_(0), ω_(θ_ν), ω_(ϕ_ν) appearing on the left-hand side of Eq. <ref> can be written as

ω_(0) ≡ε^-2 p^α p_β∇_α n^β, ω_(θ_ν) ≡∑^3_i=1 ω_i ∂ℓ_(i)/∂θ_ν, ω_(ϕ_ν) ≡∑^3_i=2 ω_i ∂ℓ_(i)/∂ϕ_ν, ω_i ≡ε^-2 p^α p_β∇_α e^β_(i).

Spherical polar coordinates are often employed in multi-angle neutrino transport codes (see e.g., <cit.>). We, hence, choose a set of tetrad bases, _(i), as

e^α_(1) = (0, γ^-1/2_rr, 0, 0 )
e^α_(2) = (0, -γ^-1/2_r θ/√(γ_rr (γ_rrγ_θθ - γ^2_r θ)), √(γ_rr/(γ_rrγ_θθ - γ^2_r θ)), 0 )
e^α_(3) = (0, γ^r ϕ/√(γ^ϕϕ), γ^θϕ/√(γ^ϕϕ), √(γ^ϕϕ)),

where γ^αβ ≡ g^αβ + n^α n^β. One thing we notice here is that Eq. <ref> (or <ref>) reduces to a classical transport equation if we neglect the off-diagonal elements. Since the main purpose of this study is to provide a subgrid model of neutrino flavor conversion for classical neutrino transport schemes, we limit our discussion to classical transport with the BGK subgrid model. One should keep in mind that the subgrid model can also be applied to neutrino quantum kinetics, and appropriate modeling of the off-diagonal components would increase the physical fidelity of the subgrid model. This is an intriguing possibility and deserves further investigation, although we postpone that study to future work.

Below, let us consider how to determine the diagonal components of f^a. It is well known that the lepton number of neutrinos/antineutrinos does not change during flavor conversions. This indicates that we can characterize f^a via the survival probability of neutrinos (η), which depends on neutrino energy and flight angle in general. Following the prescriptions in <cit.>, we can write f^a in terms of f as

f_e^a = η f_e + (1- η) f_x ,
f_x^a = 1/2(1- η) f_e + 1/2(1+η) f_x ,

where f_e and f_x represent distribution functions (or diagonal elements of the density matrix) for electron-type and heavy-leptonic-type neutrinos, respectively. We note that μ and τ neutrinos are assumed to be identical in Eq. <ref>, which is a reasonable assumption for CCSNe and BNSMs. However, they are quantitatively different from each other, in particular for high-energy neutrinos (see, e.g., <cit.>), due to high-order corrections in neutrino-matter interactions (e.g., weak magnetism <cit.>). We also note that, if on-shell muons appear <cit.>, we should distinguish μ and τ neutrinos.
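To make this prescription concrete, the following minimal Python sketch (our illustration only, with hypothetical array names; it is not taken from any production transport code) applies the mixing formulae above to discretized f_e and f_x and verifies that the flavor-summed occupation f_e + 2 f_x is preserved for any choice of η:

```python
import numpy as np

def asymptotic_state(f_e, f_x, eta):
    """Mix electron- and heavy-lepton-type distributions according to
    the survival probability eta (a scalar or a per-angle/energy array);
    mu- and tau-type neutrinos are assumed identical."""
    f_e_a = eta * f_e + (1.0 - eta) * f_x
    f_x_a = 0.5 * (1.0 - eta) * f_e + 0.5 * (1.0 + eta) * f_x
    return f_e_a, f_x_a

# toy angular grid: forward-peaked nu_e over a dilute nu_x background
mu = np.linspace(-1.0, 1.0, 64)              # cos(theta_nu)
f_e = 1.0 + 0.8 * np.clip(mu, 0.0, None)
f_x = 0.1 * np.ones_like(mu)

f_e_a, f_x_a = asymptotic_state(f_e, f_x, eta=1.0 / 3.0)
# lepton-number bookkeeping: f_e + 2 f_x is conserved for any eta
assert np.allclose(f_e + 2.0 * f_x, f_e_a + 2.0 * f_x_a)
```

Note that η = 1/3 yields f_e^a = f_x^a, i.e., complete flavor equipartition, while η = 1 leaves the distributions unchanged.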
Returning to the μ-τ distinction mentioned above, we can deal with these cases by introducing another parameter to represent neutrino mixing. For antineutrinos, we can use the same form as Eq. <ref> but with f and η replaced by f̅ and η̅, respectively.

There are two important remarks about our BGK subgrid model. First, f^a (or η) hinges on flavor instabilities, and it should be determined (or calibrated) based on neutrino quantum kinetics. It is important to note that results from analytic studies and local simulations of flavor conversions can be directly used to determine it. In Sec. <ref>, we demonstrate such simulations for FFC. Second, if the system contains multiple flavor instabilities, we can handle the problem with multiple BGK terms. More specifically, the second term on the right-hand side of Eq. <ref> can be rewritten as

p^μ n_μ 1/τ_a ( f - f^a ) → p^μ n_μ ∑_i=1^n 1/τ_a_i ( f - f^a_i )

where the index i distinguishes flavor instabilities among the n modes. This prescription may be important for realistic CCSN and BNSM models, since FFC and collisional flavor instabilities (CFI) may occur simultaneously (see, e.g., <cit.>) at the same position. The extension by Eq. <ref> allows us to study situations where multiple flavor instabilities compete with each other.

Before we discuss how to estimate τ_a in Sec. <ref>, let us describe the two-moment transport formalism for our subgrid model in the next section. This is helpful for those who use the moment formalism for numerical modeling of CCSNe and BNSMs.

§ TWO-MOMENT FORMALISM

Moment formalism of radiation transport has, in principle, the ability to describe full neutrino kinetics at a level equivalent to Boltzmann (or fully quantum kinetic) neutrino transport. In practice, however, the moment formalism results in an infinite hierarchy of coupled equations, indicating that we need to truncate the hierarchy of moments at a certain rank. The currently most popular approach in neutrino transport simulations is the two-moment formalism <cit.>, in which the zeroth and first angular moments correspond to the fundamental variables. We determine their time evolution and spatial distributions by solving their coupled equations, while higher-rank moments are complemented by closure relations. It is worth noting that the moment formalism is also used for the study of neutrino flavor conversions <cit.>. In this section, we provide an explicit description of the two-moment formalism with the BGK subgrid model.

Following the convention of <cit.>, we decompose the neutrino four-momentum (p^α) into u^α and its orthogonal unit vector (ℓ^α) as

p^α = ν (u^α + ℓ^α),

where the conditions ℓ^α u_α=0 and ℓ^αℓ_α = 1 are satisfied. The unprojected second- and third-rank moments of neutrinos are defined as (see also <cit.>)

M^αβ ≡ν^3 ∫ f (u^α + ℓ^α) (u^β + ℓ^β) d Ω ,
M^αβγ ≡ν^3 ∫ f (u^α + ℓ^α) (u^β + ℓ^β) (u^γ + ℓ^γ) d Ω ,

where Ω denotes the solid angle of neutrino momentum space defined in the fluid rest frame. It should be mentioned that the integral of M^αβ over the neutrino energy (∫ M^αβ d ν) corresponds to the energy-momentum tensor of neutrinos. We also define the angular moments in the fluid rest frame as

J ≡ν^3∫ f d Ω ,
H^α ≡ν^3∫ℓ^α f d Ω,
L^αβ ≡ν^3∫ℓ^αℓ^β f d Ω,
N^αβγ ≡ν^3∫ℓ^αℓ^βℓ^γ f d Ω.

By using these variables, the basic equation for the two-moment formalism with the BGK subgrid model can be written as (see also Eq. <ref>)

∇_β M^αβ - ∂/∂ν(ν M^αβγ∇_γ u_β) = S^α - W^α

where

S^α ≡ν^3 ∫ S (u^α + ℓ^α) d Ω ,
W^α ≡1/τ_a^fl ν^3 ∫ (f-f^a) (u^α + ℓ^α) d Ω,

with τ_a^fl ≡ D τ_a.
Eq. <ref> indicates that the BGK subgrid model can be implemented simply by replacing S^α → S^α - W^α in the original two-moment formalism. W^α can be expressed in a form similar to the emission-absorption part of the collision term, which can be written as

W^α = 1/τ_a^fl ( ( J - J^a ) u^α + ( H^α - H^α a ) ).

We, hence, need to determine τ_a, J^a, and H^α a to implement the BGK model. J^a and H^α a can be obtained by taking angular integrals of Eq. <ref>, and the process appears straightforward. However, η depends on Ω in general, indicating that we need higher-rank angular moments to evaluate them. Below, we provide an approximate prescription to address this issue.

We start by expanding the angular dependence of η in ℓ_α as

η = η_0 + η_1^αℓ_α + η_2^αβℓ_αℓ_β + ..... ,

where the coefficients (η_i) do not depend on Ω. By using this expression, J^a and H^α a can be written as

J_e^a = J_x + η_0 ( J_e - J_x ) + η_1^α ( H_e α - H_x α ) + η_2^αβ ( L_e αβ - L_x αβ ) + ....
H_e^α a = H_x^α + η_0 ( H_e^α - H_x^α ) + η_1^β ( L_e β^α - L_x β^α ) + η_2^βγ ( N_e βγ^α - N_x βγ^α ) + ....
J_x^a = 1/2(J_e + J_x) - η_0/2 ( J_e - J_x ) - η_1^α/2 ( H_e α - H_x α ) - η_2^αβ/2 ( L_e αβ - L_x αβ ) + ....
H_x^α a = 1/2(H_e^α + H_x^α) - η_0/2 ( H_e^α - H_x^α ) - η_1^β/2 ( L_e β^α - L_x β^α ) - η_2^βγ/2 ( N_e βγ^α - N_x βγ^α ) + ....

This method guarantees that the flavor-integrated angular moments are conserved regardless of η_i, even if we truncate the expansion at any order. Eq. <ref> exhibits that the accuracy of determining J^a and H^α a hinges on how well we can determine the coefficients η_i. In two-moment neutrino transport codes, the maximum-entropy completion <cit.> (or a fitting method proposed in <cit.>, which can be used only for CCSNe, though) may be useful to obtain physically reasonable solutions. A noticeable feature of these methods is that the full angular distributions of neutrinos are approximately reconstructed from their zeroth and first angular moments. This suggests that the angular dependence of η can also be determined in a similar way to multi-angle neutrino transport (see Sec. <ref> for more details).

Neglecting the energy dependence and anisotropic components in η, i.e., η(ν,Ω) = η_0, corresponds to the simplest case, but it would be a reasonable approximation for CFI. Since the CFI becomes important in regions where neutrinos and matter are tightly coupled, neutrinos are nearly isotropic in momentum space <cit.>. We also note that the so-called isotropy-preserving branch of the k=0 mode provides the maximum growth rate of the instability <cit.>, lending confidence to neglecting the angular dependence of η. Regarding the energy dependence, on the other hand, the authors in <cit.> found that the growth rate of CFI can be well approximated by a monochromatic-energy treatment with energy-averaged collision rates. We also note that flavor swap is accompanied by resonance-like CFI, but the dynamics does not depend on neutrino energy <cit.>, suggesting that the energy dependence is not important in these cases either. The condition η(ν,Ω) = η_0 corresponds to the simplest case for our BGK model, but it would be useful to explore qualitative trends of impacts of flavor conversions on CCSNe and BNSMs, as studied with phenomenological approaches. It should be emphasized that our subgrid model takes into account the relaxation timescale, indicating that the interplay between neutrino advection, neutrino-matter interactions, and flavor conversions would be handled more appropriately than in other phenomenological models.
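As a concrete illustration of this truncation, the sketch below (ours; the variable names are assumptions) evaluates the BGK source terms of a two-moment scheme when the expansion is kept only to the isotropic coefficient η_0. One can verify from the expressions that the flavor-integrated moments J_e + 2J_x and H_e^α + 2H_x^α are unaffected by the relaxation for any η_0:

```python
import numpy as np

def bgk_moment_source(J_e, H_e, J_x, H_x, eta0, tau_fl):
    """BGK relaxation sources for the zeroth and first moments,
    truncating the eta expansion at eta_0 (isotropic survival
    probability). H_* are flux 3-vectors in the fluid frame."""
    J_e_a = J_x + eta0 * (J_e - J_x)
    H_e_a = H_x + eta0 * (H_e - H_x)
    J_x_a = 0.5 * (J_e + J_x) - 0.5 * eta0 * (J_e - J_x)
    H_x_a = 0.5 * (H_e + H_x) - 0.5 * eta0 * (H_e - H_x)
    # relaxation sources: dJ/dt = -(J - J_a)/tau_fl, likewise for H
    return ((J_e_a - J_e) / tau_fl, (H_e_a - H_e) / tau_fl,
            (J_x_a - J_x) / tau_fl, (H_x_a - H_x) / tau_fl)

# example: equipartition target (eta0 = 1/3) with nu_x initially absent
src = bgk_moment_source(J_e=1.0, H_e=np.array([0.5, 0.0, 0.0]),
                        J_x=0.0, H_x=np.zeros(3),
                        eta0=1.0 / 3.0, tau_fl=1.0e-9)
```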
It seems that η_0=1/3 and 0 are two interesting cases, which correspond to flavor equipartition and flavor swap, respectively.

§ ESTIMATION FOR τ_a

The vigor of flavor conversion cannot be measured by f^a alone. Even if the asymptotic distribution is very different from the original non-mixing state, the flavor conversion cannot be completed if the relaxation time is very long. This exhibits that the determination of τ_a is also an important task to increase the accuracy of our subgrid model. Linear stability analysis can offer the growth rate of flavor conversion, which would provide the most accurate determination of τ_a. However, the growth rate can be obtained only by solving the dispersion relation (see, e.g., <cit.>), which is a computationally expensive task. We also note that, in the stability analysis, fully energy- and angular-dependent information on neutrinos in momentum space is required in general, but it can be obtained only by solving multi-angle and multi-energy neutrino transport, indicating that this information is not available for approximate neutrino transport. We, hence, need alternative approaches for the estimation of τ_a to suit our needs.

We can utilize some approximate approaches to the stability analysis that have been proposed in the literature. For FFC, a simple formula was provided in <cit.>. In this method, we can approximately estimate τ_a as

τ_a ∼ 2 π | (∫_G_v>0 dΓ G_v)(∫_G_v<0 dΓ G_v) |^-1/2,

where

d Γ_v ≡1/4 π d(cosθ_ν) d ϕ_ν
G_v ≡1/2 π^2 ∫( (f_e - f̅_e) - (f_x - f̅_x) ) ε^2 d ε.

In Sec. <ref>, we demonstrate neutrino transport simulations for FFC by using Eq. <ref>. It is also noteworthy that Eq. <ref> is applicable to the two-moment method by using the maximum-entropy completion <cit.> or a fitting method <cit.>, since they can approximately retrieve f from the zeroth and first angular moments. It would also be useful to employ other methods, as in <cit.>, which allow us to evaluate the growth rate of flavor conversions directly from low angular moments of neutrinos. For CFI, the growth rate can also be estimated analytically <cit.>, which is also useful for our subgrid model. We can select among these methods depending on the problem and the purpose of the study. Another remark here is that machine-learning techniques potentially provide accurate estimations of η and τ_a without significant computational burden (see, e.g., <cit.>).

§ COMPARING TO OTHER PHENOMENOLOGICAL MODELS

It would be worthwhile to highlight differences of our subgrid model from other phenomenological methods implemented in some neutrino-radiation-hydrodynamic codes. The study by <cit.> corresponds to a pioneering work for BNSM simulations with a phenomenological model of FFC, in which effects of FFC are incorporated by a parametric prescription. In their method, occurrences of FFC are identified based on a k=0 mode stability analysis. They shuffle neutrinos between ν_e, ν_μ, and ν_τ toward flavor equipartition if the timescale of flavor conversion is shorter than a critical one (which was assumed to be 10^-7 s). This indicates that their prescription of flavor conversion can be reproduced in our subgrid model by setting η=1/3 and τ_a → 0 if the growth timescale is shorter than 10^-7 s (otherwise τ_a is set to infinity).

In <cit.>, they also carried out BNSM simulations with a similar approach to <cit.>, but they study impacts of FFCs on BNSM dynamics by considering three types of neutrino mixing schemes.
Essentially, the degree of neutrino mixing varies among the schemes, while the detection criterion for occurrences of FFC is common, in which they determine FFCs only by the energy-averaged flux factor of ν̅_e. They also assumed that flavor conversions occur instantaneously (i.e., τ_a → 0 in our BGK subgrid model). This approach can also be reproduced by our subgrid model. A similar study of FFCs in BNSM has also been carried out by <cit.>. Different from <cit.>, they employed a so-called leakage scheme for neutrino transport. In their method, the neutrino transport scheme is left as the original, but they changed the estimation of neutrino luminosity by taking into account FFCs, which corresponds to a key ingredient in their scheme to give feedback from neutrinos to the fluid dynamics and ejecta compositions. They determine asymptotic neutrino luminosities by varying parameters (including cases with flavor equipartition), while they also employ neutrino opacities to determine the degree of mixing. In their approach, flavor conversions are suppressed in the optically thick region, whereas they occur in the optically thin one. Since this phenomenological model is developed based on a different philosophy from ours, our subgrid model cannot reproduce their model. Nevertheless, it is interesting to compare our subgrid model to their phenomenological model in CCSN and BNSM simulations.

Impacts of FFC on CCSN dynamics have also been studied with another phenomenological approach in <cit.>. In their method, the number of independent neutrino species is three: ν_e, ν̅_e, and ν_x, and they shuffle them so as to guarantee neutrino lepton number conservation. They employ the matter density to determine occurrences of FFC, with a threshold density below which flavor conversions occur. In the region where the matter density is lower than the threshold, they assume that neutrino flavor conversions occur instantaneously. They also assume that neutrinos are in flavor equilibrium, with ν_x and ν̅_x taken to be identical after the conversion is completed. As such, this phenomenological model is developed based on a very different approach from our subgrid one.

One of the interesting applications of our subgrid model is to assess the capability of each phenomenological model. The assessment has been impossible thus far by direct numerical simulations of quantum kinetic neutrino transport due to the extremely high computational cost, but it is feasible by using our subgrid model. This study would also help us to improve each phenomenological model.

§ COMPARING TO MISCIDYNAMICS

The coarse-grained subgrid model is compatible with the proposal to approximate neutrino quantum kinetics using neutrino quantum thermodynamics <cit.>. Taking τ_a → 0 in Eq. <ref> results in

p^μ∂ f^a/∂ x^μ + dp^i/dτ ∂ f^a/∂ p^i = - p^μ u_μ S^a,

where S^a is S evaluated using f = f^a. This equation is equivalent to the miscidynamic transport equation written down in Ref. <cit.> if f^a is equated to ρ^eq in that paper.

Miscidynamics refers to coarse-grained neutrino transport based on the concept of local mixing equilibrium. Our subgrid model does not necessarily assume that f^a is an equilibrium state in a thermodynamic sense. If we do assume this, however, then taking the limit of short relaxation time τ_a is a means of imposing local mixing equilibrium. The thermodynamic input then enters through the determination of f^a.

If τ_a → 0, neutrino flavor instantaneously equilibrates, and therefore it should never depart from equilibrium in the first place.
This is the idea behind the adiabatic proposal of Ref. <cit.>. Accepting this logic, it is then possible to determine f^a using the assumption of adiabaticity and the requirements of self-consistency. Adiabaticity relates f to the Hamiltonian, but the Hamiltonian is itself a function of f through neutrino-neutrino forward scattering, hence the need for self-consistency. In the more straightforward case of MSW flavor conversion without neutrino self-interactions, self-consistency is not required and f^a is simply determined by vacuum oscillations and neutrino-matter forward scattering.

Finite equilibration rates entail some amount of entropy production. Formulating diabatic miscidynamics, in contrast with the adiabatic version described above, would require a consideration of how subgrid degrees of freedom in the neutrino flavor field respond to grid-level changes driven by the derivative and collisional terms in Eq. <ref>. Generally speaking, if the microscopic constituents respond very quickly, then the macroscopic system moves between equilibria with minimal entropy production. Equations supplementing miscidynamics with diabatic terms have not yet been worked out. In their absence, a relaxation time τ_a is a simple and plausible approximation of diabaticity.

One subtlety in using our BGK subgrid model for diabatic miscidynamics is that f^a changes under diabatic evolution. The system heats up, and mixing equilibrium is set by the system itself rather than an external environment. Because entropy production is a subgrid effect, f^a can change on a subgrid timescale, which threatens the use of coarse-graining. However, a simple approximation is to adopt

f^a ⟶ f^a_∞, τ_a ⟶τ^∞_a,

where f^a_∞ and τ^∞_a are the t →∞ equilibrium and relaxation time. In this approximation, neutrino flavor relaxes directly toward the ultimate equilibrium state f^a_∞ rather than pursuing a time-evolving equilibrium that converges on f^a_∞ at late times. The form of Eq. <ref> is unchanged except for the replacement of f^a and τ_a by the respective asymptotic quantities.

In sum, the τ_a → 0 relaxation subgrid model can reproduce adiabatic miscidynamics. Miscidynamics can be systematically improved by calculating diabatic corrections from the statistical mechanics underlying neutrino quantum thermodynamics <cit.>. It appears that Eq. <ref> can likewise be systematically improved by adjusting f^a and τ_a to reflect these corrections.

§ DEMONSTRATION

In this section, we discuss the capabilities of our BGK subgrid model by carrying out local simulations of FFC in 1D. We select this problem because analytic schemes for determining asymptotic states of FFC have been proposed in the literature <cit.>, which can be used to compute f^a. After we describe essential information on the numerical simulations, we describe explicitly how to determine f^a.

§.§ Full quantum kinetic simulations

Here, we describe the problem under the full quantum kinetic approach. Note that the results of these simulations will be used to assess simulations with the BGK subgrid model; the details will be given in Sec. <ref>. Quantum kinetic simulations in the present study are essentially the same as those performed in <cit.>, in which we demonstrated 1D local simulations of FFCs in a two-flavor framework. One noticeable difference from the previous study is that we solve the QKE under a three-flavor framework.
Assuming spherical symmetry and no collision terms, we solve the following QKE,

∂ f/∂ t + 1/r^2 ∂/∂ r ( r^2 cosθ_ν f ) - 1/r sinθ_ν ∂/∂θ_ν ( sin^2 θ_ν f) = - i[H,f],

where

H = H_vac + H_mat + H_νν.

In this expression, f (f̅) and H (H̅) denote the density matrix of neutrinos and the oscillation Hamiltonian for neutrinos (antineutrinos), respectively. Since we only focus on local simulations in this study, neutrino advection in the θ_ν direction is basically negligible. Each term of the antineutrino Hamiltonian can be written as

H̅_vac = H^*_vac , H̅_mat = - H^*_mat , H̅_νν = - H^*_νν.

As in <cit.>, we ignore the matter potential in the Hamiltonian, but its effects are effectively taken into account in the vacuum potential (see below). The vacuum term is, on the other hand, included in our simulations, and has the following form,

H_vac = 1/2 ε U [ m^2_1 0 0; 0 m^2_2 0; 0 0 m^2_3 ] U^† ,

where m_i^2 and U denote the squared mass of the neutrino mass eigenstate i and the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix, respectively. Neutrino flavor conversions depend only on the differences of the squared masses, and we set them as Δ m_21^2 = 7.42 × 10^-5 eV^2 and Δ m_31^2 = 2.510 × 10^-3 eV^2, where Δ m_ij^2 ≡ m_i^2 - m_j^2 in this study. We effectively include the effects of matter suppression of flavor conversion by setting the neutrino mixing angles to 10^-6, which is much smaller than those constrained by experiments. It should be noted that the vacuum potential is necessary only for triggering flavor conversions, and it does not affect the non-linear evolution of FFCs. This is simply because the self-interaction potential is several orders of magnitude higher than the vacuum one, which also guarantees that FFCs overwhelm slow modes. Throughout this test, we use a monochromatic assumption with a neutrino energy of 12 MeV.

As an initial condition, we set the ν_e and ν̅_e angular distributions as

f_ee = ⟨ f_ee⟩ ( 1 + β_ee ( cosθ_ν - 0.5 )), cosθ_ν ≥ 0,

where ⟨ f_ee⟩ corresponds to the angular average. As in <cit.>, we put a dilute neutrino gas in the incoming directions (cosθ_ν ≤ 0), which plays no role in FFC. We also assume that there are no ν_μ, ν_τ, or their antipartners in the initial distributions. Following <cit.>, ⟨ f_ee⟩ is chosen so that the number density of ν_e becomes 10^32 cm^-3. We determine ⟨f̅_ee⟩ via a new variable, α, defined as

α≡⟨f̅_ee⟩/⟨ f_ee⟩ (= n̅_ee/n_ee),

where n_ee (n̅_ee) denotes the number density of ν_e (ν̅_e). In this demonstration, we study four cases by varying α and β̅_ee, while we set β_ee=1 for all models. The reference model corresponds to the case with α=1 and β̅_ee=1. We add two models by varying α (α=0.9 and 1.1), with β̅_ee the same as in the reference one. We test another model with β̅_ee=0.1, with α the same as in the reference model. It should be mentioned that the angular position of the ELN crossing hinges on α, and that β̅_ee dictates the depth of the crossing (see <cit.> for more details).

In these simulations, we focus on a spatially narrow region with 50 km ≤ r ≤ 50 km + 10 m. The radial domain and the angular (θ_ν) direction in neutrino momentum space are covered by N_r=49152 and N_θ_ν=128 uniform grid points, respectively. We employ a Dirichlet boundary condition for incoming neutrinos at each boundary position, while a free boundary condition is adopted for neutrinos escaping from the computational region. We run each simulation up to 10^-4 ms.
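For concreteness, the following short Python sketch (ours; the normalization and grid are illustrative assumptions, not the actual GRQKNT setup) constructs the initial ν_e and ν̅_e angular distributions of the four models from the parameters (α, β̅_ee):

```python
import numpy as np

def initial_f(mu, f_avg, beta):
    """Outgoing part (mu = cos(theta_nu) >= 0) of the initial angular
    distribution; the incoming side is a dynamically irrelevant dilute gas."""
    return np.where(mu >= 0.0,
                    f_avg * (1.0 + beta * (mu - 0.5)),
                    1.0e-6 * f_avg)

mu = np.linspace(-1.0, 1.0, 128)    # 128 angular grid points
f_e_avg = 1.0                       # to be normalized to n_{nu_e} = 1e32 cm^-3
models = {                          # name: (alpha, beta_bar)
    "reference":      (1.0, 1.0),
    "alpha = 0.9":    (0.9, 1.0),
    "alpha = 1.1":    (1.1, 1.0),
    "beta_bar = 0.1": (1.0, 0.1),
}
f_e = initial_f(mu, f_e_avg, beta=1.0)                 # same for all models
f_ebar = {name: initial_f(mu, alpha * f_e_avg, beta_bar)
          for name, (alpha, beta_bar) in models.items()}
```

Since the β term averages to zero over the outgoing hemisphere, the angular average (and hence the number density) of each antineutrino distribution is simply α times that of ν_e, as required by the definition of α.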
§.§ Classical simulations with BGK subgrid model

The counterpart of Eq. <ref> with our BGK subgrid model can be written as

∂ f/∂ t + 1/r^2 ∂/∂ r ( r^2 cosθ_ν f ) - 1/r sinθ_ν ∂/∂θ_ν ( sin^2 θ_ν f) = - 1/τ_a ( f - f^a),

where we assume that the off-diagonal terms are zero, implying that Eq. <ref> is equivalent to classical neutrino transport. For this simulation, we extend our GRQKNT code <cit.> by adding the BGK subgrid module. This guarantees that the numerical simulations for both the full quantum kinetics and this classical Boltzmann transport with the subgrid model have the same accuracy in neutrino advection. The initial and boundary conditions are also the same as those in the QKE simulations. In this demonstration, we employ Eq. <ref> to estimate τ_a. We note that τ_a is updated at every time step during the simulation.

To determine f^a, we employ a method in <cit.>. This offers an approximate scheme to determine asymptotic states of FFC analytically. As we shall discuss later, however, this analytic method corresponds to the simplest prescription, and there is room for improvement. In fact, the scheme is developed based on the assumptions that the neutrino flight directions are v>0 (or v<0) and that there is a single ELN angular crossing. These assumptions are not appropriate in general, leading to a systematic error in realistic situations. Nevertheless, this scheme can capture the essential trends of FFCs (see below), which may provide sufficient accuracy for a subgrid model. In this method, we first compute the positive and negative ELN-XLN number densities,

A ≡ | ∫_G_v<0 dΓ G_v |, B ≡∫_G_v>0 dΓ G_v,

where G_v is given in Eq. <ref>. In cases with B>A (positive ELN-XLN density), we determine η in Eq. <ref> as

η = 1/3 ( G_v<0 ), 1 - 2A/3B ( G_v≥ 0 ),

while η in the case with B<A (negative ELN-XLN density) is determined as

η = 1/3 ( G_v>0 ), 1 - 2B/3A ( G_v≤ 0 ).

In the case with B=A, η is set to 1/3 for all v, indicating that f^a corresponds to the complete flavor equipartition (see also <cit.>). We also note that η̅ is equal to η, since we do not have to distinguish neutrinos and antineutrinos in FFC (see also <cit.>).

Let us put an important remark here. As shown in <cit.>, the asymptotic state of FFCs obtained from quantum kinetic simulations depends on boundary conditions. In fact, the Dirichlet boundary condition (as used in this demonstration) results in a qualitatively different asymptotic state from that obtained with a periodic one. In the Dirichlet case, the asymptotic state is determined so as to preserve the ELN and XLN number fluxes. In this demonstration, however, we determine η from the condition of number conservation (Eqs. <ref>-<ref>), despite employing the Dirichlet boundary condition. One may wonder if this is an inconsistent treatment. As we shall demonstrate below, however, our choice is appropriate. We provide a detailed discussion of this point in Sec. <ref>.

One of the advantages of the subgrid model is that high resolutions are no longer necessary in these simulations, since there are no driving terms to create small-scale structures in this coarse-grained model. For this reason, we employ N_r = 192 and N_θ_ν=16 grid points with the same domains as those used in the QKE simulations. It should be mentioned, on the other hand, that τ_a is much smaller than the advection timescale (which is also associated with the Courant-Friedrichs-Lewy condition for the stability of numerical simulations), implying that Eq. <ref> becomes a stiff equation.
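The determination of f^a and τ_a in this setup can be summarized in a few lines of code. The sketch below (our illustration; the quadrature and names are assumptions, not the GRQKNT implementation) evaluates the box-like η of Eqs. <ref>-<ref> and the relaxation-time estimate of Eq. <ref> from a discretized, monochromatic ELN-XLN angular distribution:

```python
import numpy as np

def eta_and_tau(mu, w, f_e, f_ebar, f_x=0.0, f_xbar=0.0):
    """Box-like asymptotic survival probability and FFC relaxation time.
    mu: cos(theta_nu) grid; w: quadrature weights for the dGamma measure."""
    G = (f_e - f_ebar) - (f_x - f_xbar)      # ELN-XLN angular distribution
    A = abs(np.sum(w * np.where(G < 0.0, G, 0.0)))
    B = np.sum(w * np.where(G > 0.0, G, 0.0))
    if A == 0.0 or B == 0.0:                 # no ELN-XLN crossing: no FFC
        return np.ones_like(G), np.inf
    eta = np.empty_like(G)
    if B > A:
        eta[G < 0.0] = 1.0 / 3.0
        eta[G >= 0.0] = 1.0 - 2.0 * A / (3.0 * B)
    elif A > B:
        eta[G > 0.0] = 1.0 / 3.0
        eta[G <= 0.0] = 1.0 - 2.0 * B / (3.0 * A)
    else:
        eta[:] = 1.0 / 3.0                   # A == B: complete equipartition
    tau_a = 2.0 * np.pi / np.sqrt(A * B)     # growth-rate estimate of Eq. (ref)
    return eta, tau_a
```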
Such stiffness requires an implicit time integration to solve the equation in a numerically stable manner. In this demonstration, an operator-splitting approach is adopted, in which we first evolve f explicitly in time by the neutrino advection, and then handle the BGK term (the right-hand side of Eq. <ref>) in an implicit way. More specifically, the distribution function of neutrinos at time step n+1 (f^{n+1}) is computed as

f^{n+1} = ( 1/Δt + 1/τ_a )⁻¹ ( f*/Δt + f^a/τ_a ),

where Δt denotes the time step. In this expression, f* corresponds to the tentative distribution function obtained by evolving f with only the advection terms in Eq. <ref>. We confirm that this operator-splitting method evolves the system in a numerically stable manner.
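A minimal sketch of this split update follows (our own code; `advect` is a stand-in for the explicit transport sub-step, which in practice is the finite-volume advection solver of the GRQKNT code):

```python
import numpy as np

def bgk_step(f, f_a, tau_a, dt, advect):
    """One operator-split step: explicit advection, then implicit BGK relaxation.

    The relaxation sub-step is the closed-form backward-Euler solve of
    df/dt = -(f - f_a)/tau_a, which remains stable even when tau_a << dt.
    """
    f_star = advect(f, dt)   # explicit advection sub-step -> f*
    return (f_star / dt + f_a / tau_a) / (1.0 / dt + 1.0 / tau_a)
```

Note that the return line is exactly the closed-form update of the equation above; no iteration is needed because the BGK term is linear in f.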
§.§ Results

In Fig. <ref>, we show color maps of the survival probability of ν_e as functions of r and cosθ_ν. From left to right, three different time snapshots are displayed (T = 10⁻⁵, 5 × 10⁻⁵, and 10⁻⁴ ms, respectively). The top and bottom panels distinguish the quantum kinetic model from the classical one with the BGK subgrid model. Since antineutrinos have essentially the same properties as neutrinos, we omit them.

As shown in the top left panel of Fig. <ref>, neutrino flavor conversions vividly occur and reach near flavor equipartition over almost all neutrino flight directions at T = 10⁻⁵ ms. This is consistent with previous studies <cit.> showing that FFC drives the system toward flavor equipartition in the case with n_{ν_e} = n_{ν̅_e}. As we discussed in <cit.>, however, flavor equipartition is not the actual asymptotic state in cases with a Dirichlet boundary condition. In fact, the angular distribution of the survival probability of ν_e becomes remarkably different near the boundary at R − R_in = 0; indeed, FFC tends to be less vigorous at cosθ_ν ≳ 0.5. This region expands with time and eventually dominates the entire computational domain (see the top middle and right panels in Fig. <ref>). We discuss the physical mechanism of this transition in detail later; it is associated with the determination of f^a from f in the BGK subgrid model.

As shown in the bottom panels of Fig. <ref>, the corresponding classical simulation with the BGK subgrid model reproduces qualitatively similar results to those found in the quantum kinetic simulation. In the earlier phase, FFC occurs over the entire angular region except in the vicinity of R − R_in = 0, but the flavor conversion at cosθ_ν ≳ 0.5 subsides after the neutrinos injected (constant in time) at R − R_in = 0 reach a given radius. In Fig. <ref>, we compare the radial profiles of the angular-averaged survival probability of ν_e between the two simulations and confirm that the errors are within ∼20%. This comparison lends confidence to the capability of the BGK subgrid model.

Although the overall properties are well captured by the BGK subgrid model, there are quantitative deviations whose origins are worth discussing. In the early phase, the growth of the flavor conversion is slightly faster in the classical simulation with the BGK subgrid model. This error comes from the empirical determination of τ_a by Eq. <ref>, which cannot determine the growth rate of FFC quantitatively. We also find that some detailed angle-dependent features are not captured by the subgrid model. In the quantum kinetic simulations, flavor conversions vividly occur in the region 0 ≤ cosθ_ν ≲ 0.6, but the angular region is slightly narrower for the subgrid model (0 ≤ cosθ_ν ≲ 0.5). This is mainly due to the accuracy of the determination of η in our subgrid model. As described in Eqs. <ref> and <ref>, the angular distribution of η is discontinuous at G_v = 0 in our approximate scheme, whereas it is continuous in reality. One can reduce this error by employing smooth functions for the angular distribution of η, although the numerical cost may become more expensive. We note that such approximate schemes have recently been proposed by <cit.>, who showed that quadratic functions can reduce the error by 30 to 50% relative to our box-like treatment.

In Figs. <ref> and <ref>, we show the same plots as in Fig. <ref> but for the other models. These figures exhibit that the BGK subgrid model works well in all cases. One might think the error around the boundary at R − R_in = 0 in the model with β̅_ee = 0.1 is larger than in the other models. This error is again due to the limited accuracy of the determination of τ_a, and it can be reduced with better methods, for instance those based on linear stability analysis. In Fig. <ref>, we compare the angular-averaged survival probabilities of ν_e at the end of our simulations among the different models. For all models, we confirm that the error in the asymptotic distribution of neutrinos is within ∼20%.

Finally, we describe why our BGK subgrid model with the prescription of Eqs. <ref>-<ref> works well, despite the fact that flavor conversions in cases with a Dirichlet boundary are qualitatively different from the periodic case. We start by discussing the mechanism of the transition of asymptotic states from the periodic case to the Dirichlet one. As shown above, we observed, at least temporarily in the early non-linear FFC phase, that the asymptotic states determined from number conservation (i.e., the periodic boundary case) appear in almost the entire spatial region. This is because the dynamics of flavor conversion is almost identical in adjacent spatial regions, which provides an environment similar to a periodic boundary condition. As a result, the neutrino flux is also constant across adjacent spatial positions, guaranteeing ELN and XLN number conservation at each spatial position. On the other hand, both the ELN and XLN number fluxes (or first angular moments) in this (temporary) asymptotic state differ from those in the initial conditions, whereas they are fixed in time at the boundary R = R_in by the Dirichlet condition. This is a crucial problem for the asymptotic state, since the number flux needs to be balanced to achieve a steady state (see Eq. 6 in <cit.>). This implies that the neutrino distribution of the periodic boundary condition does not correspond to the actual asymptotic state. It also means that the ELN and XLN number fluxes at R > R_in differ from those at R = R_in, so the ELN and XLN number densities (or zeroth angular moments) evolve at each spatial position.

One thing we notice from this discussion is that the classical simulation with the BGK subgrid model can handle the effects of neutrino advection precisely, since the advection term is the same as in the quantum kinetic one. This indicates that the dynamical evolution of the ELN and XLN number densities is well modeled at all spatial positions. It also suggests that the neutrino radiation field obtained with the subgrid model evolves in time so that the neutrino fluxes become constant in space to achieve the steady state, while the ELN and XLN number densities change dynamically.
In the BGK subgrid model, we determine f^a from the time- and space-dependent f so as to satisfy the ELN and XLN number densities at each position, which eventually leads to the consistent asymptotic state determined from the conservation of the ELN and XLN number fluxes. This corresponds to the asymptotic state with the Dirichlet boundary condition.

The above argument shows that local studies of flavor conversion with periodic boundary conditions are worthwhile for improving the BGK subgrid model. As demonstrated in <cit.>, the global advection of neutrinos affects the dynamics of flavor conversion significantly, and the final neutrino radiation fields are qualitatively different from those estimated from local simulations. The present study suggests, however, that the effects of global advection can be decoupled from the local dynamics of flavor conversion within the framework of our BGK subgrid model. This suggests that the classical BGK model is capable of modeling the global quantum kinetics of neutrinos in CCSN and BNSM environments, given a precise determination of f^a and τ_a based on local studies of flavor conversion.

§ SUMMARY

In this paper, we present a new subgrid model for neutrino quantum kinetics, in particular for neutrino flavor conversion. The basic assumption of this subgrid model is to treat the dynamics of flavor conversion as a relaxation process, in which flavor conversion drives the system toward an asymptotic state (f^a) on a timescale τ_a. This treatment is essentially the same as the BGK relaxation-time approximation <cit.>, which was originally developed to approximately handle collisional processes in gas dynamics. In our model, we apply the approximation not to the collision term but to the neutrino oscillation term. We describe the QKE with the BGK model in Sec. <ref> and provide an explicit form for the two-moment method in Sec. <ref>. We also present a concrete example of how the BGK model can be used in classical neutrino transport, focusing on FFC (Sec. <ref>). We assess the capability of the BGK subgrid model by comparing it to the results of quantum kinetic neutrino transport, and we show that the subgrid model captures the overall features of the dynamics of neutrino flavor conversion.

Although our subgrid model is a valuable tool with much potential, more work is certainly needed to increase its accuracy. The present study also provides a strategy for improving the subgrid model. As shown in Eq. <ref>, the accurate determination of η (and η̅) from f is crucial, and any approach, including analytic schemes <cit.> and AI <cit.>, is applicable. We note that the prescription used in the present demonstration (Eqs. <ref>-<ref>) is just one example for FFC; different prescriptions are certainly needed for other types of flavor conversion. In fact, the analytic scheme of Eqs. <ref>-<ref> cannot handle the flavor swap phenomenon recently found in FFC simulations of BNSM environments <cit.>. As such, approximate schemes for determining the asymptotic states of FFCs still need improvement.

We are also interested in how well the BGK subgrid model performs in cases where flavor conversions and collision processes (neutrino emission, absorption, and scattering) interact with each other. As demonstrated in <cit.>, the asymptotic state of flavor conversion depends on neutrino-matter interactions. A detailed study is necessary to assess the capability of our subgrid model in such complicated systems.
Such a detailed study is postponed to future work.

Although there is certainly room for improvement, the BGK subgrid model is very useful and easy to implement into currently existing CCSN and BNSM codes. This indicates that global neutrino-radiation-hydrodynamic simulations with respectable physical fidelity of flavor conversions become feasible. We hope that the BGK subgrid model helps the entire CCSN and BNSM community to accommodate the effects of neutrino quantum kinetics in their simulations.

§ ACKNOWLEDGMENTS

We are grateful to David Radice for useful discussions. The numerical simulations are carried out using "Fugaku" and the high-performance computing resources of "Flow" at Nagoya University ICTS through the HPCI System Research Project (Project IDs: 220173, 220047, 220223, 230033, 230204, 230270), XC50 of CfCA at the National Astronomical Observatory of Japan (NAOJ), and Yukawa-21 at the Yukawa Institute for Theoretical Physics, Kyoto University. For providing high-performance computing resources, the Computing Research Center, KEK, and JLDG on SINET of NII are acknowledged. This work is also supported by the High Energy Accelerator Research Organization (KEK). HN is supported by a Grant-in-Aid for Scientific Research (23K03468) and by the NINS International Research Exchange Support Program. LJ is supported by a Feynman Fellowship through LANL LDRD project number 20230788PRD1. MZ is supported by a JSPS Grant-in-Aid for JSPS Fellows (No. 22KJ2906) from the Ministry of Education, Culture, Sports, Science, and Technology (MEXT) in Japan. | http://arxiv.org/abs/2312.16285v1 | {
"authors": [
"Hiroki Nagakura",
"Lucas Johns",
"Masamichi Zaizen"
],
"categories": [
"astro-ph.HE",
"gr-qc",
"hep-ph",
"physics.comp-ph"
],
"primary_category": "astro-ph.HE",
"published": "20231226190002",
"title": "BGK subgrid model for neutrino quantum kinetics"
} |
Targeted materials discovery using Bayesian algorithm execution
================================================================

Daniel Ratner

SLAC National Accelerator Laboratory, Menlo Park, CA 94025

Rapid discovery and synthesis of new materials requires intelligent data acquisition strategies to navigate large design spaces. A popular strategy is Bayesian optimization, which aims to find candidates that maximize material properties; however, materials design often requires finding specific subsets of the design space which meet more complex or specialized goals. We present a framework that captures experimental goals through straightforward user-defined filtering algorithms. These algorithms are automatically translated into one of three intelligent, parameter-free, sequential data acquisition strategies (SwitchBAX, InfoBAX, and MeanBAX). Our framework is tailored for typical discrete search spaces involving multiple measured physical properties and short time-horizon decision making. We evaluate this approach on datasets for TiO_2 nanoparticle synthesis and magnetic materials characterization, and show that our methods are significantly more efficient than state-of-the-art approaches.

§ INTRODUCTION

Modern materials discovery involves searching large regions of multi-dimensional processing or synthesis conditions to find candidate materials that achieve specific desired properties. For example, the lithium-ion batteries that have enabled both the personal electronics and clean mobility revolutions started out using simple LiCoO_2 as the cathode active material, but this has given way to numerous formulations of the form Li(Ni_1/3+xCo_1/3-2xMn_1/3+x)O_2, where each metal contributes to various aspects of stability and electrochemistry <cit.>. Another example is the development of high-temperature superconducting materials, where the trade-off between different quantum phenomena (e.g., charge density waves and the superconducting state) needs to be addressed via iterative synthesis and characterization of tailored materials <cit.>. Often, the rate of discovery is naturally limited by the speed at which experiments can be performed; this is particularly true for materials applications involving low levels of automation, complex multi-step synthesis protocols, and slow or expensive characterization modalities. For these important situations, developing algorithms which can quickly identify desirable conditions under limited experimental budgets is critical to furthering materials discovery <cit.>.

Intelligent sequential experimental design has emerged as a promising approach to rapidly search large design spaces. Compared to classical techniques such as factorial design of experiments, sequential methods use new data collected at each step to reduce the total number of experiments needed to find optimal designs <cit.>.
Current methods typically involve two components: 1) a probabilistic statistical model trained to predict both the value and the uncertainty of a measurable property at any point in the design space (here, defined as a discrete set of all possible measurement or synthesis conditions), and 2) an `acquisition function' which assigns a relative numerical score to each point in the design space. Under this paradigm, measurements are made at the design point which has the highest acquisition value. No matter the accuracy of the model, intelligent data acquisition strategies will be limited by the relevance of the acquisition function, i.e., how closely the acquisition function aligns with the user's experimental goal.

In this work, we focus on the problem of automatically creating custom acquisition functions to target specific experimental goals. This is an important problem, as materials applications often involve precise requirements that are not well addressed by existing sequential design-of-experiment techniques. Specifically, we consider the task of finding the `target subset' of the design space that satisfies user-defined criteria on the measured properties. An example of a custom experimental goal, the corresponding target subset of the design space, and the resulting data acquisition scheme is shown in Figure <ref>.

Most prior work in adaptive decision making has focused on the goal of single-objective optimization: finding the design point corresponding to the global optimum for a property of interest <cit.>. An example of this type of goal is the task of developing novel electrolyte formulations with the largest electrochemical windows of stability <cit.>. For single-objective optimization, the framework of Bayesian optimization (BO) applies and offers a variety of relevant acquisition functions, including Upper Confidence Bound (UCB), Probability of Improvement (PI), and Expected Improvement (EI) <cit.>. For multi-property optimization, there typically does not exist a single design condition that is optimal with respect to all properties. Instead, the goal is to obtain the set of design points which optimally trade off between objectives (Pareto-optimal designs) <cit.>. Common multi-objective Bayesian optimization acquisition functions include Expected Hypervolume Improvement (EHVI) <cit.>, Noisy Hypervolume Improvement (NEHVI) <cit.>, and ParEGO <cit.>. Single- and multi-objective Bayesian optimization have been applied in a number of materials settings <cit.>. For further details on materials-focused Bayesian optimization, see <cit.>.

Another well-studied experimental goal is mapping (full-function estimation). Instead of finding global optima, the task is to learn the relationship between the design space and the property space. Uncertainty Sampling (US) is a typical acquisition function for this purpose. Such strategies have been used to achieve higher image resolutions in shorter collection times and have found application in fields such as X-ray scattering <cit.> and microscopy <cit.>. Generally, mapping tasks are useful for elucidating insights about the entire system but come with the downside of needing to perform a large number of (potentially slow) experiments across the entire design space.
The primary subject of this manuscript addresses the larger goal of finding specific target regions of the design space which conform to specific conditions on the properties; this subsumes the aforementioned goals of optimization and full-function estimation, as well as other more complex tasks including level-set estimation <cit.>. In this more general setting, the goal is to isolate the set of design points which achieve precise user-specified property criteria. For subsets that do not involve optimization or mapping, either custom acquisition functions need to be developed, or users are forced to use existing acquisition functions which are not necessarily aligned with (and are thus inefficient for) their specific experimental task. Developing new acquisition functions is possible <cit.>, but this often requires significant time and mathematical insight. These limitations restrict accessibility to the broader materials community and hinder the pace of materials innovation.

Various important scientific problems fall into the category of subset estimation, including: determining synthesis conditions targeting varying ranges of monodisperse colloidal nanoparticle sizes for heterogeneous catalysis <cit.> or plasmonics <cit.>, enumerating processing conditions corresponding to wide stability windows <cit.>, accurately mapping specific portions of phase boundaries <cit.>, charting transition state pathways between distant structural minima in a potential energy landscape <cit.>, and finding chemically diverse sets of ligands that are strong, non-toxic binders <cit.>.

The ability to obtain sets of design points which meet user specifications is particularly important from a practical adoption standpoint. Many novel materials do not achieve widespread industrial application due to long-term failure modes, including degradation mechanisms in batteries <cit.>, catalysts <cit.>, and solar cells <cit.>, and toxicity of various bio-compatible materials and medical therapeutics <cit.>. Obtaining a large pool of plausible designs can mitigate the risk of long-term failure, improving the odds of discovering transformative materials. It is worth mentioning that these problems involve identifying a larger set of design points than optimization (which typically returns only a few design points) and a substantially smaller region than full-function estimation (the entire domain). Note that while multi-objective optimization does aim to locate a set of design points, this procedure only returns a specific set called the Pareto front, corresponding to the optimal trade-off between measured properties.

In this manuscript, we present a framework for building acquisition functions that can precisely target a subset of the design space corresponding to an experimental goal. The user defines their goal via an algorithmic procedure that would return the correct subset of the design space if the underlying mapping were known. This algorithm undergoes an automatic conversion into an acquisition function that can guide future experimentation, bypassing the need to devise complex acquisition functions for specific applications.

Our work presents methodological developments and showcases their first application to the domain of materials research. Specifically, we adapt information-based Bayesian Algorithm Execution (InfoBAX) <cit.> and Multipoint-BAX <cit.> to handle materials science scenarios, characterized by discrete design spaces and multi-property measurements.
Second, we develop a multi-property generalization of an exploration strategy that uses model posteriors <cit.>, which we call MeanBAX. We observe that MeanBAX and InfoBAX exhibit complementary performance in the small-data and medium-data regimes, respectively. For this reason, we additionally design a parameter-free strategy, named SwitchBAX, which dynamically switches between InfoBAX and MeanBAX and performs well across the full range of dataset sizes. For all three approaches, we provide scientists with a simple open-source interface (https://github.com/src47/multibax-sklearn) to cleanly express complex experimental goals, implement a variety of custom user-defined algorithms tailored to materials estimation problems, and, significantly, evaluate the suitability of the BAX framework for guiding practical materials experiments. We highlight the applicability of the multi-property BAX strategies by targeting a series of user-defined regions in two datasets from the domains of nanomaterials synthesis and high-throughput magnetic materials characterization. We anticipate that this method will enable the targeting of important non-trivial experimental goals, paving the way for the accelerated design of advanced materials.

§ SEQUENTIAL EXPERIMENTATION DESIGN APPROACH

§.§ Expressing an Experimental Goal via Algorithm Execution

We first consider a design space: a discrete set of N possible synthesis or measurement conditions, each with dimensionality d corresponding to different changeable parameters. Here, X ∈ ℝ^{N × d} is the discrete design space and x ∈ ℝ^d is a single point in the design space with d features. For each design point, it is possible to perform a costly or time-consuming experiment to obtain a set of m measured properties (y ∈ ℝ^m). The total set of measured properties (the measured property space) across the entire design space is denoted Y ∈ ℝ^{N × m}. The design space (X) and corresponding measurement space (Y) are linked through some true, noiseless underlying function, f_*, which is assumed to be unknown (or black-box) prior to any experimentation (Equation <ref>). Real measurements have an additional term, ϵ, corresponding to `measurement noise', which we assume can be modeled by independent and identically distributed normal random variables:

y = f_*(x) + ϵ,  ϵ ∼ 𝒩(0, σ²).

Within the full design space, there are often specific portions which are particularly desirable to measure. For the purposes of this manuscript, achieving a custom experimental goal is equivalent to finding a specific ground-truth target subset of the design space. We define the ground-truth target subset as 𝒯_* = {𝒯_*^x, f_*(𝒯_*^x)}. Here, 𝒯^x_* corresponds to the design points which achieve the experimental goal (𝒯^x_* ⊆ X), and f_*(𝒯_*^x) denotes the corresponding underlying property values. Note that in this framework, the experimental goal dictates the underlying target subset. As an example, one specific goal is that of single-property optimization, where 𝒯^x_* would refer to the point (or degenerate set of points) in the design space with the optimal property value; this is the setting of classical Bayesian optimization. However, the subset can also be more complex. In Figure <ref>A, we consider a simple experiment with a single property and a single design feature (one-dimensional Y and X). In this scenario, the experimental goal is to find the set of points in the design space for which the material property falls within a band between two specified property value thresholds.
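This band goal can be phrased as a short filtering algorithm. The sketch below is ours (the function and variable names are illustrative, not from the released package):

```python
import numpy as np

def level_band_algorithm(f_values, X, lo, hi):
    """A(f, X): return the design points whose property value lies in [lo, hi]."""
    f_values = np.asarray(f_values).ravel()
    mask = (f_values >= lo) & (f_values <= hi)
    return X[mask], f_values[mask]

# Executing A on the true mapping f_* (if it were known) would return T_*, e.g.:
# T_x, T_y = level_band_algorithm(f_star_values, X, lo=0.2, hi=0.6)
```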
Having defined the ground-truth target subset, we now turn to the concept of defining this subset via an algorithm. First, let us assume that f_* is known throughout the design space. If this were the case, we could execute an algorithmic procedure (𝒜) to obtain the region of interest, 𝒯_*, as

𝒜(f_*, X) → 𝒯_*.

Figure <ref>A shows the correspondence between the experimental goal and the target subset using the Level Band algorithm. Here, the algorithm 𝒜 simply scans through every point in the design space and returns the subset for which the property value falls within the level band. Of course, the catch is that the underlying mapping, f_*, is unknown. However, it turns out that framing an experimental goal as an algorithm that would correctly yield the ground-truth target subset if the true mapping were known is a powerful concept: it allows the user to precisely state their desired outcome, and in the next section we will see how to sequentially acquire data to estimate the result of running the algorithm on f_*.

§.§ Obtaining a Target Subset Using BAX

Bayesian algorithm execution (BAX) is the idea that one may instead execute an algorithm on approximate fitting models (surrogate models) that are designed to mimic the true function but are trained on measurements at only a small number of design points. Unlike f_*, these models are fast and inexpensive to evaluate. Data are then acquired sequentially to help estimate the true algorithm output (i.e., the result that would be obtained if the algorithm were executed on the true, unknown function). We denote 𝒟_t = {x_k, y_k}_{k=1}^t as the measured dataset at iteration t; 𝒟^x_t ∈ ℝ^{t × d} and 𝒟^y_t ∈ ℝ^{t × m} refer to the x and y components of the dataset.

The surrogate models are probabilistic models which predict both an average response and an uncertainty estimate for every point in the design space. Specifically, in this work we use a machine learning model known as a Gaussian process (GP). When multiple properties are measured for a given point in the design space, multiple independent single-property GP models are used. A GP is best conceptualized as a probability distribution over plausible functions. In the absence of data, we can define a prior distribution over functions (Equation <ref>). The mean of the prior distribution is assigned the value 0 everywhere in the domain, and the prior covariance function, denoted 𝐊, is derived from the squared-exponential kernel (see Methods):

p(f) = 𝒢𝒫(f; 0, 𝐊).

As data are collected, the prior distribution is updated to the posterior distribution, p(f | 𝒟_t). The mean and marginal standard deviation of this new distribution are termed the posterior mean function (f̅) and the posterior standard deviation function (f^σ), respectively. An example of the posterior mean function (shown in red) and posterior standard deviation function (shown as a blue band) is given in Figure <ref>B, based on a training dataset of five design points and the corresponding measured properties. Note that the GP model estimates low uncertainties near measured points.

It is possible to sample from a GP to yield a series of statistically consistent, plausible fitting functions (termed posterior function samples), displayed as blue curves in Figure <ref>B. We denote these as

f_i ∼ p(f | 𝒟_t).

Given a trained GP model, an algorithm (such as the Level Band algorithm) can be executed on the posterior mean function or on a posterior function sample (Equation <ref> and Figure <ref>C):

𝒜(f̅, X) → 𝒯̅,  𝒜(f_i, X) → 𝒯_i.
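Putting these pieces together, a minimal scikit-learn sketch of executing an algorithm on the posterior mean and on posterior samples might look as follows (ours; toy one-dimensional data, reusing the level_band_algorithm sketch above rather than the released package):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = np.linspace(0.0, 1.0, 200).reshape(-1, 1)      # discrete 1-d design space
idx = rng.choice(len(X), size=8, replace=False)    # D_t^x: a few measured points
y_train = np.sin(6.0 * X[idx, 0]) + 0.01 * rng.standard_normal(8)  # stand-in f_* + noise

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-4)
gp.fit(X[idx], y_train)

f_mean, f_std = gp.predict(X, return_std=True)     # posterior mean / std over X
f_samples = gp.sample_y(X, n_samples=15, random_state=0)

lo, hi = 0.2, 0.6
T_mean = level_band_algorithm(f_mean, X, lo, hi)           # A(f_bar, X) -> T_bar
T_i = [level_band_algorithm(f_samples[:, i], X, lo, hi)    # A(f_i, X)  -> T_i
       for i in range(f_samples.shape[1])]
```

Each execution here is a cheap array operation; no new measurements are required, which is what makes the next step (building an acquisition function from these predicted subsets) practical.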
Note that the algorithm execution step is both fast and inexpensive, as it does not require any additional measurements. The subsets returned by the algorithm are two different kinds of predictions of the identity of the ground-truth (and unknown) target subset 𝒯_*. The information obtained from executing the algorithms can be used to build a guiding function, termed the acquisition function. Each point in the design space is assigned an acquisition value quantifying its relative importance for subsequent measurement. The next design point to measure is the one with the highest acquisition value (Figure <ref>D). Specific acquisition functions for the task of BAX are described in the next section.

Overall, the data acquisition pipeline follows these steps:

* Construct an algorithm, 𝒜(f, X), corresponding to a stated experimental goal.
* Approximate f_* using a set of independent GP models trained on limited measurements.
* Execute the algorithm on either the GP posterior mean, f̅, or its posterior samples, f_i, over the full design space (Equation <ref>). This yields a set of design points that are predicted to conform to the experimental goal.
* Use the algorithm outputs to build a goal-aware acquisition function.
* Perform an experiment on the design point with the highest acquisition value and repeat from step 2.

§.§ Multi-property BAX Acquisition Functions

We present three acquisition functions for the task of BAX: MeanBAX (based on the posterior mean function), InfoBAX (based on posterior function samples), and SwitchBAX, which dynamically combines the two.

In MeanBAX, the user algorithm 𝒜 is executed on the posterior mean of the GP model; the output of the algorithm is the set of points in the design space that are predicted to satisfy the experimental goal. The MeanBAX acquisition value is the average (across the measured properties) marginal standard deviation of the GP models for points that are predicted to be part of the target subset (Equation <ref>); it is zero for all other points in the design space (Figure <ref>D, bottom panel). Similar single-property variants of this acquisition function have been proposed in other works for specialized applications <cit.>.

For the MeanBAX algorithm as presented above, two situations often occur that lead to pathological sampling behavior. The first is when no design points are predicted to be in the target subset (i.e., when 𝒯̅^x = ∅); under this condition, the acquisition function is zero across the entire domain. In the second case, the predicted target set may have already been collected (i.e., when 𝒯̅^x ⊆ 𝒟^x_t); under this condition, the algorithm is forced to repeat queries. If either condition is met, we therefore fall back to the default strategy of maximizing 1/m ∑^m_{j=1} f^σ(x)_j across the entire domain (i.e., Uncertainty Sampling). We thus define

MeanBAX(x) = 1/m ∑^m_{j=1} f^σ(x)_j  if x ∈ 𝒯̅^x,
MeanBAX(x) = 0  otherwise.
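A minimal sketch of this selection rule follows (ours; here the user algorithm is assumed to return a boolean mask over X rather than the subset itself, a small variation on the level-band sketch above):

```python
import numpy as np

def meanbax_next_index(f_means, f_stds, algorithm, X, measured_idx):
    """Pick the next design point under MeanBAX.

    f_means / f_stds: lists of per-property posterior mean and std arrays.
    algorithm: maps (list of predictions, X) -> boolean mask of T_bar^x.
    Falls back to plain uncertainty sampling if the predicted target set
    is empty or already fully measured (the two pathological cases above).
    """
    avg_std = np.mean(np.stack(f_stds), axis=0)     # (1/m) sum_j f_sigma_j
    mask = algorithm(f_means, X).copy()             # predicted target set
    mask[list(measured_idx)] = False                # drop already-queried points
    acq = avg_std.copy() if not mask.any() else np.where(mask, avg_std, 0.0)
    acq[list(measured_idx)] = -np.inf               # never re-query
    return int(np.argmax(acq))
```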
For the InfoBAX strategy, the user algorithm is executed on a series of posterior function samples, each yielding a different set of predicted target points (Figure <ref>D, top panel). Since the algorithm output for each posterior function sample may contain a different number of design points, this is not a trivial extension of MeanBAX and requires combining the outputs in a statistically reasonable manner. The InfoBAX acquisition function is defined as

InfoBAX(x) = 1/m ∑_j^m ( 𝐇[ p(y_j | x, 𝒟_t) ] − 1/n ∑_i^n 𝐇[ p(y_j | x, 𝒟_t ∪ 𝒯_i) ] ).

The first term in the InfoBAX acquisition function (Equation <ref>) is the entropy (spread) of the posterior predictive distribution. The posterior predictive distribution, p(y | 𝒟_t), is closely related to the posterior distribution, p(f | 𝒟_t), and includes the effect of measurement noise: p(y | 𝒟_t) = ∫ p(y | f) p(f | 𝒟_t) df. This term essentially performs Uncertainty Sampling, favoring the design point with the highest predicted average uncertainty. The second term captures the experimental goal through the output of the user algorithm. For each of the n posterior function samples, a new GP model is trained on the measured dataset plus a predicted dataset corresponding to the algorithm execution (i.e., 𝒟_t ∪ 𝒯_i). Importantly, the predicted datasets and corresponding updated GP models are used only to calculate the acquisition function and are discarded after selecting the next real design point to measure. The entropy of each model is calculated across the design space and then averaged over the posterior samples. Finally, to account for the multi-property case, an average is taken over the m properties. Conceptually, InfoBAX relates variance in the GP posterior samples to variance in the algorithm outputs; the acquisition function selects points in the design space where the models are uncertain AND where that uncertainty influences the algorithm output. For further details, refer to Neiswanger et al. (2021) <cit.>.

Finally, the SwitchBAX strategy is a modification of MeanBAX that changes the default behavior to InfoBAX rather than US when either (1) no points are predicted to be in the target subset or (2) all predicted points have already been measured. Based on these conditions, the method dynamically switches between InfoBAX and MeanBAX to guide decision making.

We refer to the BAX strategies (InfoBAX, MeanBAX, and SwitchBAX) as `goal-aware' because the acquisition function incorporates the user goal directly via the algorithm specification. We can compare these approaches to typical acquisition functions for searching a multi-property design space: Random Sampling (RS), Uncertainty Sampling (US), and Expected Hypervolume Improvement (EHVI). RS selects a design point uniformly at random (here, without replacement) from the discrete design space at each iteration. For US, the acquisition function is simply the predicted average standard deviation of the GP models, 1/m ∑^m_{j=1} f^σ(x)_j; intuitively, this corresponds to measuring where the model is most uncertain about the average value of the measured properties. These two acquisition functions are often used for mapping, in which the goal is to estimate the value of the measured properties over the full design space. EHVI is a specialized multi-objective Bayesian optimization acquisition function designed for the specific goal of Pareto front estimation. The utility of our multi-property BAX framework is that acquisition functions can be aligned to arbitrarily complex questions about an experimental system.
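To make the switching logic concrete, a minimal SwitchBAX sketch follows (ours; the InfoBAX selection, whose entropy estimates require refitting GPs on 𝒟_t ∪ 𝒯_i, is passed in as a callable rather than re-implemented here):

```python
import numpy as np

def switchbax_next_index(f_means, f_stds, algorithm, X,
                         measured_idx, infobax_next_index):
    """SwitchBAX: MeanBAX when it has a usable predicted target set;
    otherwise defer to InfoBAX (conditions (1) or (2) above)."""
    mask = algorithm(f_means, X).copy()
    mask[list(measured_idx)] = False
    if not mask.any():                  # empty or fully-measured predicted set
        return infobax_next_index()     # switch to InfoBAX
    avg_std = np.mean(np.stack(f_stds), axis=0)
    acq = np.where(mask, avg_std, -np.inf)
    return int(np.argmax(acq))
```

The only difference from the MeanBAX sketch is the fallback branch, which is what gives SwitchBAX its purposeful exploration when the posterior mean is uninformative.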
As long as an algorithm that could be executed on the true function can be written, the BAX strategies circumvent the lack of knowledge of the true function by running the algorithm on function samples or on the mean of a surrogate model (Figure <ref>).

§.§ Metrics for Sequential Experimental Design

To assess the performance of the various adaptive sampling strategies (RS, US, EHVI, MeanBAX, InfoBAX, and SwitchBAX), we introduce two metrics: NumberObtained and PosteriorJaccardIndex.

NumberObtained quantifies the number of measured data points that achieve the experimental goal (Equation <ref> and Figure <ref>A), defined as

NumberObtained = |𝒟^x_t ∩ 𝒯^x_*|.

PosteriorJaccardIndex quantifies how accurately the GP model knows the ground-truth target subset. To compute this metric, we execute the user-specified algorithm on f̅ to obtain the set of points that are predicted to be in the target subset. This set of points can be compared with the set of points that are actually in the target subset (using the true function). Here, we use the Jaccard index (intersection over union), a value between 0 and 1 which quantifies the degree of set overlap (Equation <ref> and Figure <ref>B):

PosteriorJaccardIndex = |𝒯^x_* ∩ 𝒯̅^x| / |𝒯^x_* ∪ 𝒯̅^x|.

Note that computing the PosteriorJaccardIndex assumes that the true target subset of the design space is already known. This information is unavailable during a real experiment, so the PosteriorJaccardIndex is most useful in a benchmarking context, for judging the performance of different acquisition schemes on previously collected data.

To aid in metric evaluation, we also present upper bounds for the two metrics as a function of the amount of data collected. Under optimal sampling, NumberObtained increases by one at each iteration until all target-subset points have been measured. The PosteriorJaccardIndex upper bound, in contrast, is 1.0 at every iteration; this would correspond to the exceedingly unlikely scenario in which a model initialized with no data perfectly predicts which design points are in the target subset.
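Both metrics reduce to simple set operations; a sketch in plain Python (ours, operating on design-point indices):

```python
def number_obtained(measured_idx, target_idx):
    """NumberObtained = |D_t^x  intersect  T_*^x|."""
    return len(set(measured_idx) & set(target_idx))

def posterior_jaccard_index(predicted_idx, target_idx):
    """PosteriorJaccardIndex = |T_* intersect T_bar| / |T_* union T_bar|."""
    predicted, target = set(predicted_idx), set(target_idx)
    union = predicted | target
    return len(predicted & target) / len(union) if union else 1.0
```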
§ RESULTS

We use two datasets from the fields of nanoparticle synthesis and magnetic materials to benchmark the performance of various acquisition functions (RS, US, EHVI, InfoBAX, MeanBAX, and SwitchBAX) for the task of targeted subset estimation. The following subsections describe three user-defined algorithms (denoted the Library, Multiband, and Wishlist Algorithms) relevant to materials applications. In Supplementary Section <ref> and Figures - we present two additional flavors of algorithms: conditional algorithms, which safeguard against unachievable goals, and percentile-based algorithms, which avoid the need for explicit property thresholds.

§.§ Nanoparticle Synthesis

The nanoparticle synthesis dataset consists of discrete samples (pairs of design points and measured properties) from an empirically fit model of the mapping from synthesis conditions (pH, temperature, Ti(Teoa)_2 concentration, TeoaH_3 concentration; Teoa = triethanolamine) to TiO_2 nanoparticle size (in nm) and polydispersity (%) <cit.>. We added 1% noise to each measurement to simulate noisy acquisition (see Methods).

We consider the experimental goal of preparing a library of monodisperse nanoparticles with a series of precisely specified radii; specifically, our aim is to estimate the subset of synthesis conditions that yield nanoparticles with polydispersity < 5% and a radius falling into an arbitrarily chosen set of disjoint buckets [6.5, 10, 15, 17.5, 20, 30] ± 0.5 nm. Such tasks are important because monodisperse nanoparticles of different sizes can be optimal for different catalytic reactions <cit.>. It is important to note that this problem is distinct from constrained multi-objective optimization, as the goal is to map out all possible syntheses which meet the user specifications. For this example, the user-defined algorithm is straightforward filtering logic that selects the set of disjoint regions matching the stated goal (see Algorithm <ref>).

We benchmarked the performance of RS, US, EHVI, MeanBAX, InfoBAX, and SwitchBAX on the library estimation task, using NumberObtained and PosteriorJaccardIndex as metrics (Figure <ref>A-B). Error bars in these plots correspond to 20 repeats of data acquisition starting from different sets of ten initial points. The BAX strategies outperform RS, US, and EHVI in terms of the realistic-setting metric (NumberObtained) and perform similarly to US in terms of the PosteriorJaccardIndex here. On average, InfoBAX gives superior long-term performance relative to MeanBAX on both metrics, although MeanBAX performs well initially in terms of the NumberObtained metric. The SwitchBAX algorithm performs well across both dataset-size regimes.

The measured properties corresponding to the design points collected under US and InfoBAX are shown in Figure <ref>. US samples widely in property space and not necessarily in the subset of interest. In contrast, InfoBAX typically samples in regions close to the target subset of points (gold diamonds), showing the effectiveness of user-directed acquisition. Sampling in measured property space for RS, EHVI, and MeanBAX is shown in Figure . A t-distributed stochastic neighbor embedding (TSNE) visualization of the sampling in design space is shown in Figure  for US and SwitchBAX.

We also characterized the performance of the acquisition strategies under a higher noise level (5%) on the measured properties. Under these conditions, it takes longer to obtain all the target design points for all acquisition strategies. In addition, the GP model is less confident about the location of the target subset of the design space (lower PosteriorJaccardIndex relative to the 1% case). MeanBAX exhibits higher variance in NumberObtained, while InfoBAX and SwitchBAX appear to be relatively robust to different initializations. Results for additional noise levels (0% and 10%) are shown in Figure .

§.§ Magnetic Property Estimation

The magnetic materials characterization dataset consists of a design space of 921 ternary compositions approximately evenly spaced across the ferromagnetic Fe-Co-Ni ternary alloy system <cit.>. The output measured properties for each composition are the Kerr rotation and the coercivity. The Kerr rotation is a surface-sensitive measure of a material's magnetic properties; searching for materials with high Kerr rotation is a route to discovering materials for erasable optical recordings <cit.>. Coercivity is the field required in a hysteresis loop to completely demagnetize a ferromagnet: the higher the coercivity, the less susceptible a particular magnetization state is to flipping due to defects or other mechanisms. For this dataset, we highlight two algorithms: the Multiband Algorithm (an intersection of two level bands) and the Wishlist Algorithm (a composition of multiple multibands).

§.§.§ Multiband Algorithm

The Multiband Algorithm aims to estimate the region of the design space where each measurable property falls within a separate user-defined band. This goal can be expressed simply by a filtering algorithm which checks for the intersection of the target subsets for each measured property (see Algorithm <ref>).
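A sketch of this filtering logic (ours; the paper's Algorithm <ref> is the reference version) might read:

```python
import numpy as np

def multiband_algorithm(f_values_list, X, bands):
    """Intersection of per-property level bands: a point qualifies only if
    every property j lies inside its band (lo_j, hi_j)."""
    mask = np.ones(len(X), dtype=bool)
    for f_j, (lo, hi) in zip(f_values_list, bands):
        f_j = np.asarray(f_j).ravel()
        mask &= (f_j >= lo) & (f_j <= hi)
    return mask

# e.g., coercivity in [2.0, 3.0] mT and Kerr rotation in [0.3, 0.4] mrad:
# mask = multiband_algorithm([coercivity_pred, kerr_pred], X, [(2.0, 3.0), (0.3, 0.4)])
```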
Here, the stated experimental goal is to determine the set of design points for which the coercivity falls in the [2.0, 3.0] mT range and the Kerr rotation falls in the [0.3, 0.4] mrad range; we employ the shorthand [[a, b], [c, d]] = [[2.0, 3.0], [0.3, 0.4]] to describe this region. As in the nanoparticle synthesis example, the goal-driven acquisition functions (InfoBAX and MeanBAX) perform well relative to RS, US, and EHVI (Figure <ref>A). EHVI exhibits notably poor performance, as it targets a disjoint partition of the design space. Again, MeanBAX performs best in the short term, while InfoBAX has superior long-term performance. It is worth noting that although the desired region is tightly clustered in measured property space, it is more dispersed in the design space (Figure <ref>A).

§.§.§ Wishlist Algorithm

The Wishlist Algorithm is a composition (union) of a series of multibands. It addresses the case in which a user has a variety of experimental goals to realize in an experimental system (but not necessarily mapping). In this specific example, we target the following multiband regions: [[2.0, 3.0], [0.2, 0.3]] or [[4.0, 6.0], [0.2, 0.4]] or [[9.0, 10.0], [0.0, 0.1]] or [[3.0, 4.0], [0.7, 0.8]]. It is notable that the ground-truth target subset is disjoint in design space, making this problem significantly more challenging than the multiband scenario. MeanBAX, InfoBAX, and SwitchBAX again perform well relative to RS, US, and EHVI; note, in particular, that the BAX strategies achieve a much higher PosteriorJaccardIndex relative to US (Figure <ref>B).
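For reference, the wishlist filter used here is simply a union of multiband masks; a minimal sketch (ours, building on the multiband_algorithm sketch above):

```python
import numpy as np

def wishlist_algorithm(f_values_list, X, wishlist):
    """Union of multibands, e.g.
    wishlist = [[(2.0, 3.0), (0.2, 0.3)], [(4.0, 6.0), (0.2, 0.4)],
                [(9.0, 10.0), (0.0, 0.1)], [(3.0, 4.0), (0.7, 0.8)]]."""
    mask = np.zeros(len(X), dtype=bool)
    for bands in wishlist:
        mask |= multiband_algorithm(f_values_list, X, bands)
    return mask
```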
For the given wishlist example, there exists at least one design point which falls into each of the separate multibands. In practical experiments, however, there are scenarios where satisfying the experimental goal is actually unachievable (i.e., no design points satisfy the goal for one or more of the multibands). We consider this case in more detail in Supplementary Section <ref> and Figure , where we showcase a more robust type of algorithm (conditional algorithms) capable of dynamically switching strategies based on whether the GP models predict that the goal is achievable. Notably, this non-trivial change in sampling behavior is enabled by only a minimal change to the algorithm.

§ DISCUSSION

Efficiently exploring a design space to find materials candidates with precisely specified measured properties is of fundamental importance to future materials innovation and discovery. While there are existing approaches for finding certain target subsets of the design space, such as Bayesian optimization for identifying global minima or Uncertainty Sampling for full-function estimation, the general task of subset estimation has not been studied in a materials context. In this study, we present a multi-property version of Bayesian Algorithm Execution (BAX) for developing sequential decision-making strategies aimed at estimating user-specified target subsets of the design space. Users encode their target subset using a simple algorithm that requires only a few lines of code. This algorithm is then automatically converted into goal-aware acquisition functions (InfoBAX, MeanBAX, and SwitchBAX) capable of goal-aware exploration. We evaluated the BAX strategies on datasets from the fields of nanoparticle synthesis and magnetic materials.

For each case, we retrospectively analyzed the performance of different acquisition strategies using metrics that characterize the number of successful experiments (NumberObtained) and the quality of the predictive models in the ground-truth target subset (PosteriorJaccardIndex).

For the nanoparticle synthesis example, we target a non-trivial experimental goal: determining synthesis conditions to develop a library of monodisperse nanoparticles. We observed that the BAX strategies significantly outperformed the goal-agnostic RS and US (Figure <ref>A-B) in terms of NumberObtained. This result highlights that incorporating the experimental goal into an algorithm allows for a more targeted and efficient sequence of experimental measurements. Here, EHVI, an algorithm designed for an alternate goal (Pareto front estimation), seems like a reasonable strategy for the NumberObtained and PosteriorJaccardIndex metrics. However, EHVI performs worse on NumberObtained relative to the BAX strategies, mainly because it misses target points with low polydispersity but high nanoparticle size, due to goal misalignment (Figure ). In this case, both US and EHVI perform well with respect to the PosteriorJaccardIndex, indicating that in this example a trained GP model is able to learn a good overall model of the search space in a small number of queries. For this specific example, the mapping is relatively smooth, as it is derived from a model fit on experimental data, and therefore a generic GP model has substantial predictive power across the full design space. This is not generally true for more complex datasets.

For the magnetic materials dataset (real experimental measurements), we introduce two tasks: multiband and wishlist estimation. Once again, the BAX acquisition functions perform favorably when compared to RS, US, and EHVI in terms of NumberObtained. Additionally, the BAX strategies demonstrate superior performance on the PosteriorJaccardIndex metric. This suggests that while techniques such as US and RS can effectively reduce uncertainty across the entire design space, they may not target the reduction of uncertainty in specific regions of interest. In contrast, GP models trained on data acquired with BAX sampling strategies are, by construction, more accurate and confident in the specific target subset, forgoing accurate modelling in the rest of the design space. This key result highlights that efficient data collection requires targeted design-space sampling.

In addition to the Multiband and Wishlist Algorithms, we also compare the BAX strategies against a state-of-the-art acquisition function designed for a specific goal. In Supplementary Section <ref> and Figure , we compare BAX methods against EHVI for the task of Pareto front optimization. We find that all approaches return similar results for this dataset, highlighting that, even in cases where acquisition functions have been designed for a given task (i.e., EHVI for Pareto front optimization), algorithm-based approaches can perform comparably.

Although MeanBAX, InfoBAX, and SwitchBAX are all acquisition functions for the task of Bayesian Algorithm Execution, they exhibit qualitatively different behaviors. We generally see that MeanBAX tends to perform better in the short term but takes longer to fully estimate the target subset of interest (Figures <ref>-<ref>). This finding can be rationalized in terms of the exploitation/exploration trade-off: MeanBAX is more exploitative by design due to its use of the posterior mean function. In cases where the posterior mean prediction closely models the true function, MeanBAX will acquire a target point at each iteration.
However, the long-term performance of MeanBAX may not be optimal, because the earlier exploitative queries can hinder a detailed understanding of the entire target subset. For experiments involving low automation and short experimental budgets, such as human-intensive nanomaterials synthesis, a strategy like MeanBAX may be preferable to InfoBAX for quickly obtaining solutions that match user specifications. Conversely, InfoBAX, derived from posterior function samples, captures the uncertainty in model predictions and is therefore more successful at exploring the entire target subset. In the presence of noisier data or under-fit models, explorative acquisition functions are also expected to be more robust; we observe this phenomenon in the noise analysis of the nanoparticle synthesis dataset in Figure <ref>B and Figure . By construction, InfoBAX performs experiments to gain information about the location of the target subset. For this reason, InfoBAX sometimes queries points outside the target subset in order to better understand the overall shape of the target region; practically, this could mean that the NumberObtained metric suffers at the expense of potentially improving the PosteriorJaccardIndex. Applications involving high-throughput synthesis and characterization or facilities with self-driving laboratories <cit.> may favor the exploratory InfoBAX approach.

We combine the favorable short- and long-term performance of MeanBAX and InfoBAX through the dynamic and parameter-free SwitchBAX strategy. Here, the SwitchBAX strategy performs MeanBAX unless either (1) there are no predicted target points or (2) the predicted target points have already been measured; under either of these scenarios, the strategy switches to InfoBAX. We find that this approach yields the best overall performance for both the small- and medium-data regimes considered in this work. Interestingly, we also observe a case in Figure  where InfoBAX initially outperforms MeanBAX. In this scenario, SwitchBAX still performs well, indicating that initial sampling based on InfoBAX can assist later MeanBAX performance. In general, we expect SwitchBAX to outperform MeanBAX, since defaulting to InfoBAX is better than defaulting to US; in other words, it is better to explore with a purpose than to perform general exploration. However, it is possible that InfoBAX could outperform SwitchBAX for other datasets or user algorithms, which is an important point to study in future work.

While the BAX strategies generally outperform RS, US, and EHVI for specific target subsets, it may be possible to develop task-specific acquisition functions (like EHVI for Pareto front estimation) that yield equivalent or superior performance. However, creating such acquisition functions requires time and often substantial mathematical insight, and these acquisition functions may only be applicable in specific, one-off settings. The power of the BAX framework lies in abstracting custom acquisition function development away from the user, making it more accessible for experimentalists to employ precisely targeted search strategies. While designed for materials, our method is directly applicable to other fields, and we anticipate that our approach will find broad application across the natural and physical sciences in problems involving multidimensional design and property spaces.

§.§ Code Availability

We provide a user-friendly implementation <cit.> of the three Bayesian algorithm execution strategies at <https://github.com/src47/multibax-sklearn>.
This repository contains tutorial notebooks to aid in guiding real experiments. The GPflow code and generated data <cit.> for this study are also available at <https://github.com/src47/materials-bax-gpflow>.

§ METHODS

§.§ Modelling

Independent GP models with zero prior-mean functions and squared-exponential covariance functions (kernels) were used to model the mapping from the design space to each normalized measured property. The GP modelling was performed using GPflow <cit.>. In addition, Automatic Relevance Determination (ARD) was used, which assigns a different lengthscale, ℓ_{1:d}, to each design variable (Equation <ref>). The squared-exponential kernel encourages the model to predict similar values of the measured property in local regions of the design space. The lengthscale is a hyperparameter which controls the scale of this smooth behavior; a small lengthscale allows the GP to capture large changes over small design-space displacements. The kernel variance hyperparameter, α_{1:m}, controls the allowable magnitude of each of the m predicted properties. The likelihood variance, σ_{1:m}, a hyperparameter which models noise on the measured properties, was fixed to one of the following constant values (0.0, 0.01, 0.05, or 1.0). The kernel has the form

k(x, x') = α_m exp( − ∑_{d'=1}^{d} (x_{d'} − x'_{d'})² / (2 ℓ_{d'}²) ).

The GP hyperparameters for the lengthscales and kernel variances were fit using five-fold cross-validation with the log likelihood as the optimization metric. An adaptive hyperparameter-fitting scheme was employed, in which the hyperparameters were re-fit for every ten data points collected.

Each design variable was normalized to the range (0, 1) using min-max scalarization. This normalization is possible because the design space is assumed to be discrete and fully specified and enumerated. Measured properties were normalized to the range (−1, 1), with maximum and minimum ranges estimated from domain knowledge. For the nanoparticle synthesis dataset, the nanoparticle sizes and polydispersities were assumed to fall in the ranges of [0, 30] nm and [0, 30] %, respectively. For the magnetic materials dataset, the measured properties fall in [0, 1] mrad and [0, 10] mT for the Kerr rotation and coercivity, respectively.

§.§ Datasets

§.§.§ Nanoparticle synthesis

The nanoparticle synthesis dataset consists of 1997 random settings of the variables x = [x_1, x_2, x_3, x_4] (normalized Ti(Teoa)_2 concentration, TeoaH_3 concentration, pH, and T) evaluated with an empirically fit model <cit.> for the nanoparticle radius (y_1) and the polydispersity (y_2) as a function of the synthesis parameters:

y_1 = 19.36549 - 0.2797x_1 + 1.56885x_2 + 3.5447x_3 + 1.82225x_4 - 1.1978x_1x_2 - 1.66594x_1x_3 - 1.62873x_1x_4 - 0.02003x_2x_3 - 0.001268x_2x_4 - 0.35086x_3x_4 + 0.3914x_1^2 + 0.52265x_2^2 - 0.81701x_3^2 - 2.74921x_4^2,

y_2 = 19.6114239 + 1.0313718x_1 + 1.48527x_2 + 1.7991534x_3 - 4.1983899x_4 + 1.4263262x_1x_2 - 0.4279443x_1x_3 - 1.3865203x_1x_4 - 1.051601x_2x_3 - 2.06380x_2x_4 - 2.476674x_3x_4 - 0.4497319x_1^2 - 1.8040123x_2^2 - 3.8699325x_3^2 - 2.6148x_4^2.

Gaussian noise with σ = 0.01 or 0.05 was added to the normalized values of y_1 and y_2 at the point of measurement. Figure  also shows cases with noise levels of 0.0 and 0.1.

§.§.§ Magnetic Property Estimation

The magnetic materials dataset corresponds to 921 compositions from the Fe-Co-Ni ternary alloy system <cit.>. The composition values for each element range from [0, 100]. For each ternary composition, two materials properties are measured: the Kerr rotation (mrad) and the coercivity (mT).
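For reference, the GP setup described in the Modelling subsection above might be reproduced in GPflow roughly as follows (a sketch with toy data and our own variable names, not the released code):

```python
import numpy as np
import gpflow

# Toy normalized data: design variables in (0, 1), one GP per measured property.
rng = np.random.default_rng(0)
X = rng.uniform(size=(30, 4))
y = np.sin(3.0 * X[:, :1]) + 0.01 * rng.standard_normal((30, 1))

kernel = gpflow.kernels.SquaredExponential(
    variance=1.0,
    lengthscales=np.ones(4),   # ARD: one lengthscale per design variable
)
model = gpflow.models.GPR(data=(X, y), kernel=kernel)  # zero prior mean by default
model.likelihood.variance.assign(0.01 ** 2)            # fixed measurement-noise level
gpflow.set_trainable(model.likelihood.variance, False)

# Fit lengthscales and kernel variance by maximizing the log likelihood.
gpflow.optimizers.Scipy().minimize(model.training_loss, model.trainable_variables)

X_query = rng.uniform(size=(5, 4))
f_mean, f_var = model.predict_f(X_query)   # posterior mean and variance
```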
For each ternary composition, two material properties are measured: the Kerr rotation (mrad) and the coercivity (mT).

§.§ Sequential Design of Experiments

Data acquisition strategies were compared for the Library, Multiband, and Wishlist algorithms. We used the Trieste EHVI implementation for multi-objective Bayesian optimization <cit.>. In general, the following settings were used: 10 random initial datapoints, 20 experimental repeats, adaptive hyperparameter fitting every 10 iterations, and prevention of re-querying design points. In Supplementary Section <ref> and Figure <ref> we also show sampling results for GP models with fixed hyperparameters. The InfoBAX and SwitchBAX strategies used 15 posterior samples from the GP model for algorithm execution. 300 and 500 datapoints were acquired for the nanoparticle synthesis and magnetic materials datasets, respectively. Visualizations of design space sampling use the python-ternary package <cit.>.

§.§ Acknowledgements

This work is supported in part by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Contract No. DE-AC02-76SF00515. AR and FHJ acknowledge funding from the National Science Foundation (NSF) program Designing Materials to Revolutionize and Engineer our Future (DMREF) via a project DMR-1922312. CJT was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Chemical Sciences, Geosciences, and Biosciences Division under SLAC Contract No. DE-AC02-76SF00515. The authors thank D. Boe, C. Cheng, S. Gasiorowski, J. Gregoire, T. Lane, W. Michaels, Y. Nashed, M. Robinson, R. Walroth, and C. Wells for manuscript feedback. The authors acknowledge the use of ChatGPT to streamline software development.

§.§ References

§ SUPPLEMENTARY TEXT

§.§ Additional Algorithms for Materials Subset Estimation

In this section, we briefly describe two other useful flavors of algorithms developed here for searching materials spaces that are enabled by the BAX framework: percentile and conditional algorithms.

§.§.§ Percentile Algorithms

Percentile algorithms are distinct from algorithms which use pre-defined thresholds or bounds on the measured properties. In this setting, the thresholds are not fixed explicitly by value. Such algorithms are useful when it is unknown which specific thresholds should be chosen for a design space. For example, instead of stating the goal “Find all materials in the design space with measured property value greater than 5.0”, it is possible to ask “Find the subset of the design space corresponding to the top 10% of measured property values”. For a design space with 100 material candidates, the target subset in the first case may be empty; in the second case, its size is exactly 10 (for a single-property case). As an example, a possible percentile algorithm is to return the subset of the design space corresponding to the top t_1% of measured property 1 and/or the top t_2% of measured property 2 (Algorithms <ref>-<ref>). An example of data acquisition targeting a percentile union is shown in Figure <ref>. Such a formalism is easily extended to a larger number of measured properties. However, one subtlety is that when there are more than two measured properties, there are multiple ways to combine the intersection and union operations. The BAX framework is agnostic to these user choices. As long as the subset filtering logic can be written, the corresponding request can be automatically converted into an acquisition function.
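To make the percentile construction above concrete, a minimal Python sketch of such a filtering algorithm is given below. The function is illustrative rather than taken from the released repository, and it assumes a fully enumerated design space represented by an (N, M) array of property values, as for the datasets used in this work.

import numpy as np

def percentile_subset(Y, percentiles, combine="intersection"):
    """Indices of design points in the top t_m percent of each property.
    Y           : (N, M) array of measured or predicted property values
    percentiles : length-M list, e.g. [10, 10] for the top 10% of each
    combine     : 'intersection' (and) or 'union' (or) of the level sets
    """
    masks = []
    for m, t in enumerate(percentiles):
        cutoff = np.percentile(Y[:, m], 100 - t)  # rank-based threshold
        masks.append(Y[:, m] >= cutoff)
    reducer = np.logical_and if combine == "intersection" else np.logical_or
    return np.flatnonzero(reducer.reduce(masks))

# Toy usage: top 10% of property 1 intersected with top 20% of property 2.
rng = np.random.default_rng(1)
Y = rng.normal(size=(100, 2))
idx = percentile_subset(Y, [10, 20], combine="intersection")

Note that, unlike a fixed value threshold, the single-property top-t% set is nonempty by construction, which is exactly the robustness property discussed above.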
§.§.§ Conditional Algorithms

One specific limitation of intersection algorithms (both the Multiband and Percentile algorithms) is that there may not be any points in the design space that jointly satisfy both band or percentile thresholds. For example, this could happen if the sets of points corresponding to properties 1 and 2 were disjoint. Prior to experimentation, it is typically unknown whether this set will be empty or not. We introduce a strategy based on conditional logic for increased robustness of data acquisition. Below, we describe a conditional algorithm which returns a multiband (intersection of two level bands) if it exists and otherwise returns target points corresponding to a level band in only one of the properties. Note that an algorithm executing on the ground-truth function will always yield the same output (i.e. the intersection either exists or it does not). However, when executing an algorithm on the posterior mean or on draws from the trained GP model, either condition may be achieved. Data acquisition is then suggested in a manner which aids in determining which condition actually holds. As a concrete example, one could specify the strategy of finding monodisperse 4 nm particles if they are achievable given ranges on the synthesis parameters, and otherwise simply isolating conditions corresponding to monodisperse particles of any size. Examples of a Conditional Multiband algorithm where the primary goal is achievable and where it is unachievable are shown in Figure <ref>. Crafting an algorithm in this way allows a user to bake a fail-safe into situations where the experimental goal may be unachievable. This conditional logic can be extended to a hierarchy of conditions and is the subject of future work.

§.§ Comparison Against an Acquisition Function Designed for a Specific Task

In general, the task of acquisition function development is challenging and quite technical. The advantage of the BAX approach is that a significant portion of this development is abstracted from the user. However, in this section, we sought to compare the performance of the BAX strategies on a multi-property task for which optimized acquisition functions already exist. We chose the task of Pareto front estimation (i.e. finding the ground-truth target subset which corresponds to optimal trade-offs in the measured properties). For this task, EHVI is one of several state-of-the-art algorithms. In Figure <ref>, we compare EHVI against MeanBAX, InfoBAX, and SwitchBAX and find that all approaches give similar long-term performance; interestingly, MeanBAX and SwitchBAX actually outperform EHVI at low dataset sizes for this specific dataset. This important result shows that there is not necessarily a disadvantage in using a BAX strategy (in terms of the quality of collected samples) relative to a custom acquisition function. However, it is important to note that EHVI is a well-optimized and differentiable acquisition function which is likely faster and more computationally efficient for large datasets or high-dimensional optimization.

§.§ Data Acquisition Using a Gaussian Process model with fixed hyperparameters

This section considers the algorithms presented in the main text (Library, Multiband, and Wishlist) under the condition of a GP model with pre-fixed hyperparameters. Note that in the main text, results were presented using hyperparameters which were fit adaptively.
While the adaptive hyperfitting scheme represents a more practical data collection scenario, it somewhat confounds the direct comparison between acquisition functions. Under an adaptive scheme, the model hyperparameters at any given iteration depend on the specific set of data collected. This means that the difference in performance between different sampling strategies could be ascribed to the combination of the acquisition function and the model, rather than to the acquisition function alone. Therefore, we consider an idealized scenario in which we instead fit the GP hyperparameters using five-fold cross-validation on the entire dataset and pick the hyperparameters corresponding to the largest log likelihood. These hyperparameters are fixed throughout data collection, allowing for a direct comparison between acquisition functions. The results for the three subset algorithms are shown in Figures <ref>A-C. We observe similar behavior to the adaptive hyperfitting case, with the goal-aware BAX strategies outperforming RS, US and EHVI.

§ SUPPLEMENTARY FIGURES | http://arxiv.org/abs/2312.16078v1 | {
"authors": [
"Sathya Chitturi",
"Akash Ramdas",
"Yue Wu",
"Brian Rohr",
"Stefano Ermon",
"Jennifer Dionne",
"Felipe H. da Jornada",
"Mike Dunne",
"Christopher Tassone",
"Willie Neiswanger",
"Daniel Ratner"
],
"categories": [
"cond-mat.mtrl-sci",
"physics.data-an"
],
"primary_category": "cond-mat.mtrl-sci",
"published": "20231226145706",
"title": "Targeted materials discovery using Bayesian algorithm execution"
} |
http://arxiv.org/abs/2312.16081v1 | {
"authors": [
"Seyed Naseh Sajadi",
"Mohsen Khodadi",
"Orlando Luongo",
"Hernando Quevedo"
],
"categories": [
"gr-qc"
],
"primary_category": "gr-qc",
"published": "20231226150935",
"title": "Anisotropic Generalized Polytropic Spheres: Regular 3D Black Holes"
} |
|
Analysis of a nonconforming finite element method for vector-valued Laplacians on the surface Carolin Mehlmann Institute of Analysis and Numerics, Otto-von-Guericke University Magdeburg, Universitätsplatz 2, 39106 Magdeburg, Germany, [email protected]=============================================================================================================================================================================

Recently a nonconforming surface finite element was developed to discretize 2D vector-valued compressible flow problems in a 3D domain. In this contribution we derive an error analysis for this approach applied to a vector-valued Laplace problem, which is an important operator for fluid equations on the surface. In our setup, the problem is approximated via edge integration on local flat triangles using the nonconforming linear Crouzeix-Raviart element. The flat planes coincide with the surface at the edge midpoints, which is also where the Crouzeix-Raviart element requires continuity between two neighbouring elements. The developed Crouzeix-Raviart approximation is a non-parametric approach that works on local coordinate systems established in each triangle. This setup is numerically efficient and straightforward to implement. For this Crouzeix-Raviart discretization we derive optimal error bounds in the H^1-norm and L^2-norm and present an estimate for the geometric error. Numerical experiments validate the theoretical results.

§ INTRODUCTION

Low-order nonconforming finite element discretizations play an important role in the approximation of geophysical flow problems on the surface of the Earth <cit.>. On the meshes used in climate models this type of discretization is a good compromise between accuracy and efficiency of the numerical setup <cit.>. The analysis of a nonconforming finite element discretization for the vector-valued surface Laplacian, which forms an essential part of the fluid equations in climate applications, is the topic treated in this paper. In recent years there has been increasing interest in the analysis of finite elements for surface fluid equations, e.g. <cit.>. Regarding the error analysis, most works focus on viscous surface flows. Several papers derive error bounds for finite element discretizations of the surface Stokes problem <cit.>. The majority of authors consider H^1-conforming finite elements and weakly enforce the tangentiality of the velocity field by a Lagrange multiplier or a penalty method <cit.>. Other approaches give up the H^1-conformity but realize the tangentiality by construction of the elements <cit.>. A third option studied in the literature is surface finite elements based on stream functions <cit.>. An important aspect in the analysis of finite element methods for flow problems on surfaces is to ensure that the numerical approximation of the flow is tangential to the surface. This difficulty already arises in the analysis of the surface Laplace equation. Vector-valued Laplace problems on the surface are easier to analyze than the Stokes problem because the equation involves only the velocity vector and no pressure unknown. Therefore, the vector-valued Laplace equations are a useful simplification of more complex surface flow problems for finite element analysis. In the work of <cit.>, finite element methods are studied for a surface vector Laplace problem and optimal error bounds are derived.
The tangentiality condition of the vector-valued velocity field is enforced either by a penalty term or a Lagrange multiplier.

In this paper we analyze a recently developed nonconforming Crouzeix-Raviart approximation <cit.> for the surface Laplace equation, which has also been studied in <cit.>. In this setup the problem is discretized in local coordinate systems based on flat instead of curved mesh elements (triangles). A similar approximation has been applied to discretize the scalar-valued shallow water equation <cit.>. The usage of flat instead of curved mesh elements introduces a geometrical error. In this setup the error coming from the geometry is balanced with the first-order approximation of the nonconforming finite element. This aspect has been analyzed for the scalar Laplace problem in <cit.>. In this contribution we derive an analysis of a vector-valued Laplacian based on covariant derivatives of the tangential fields. The problem includes the numerical treatment of the tangentiality condition of the velocity field. We present an error analysis exactly for the numerical approach described in <cit.>. The main contributions of the paper are the following. We derive a finite element variational formulation of the problem as well as a-priori error estimates. The derivation shows optimal error bounds and presents an estimate for the geometrical error. The theoretical findings are validated with numerical results.

The considered Crouzeix-Raviart approach is very different from other vector-valued finite element approximations on the surface. The method does not need an extra condition to enforce the tangentiality of the velocity field, as is common in many other approaches cited above. By construction the discrete velocity field always lies in the corresponding tangential plane. Unlike other penalty-free surface elements <cit.>, which discretize the incompressible Stokes equation with H(div)-conforming finite elements, we consider the Crouzeix-Raviart approach as a nonconforming finite element in the H^1-setting, with the aim to apply the discretization to compressible, non-saddle-type fluid problems as in <cit.>. The finite elements studied in <cit.> are H^1-nonconforming nodal elements. However, the error analysis derived in the paper of <cit.> does not present an L^2-estimate, which is given in our contribution.

The construction of our Crouzeix-Raviart approximation differs from other penalty-free surface elements. In the work of <cit.> the curved geometry is approximated by a polyhedron whose vertices are aligned to the surface. In each triangle of the polyhedron a finite element function is set up via a Piola transformation. In contrast to that, we consider an approximation that coincides with the surface at the edge midpoints. This is also the position where the Crouzeix-Raviart element requires continuity across the mesh elements. To avoid transformations, we set up a local basis in each plane. The basis is obtained by projecting tangential surface vectors into the plane, with the projection defined via the outward pointing surface normal vector. A sketch of the setting is shown in Figure <ref>. The discretization is straightforward to implement and benefits from its efficiency, see for instance <cit.>.

The paper is structured as follows. Section <ref> presents the vector-valued Laplace problem and introduces the surface derivatives. The nonconforming Crouzeix-Raviart approach is outlined in Section <ref>. The error analysis is carried out in Section <ref>.
The main results of the section are the optimal error bounds in the H^1-norm and L^2-norm as well as the estimates of the geometrical error. Numerical examples validating the theoretical results are presented in Section <ref>. The paper closes with a conclusion in Section <ref>.

§ PRELIMINARIES

§.§ Surface, derivatives and norms

We consider an oriented, connected, C^∞-smooth surface Γ in ℝ^3 with ∂Γ = ∅. The signed distance function of Γ is given by d(x). Using the distance function we define the unit outward pointing normal vector n and the Weingarten map H as

n(x) = ∇ d(x), H(x) = ∇ n(x) = ∇^2 d(x),

where ∇ is the standard gradient in ℝ^3. We introduce a neighborhood, a strip around Γ with distance δ, as

U = { x ∈ ℝ^3 : dist(x, Γ) ≤ δ },

where dist(x, Γ) is the Euclidean distance between x and Γ. Let δ be small enough such that a unique closest point mapping p(x) from U → Γ exists with

p(x) = x - d(x) n(p(x)).

The extension of a scalar function φ from Γ to its neighborhood Γ_h ⊂ U is defined along the normal directions as

φ̃(x) = φ(p(x)), ∀ x ∈ U.

Analogously the lift of a function φ: Γ_h → ℝ on Γ_h ⊂ U to Γ is given as

φ^l(x) = φ(η(x)), ∀ x ∈ Γ,

where η(x) is the unique solution of x = p(η) = η - d(η) n(x). Throughout the paper we apply a component-wise lifting and extension to vector-valued functions. To define the surface derivatives we introduce the projection onto the tangential plane as

P = I - n n^T, P = (P^ij), i, j = 1, ..., 3,

where the superscript T denotes the transpose. The surface gradient is defined as

D φ := P ∇φ̃, D_i φ = ∇_i φ̃ - ∑_k ∇_k φ̃ n^i n^k,

where ∇_k φ̃ = ∂φ̃/∂x_k. The covariant derivative of a tangential vector field u is defined as

∇_Γ u := P ∇ũ P, (∇_Γ u)_ij = ∑_k P^ik D_j u^k.

Similar to <cit.> we introduce the divergence operator for vector-valued u: Γ → ℝ^3 and matrix-valued objects A: Γ → ℝ^{3×3} as

div_Γ u = tr(∇_Γ u) = tr(P ∇ũ), div_Γ(A) := ( div_Γ(e_1^T A), div_Γ(e_2^T A), div_Γ(e_3^T A) ),

where e_1, e_2, e_3 are the unit vectors of ℝ^3. The surface Laplacian is defined by

Δ_Γ u := div_Γ(∇_Γ u).

This is the so-called Bochner Laplacian, which is also treated in <cit.>. In differential geometry and in exterior calculus a different Laplace operator, the Hodge Laplacian, is considered. Another surface Laplace operator, based on the symmetric gradient ∇_Γ u + (∇_Γ u)^T, is analyzed in <cit.>. In the following, we introduce the surface Sobolev space of k times weakly differentiable scalar functions and the corresponding norm as

H^k(ω) := { φ ∈ L^2(ω) | D^α_Γ φ ∈ L^2(ω), |α| ≤ k }, ‖φ‖_H^k(ω) = ( ∑_{|α| ≤ k} ‖D^α_Γ φ‖^2_L^2(ω) )^{1/2},

where ω ⊂ Γ. We write 𝐇^k(Γ) := (H^k(Γ))^3 for vector-valued functions in ℝ^3. The space of vector-valued functions that are tangential is

𝐇_tan^k(Γ) := { u ∈ 𝐇^k(Γ) | u · n = 0 }.

§.§ Model problem

The vector-valued problem based on the Bochner Laplacian is given as

-div_Γ(∇_Γ u) + u = f,

where the zero-order term u has been added to the left-hand side of the equation to avoid technical details related to the kernel of the Laplace operator. The corresponding weak formulation reads: Find u ∈ 𝐇_tan^1(Γ) s.t.

a(u, v) = l(v), ∀ v ∈ 𝐇_tan^1(Γ),

with a(u, v) = (∇_Γ u, ∇_Γ v)_Γ + (u, v)_Γ for u, v ∈ 𝐇_tan^1(Γ) and l(v) = (f, v)_Γ. Note that (·,·) is the standard L^2-inner product and f ∈ 𝐇^{-1}_tan(Γ). The Bochner-Laplace equation is analyzed in <cit.>. In that study the authors demonstrated the uniqueness of equation (<ref>) by the Lax-Milgram theorem and derived the following regularity estimate:

‖u‖_𝐇^2_tan(Γ) ≤ c ‖f‖_L^2(Γ),

where c is a generic positive constant.

§ THE NONCONFORMING FINITE ELEMENT APPROXIMATION ON THE SURFACE

We denote by Γ_h = ∪_{i∈ℕ} K_i ⊂ U the triangulation of Γ. The triangulation is shape-regular and quasi-uniform with maximal diameter h = max_{K∈Γ_h} diam(K). The set of all edges of Γ_h is given by E_h.
Let m_E be the midpoint of an edge E ∈ E_h and let M_h be the set of edge midpoints of E_h. The triangulation is constructed such that all edge midpoints lie on Γ. The outward pointing unit normal vector to Γ_h on K is given by n_h. On each element K the outward pointing normal vector is calculated by

n_h = (m_E_1 + m_E_2 + m_E_3) / ‖m_E_1 + m_E_2 + m_E_3‖.

Using n_h the projection onto the tangent space of Γ_h is defined by

P_h = I - n_h n_h^T.

Any edge E ∈ E_h is shared by two triangles K^+ and K^-. We denote the conormal of an edge E with respect to K^± by n^±_E. To any K ∈ Γ_h a curved triangle K^l = p(K) on Γ is related, such that the set of curved triangular faces is defined as Γ^l_h = {K^l : K ∈ Γ_h} with Γ = ∪_{K^l ∈ Γ^l_h} K^l. The conormal of a curved edge E^l = p(E) is named n^±_{E^l}. On each element K a local basis is realized. To this end, the three-dimensional tangential surface vectors (t_{E_i}^l, n_{E_i}^l) are projected into the plane K by

t_{E_i} = P t_{E_i}^l, n_{E_i} = P n_{E_i}^l, i = 1, ..., 3,

where P is the orthogonal projection defined in (<ref>). A sketch of the projected vectors is shown in the left panel of Figure <ref>. The space of Crouzeix-Raviart functions consists of the piecewise linear functions that are continuous at the edge midpoints m_E:

V_h = { v_h ∈ L^2(Γ_h) : v_h|_K ∈ ℙ^1(K), v_h is continuous in m_E ∈ M_h },

where ℙ^1(K) is the set of linear polynomials on K. The coupling of two Crouzeix-Raviart basis functions on two neighboring elements is shown in the right panel of Figure <ref>. In each edge midpoint m_{E_i}, i = 1, 2, 3, of a triangle K the linear finite element basis function is defined by

ψ_i|_K(m_{E_j}) = δ_ij, ∀ i, j = 1, ..., 3.

The jump of a vector-valued function v_h ∈ 𝐕_h := (V_h)^3 across an edge E is defined as

[v_h] = lim_{s → 0^+} ( v_h(x - s n_E^+) - v_h(x - s n_E^-) ).

It holds that ∫_E [v_h] dσ_h = 0 for v_h ∈ 𝐕_h. The discrete space of tangential velocities is defined as

𝐕^tan_h := { v_h ∈ 𝐕_h | v_h · n_h = 0 },

where n_h is the unit outer normal on a triangle K. Analogously to (<ref>) we introduce the discrete surface gradient for φ_h ∈ V_h as

D^h φ_h := P_h ∇φ_h, D_i^h φ_h = ∑_k P_h^ik ∇_k φ_h.

In the same manner, the discrete vector-valued derivative for any u_h ∈ 𝐕^tan_h is defined as

∇_{Γ_h} u_h = P_h(x) ∇u_h(x) P_h(x), x ∈ Γ_h, (∇_{Γ_h} u_h)_ij = ∑_k P_h^ik D_j^h(u_h^k).

The discrete analog to (<ref>) reads: Find u_h ∈ 𝐕^tan_h s.t.

a_h(u_h, v_h) = l_h(v_h) ∀ v_h ∈ 𝐕^tan_h,

with the bilinear form and the linear functional

a_h(u_h, v_h) = ∑_{K∈Γ_h} (∇_{Γ_h} u_h, ∇_{Γ_h} v_h)_K + (u_h, v_h)_{Γ_h}, l_h(v_h) = (f̃, v_h)_{Γ_h}.

We state the following relationship between the vector-valued gradient and the Laplace operator. Applying Green's formula to the first term of a_h(u_h, v_h), using the identity ∑_i P_h^ik P_h^il = P_h^kl, and making use of (<ref>) gives

(∇_{Γ_h} u_h, ∇_{Γ_h} v_h)_K = ∫_K ∑_{i,j,k,l} P_h^ik D^h_j u_h^k P_h^il D^h_j v_h^l = ∫_K ∑_{j,k,l} P_h^kl D^h_j u_h^k D^h_j v_h^l = -∫_K ∑_{j,k,l} P_h^kl D^h_j D^h_j u_h^k v_h^l + ∫_{∂K} ∑_{j,k,l} P_h^kl D^h_j u_h^k n_E^j v_h^l.

For better readability we refrain from writing out the summation over the individual components. In a more compact notation we get

(∇_{Γ_h} u_h, ∇_{Γ_h} v_h)_K = -(P_h div_{Γ_h}(∇_{Γ_h} u_h), v_h)_K + ∫_{∂K} (∇_{Γ_h} u_h) n_E · v_h dσ_h.

Based on the weak formulation, the broken H^1 semi-norm is defined as

|v_h|^2_H^1(Γ_h) = ∑_{K∈Γ_h} ‖∇_{Γ_h} v_h‖^2_L^2(K).

The energy norm is given by

‖v_h‖^2_h = |v_h|^2_H^1(Γ_h) + ‖v_h‖^2_L^2(Γ_h) = a_h(v_h, v_h).

We note that ‖·‖_h is a norm on 𝐕^tan_h. Based on the Riesz representation theorem the discrete variational formulation (<ref>) has a unique solution.
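As a numerical illustration of the construction above, the following Python/NumPy sketch builds the element normal n_h from the three edge midpoints, forms the tangential projector P_h = I - n_h n_h^T used to project surface vectors into the plane K, and verifies the defining property ψ_i(m_{E_j}) = δ_ij of the linear Crouzeix-Raviart basis. The barycentric formula ψ_i = 1 - 2λ_i and the indexing convention (edge E_i opposite vertex i) are standard choices assumed here; the snippet is illustrative and not taken from a climate-model implementation.

import numpy as np

def element_frame(midpoints):
    """midpoints : (3, 3) array; rows are the edge midpoints m_E of K,
    assumed to lie on the sphere. Returns the element normal n_h and
    the projector P_h = I - n_h n_h^T onto the tangent plane of K."""
    n_h = midpoints.sum(axis=0)
    n_h /= np.linalg.norm(n_h)            # normalized midpoint average
    P_h = np.eye(3) - np.outer(n_h, n_h)
    return n_h, P_h

def cr_basis(lam):
    """Crouzeix-Raviart basis at barycentric coordinates lam = (l1, l2, l3):
    psi_i = 1 - 2*lam_i equals 1 at the midpoint of the edge opposite
    vertex i and 0 at the two other edge midpoints."""
    return 1.0 - 2.0 * np.asarray(lam)

# Edge midpoints in barycentric coordinates; check psi_i(m_Ej) = delta_ij.
mids = np.array([[0.0, 0.5, 0.5], [0.5, 0.0, 0.5], [0.5, 0.5, 0.0]])
assert np.allclose(np.array([cr_basis(b) for b in mids]), np.eye(3))

# Projecting a surface vector t^l into the plane K: t = P_h @ t^l.
n_h, P_h = element_frame(np.array([[1.0, 0.1, 0.0],
                                   [0.9, 0.0, 0.2],
                                   [1.0, -0.1, 0.1]]))
t = P_h @ np.array([0.0, 1.0, 0.0])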
The discrete problem introduces two nonconformities compared to the original formulation (<ref>), namely the geometric error Γ_h ≠ Γ and the tangential inconsistency n_h ≠ n.

§ A PRIORI ERROR ESTIMATES

The relation between integrals on Γ and Γ_h is realized with the concept of extended functions,

∫_Γ φ dσ = ∫_{Γ_h} φ̃ ν_h dσ_h,

where ν_h is the line element. In the following we establish norm equivalences relating functions defined on Γ to their extensions realized on Γ_h. If φ ∈ H^2(K), then it holds for any K ∈ Γ_h that

‖φ^l‖_L^2(K^l) ≤ c ‖φ‖_L^2(K) ≤ c ‖φ^l‖_L^2(K^l),
|φ^l|_H^1(K^l) ≤ c |φ|_H^1(K) ≤ c |φ^l|_H^1(K^l),
|φ|_H^2(K) ≤ c ‖φ^l‖_H^2(K^l), |φ^l|_H^2(K^l) ≤ c ‖φ‖_H^2(K).

The inequalities (<ref>)-(<ref>) are proven in Appendix B of <cit.>. The above equivalences directly translate to vector-valued functions. We proceed with outlining estimates for the nonconforming interpolation. Let E_K be the set of the three edges of K ∈ Γ_h. The local interpolation operator Π_K: H^1(K) → ℙ^1(K) is defined at the edge midpoints m_E, E ∈ E_K, by

(Π_K φ)(m_E) = 1/|E| ∫_E φ dσ_h, ∀ φ ∈ H^1(K),

where |E| is the length of the edge E. We extend the local interpolation operator to vector-valued functions u ∈ 𝐇^1_tan(Γ) by

(Π_K u)_i(m_E) := 1/|E| ∫_E (P_h ũ|_K)_i dσ_h, i = 1, ..., 3.

Applying the midpoint rule in the scalar case gives

∫_E (Π_K φ) dσ_h = ∫_E φ dσ_h, E ∈ E_K.

The following local interpolation estimate holds <cit.>:

‖φ - Π_K φ‖_L^2(K) + h_K |φ - Π_K φ|_H^1(K) ≤ c h_K^2 |φ|_H^2(K),

where φ ∈ H^2(K) and h_K is the diameter of K. We define the global interpolation operator Π_h: H^1(Γ_h) → V_h by

(Π_h φ)|_K = Π_K φ, ∀ K ∈ Γ_h.

Based on the local error estimate (<ref>), the following global error estimate can be derived:

‖φ - Π_h φ‖_L^2(Γ_h) + h |φ - Π_h φ|_H^1(Γ_h) ≤ c h^2 |φ|_H^2(Γ_h).

The interpolation estimates (<ref>) and (<ref>) can be applied component-wise in the vector-valued case. In the following we introduce a relation to compare the derivatives on Γ and Γ_h. To simplify the notation we make use of

R(x) = P(x) - d(x) H(x),

which is the derivative of the projection (<ref>). It holds that (cf. e.g. <cit.>)

P R = R P = R.

The ij-th component of the vector-valued gradient is given by

(∇_{Γ_h} ũ)_ij = ∑_{k,l} P_h^ik ∂ũ^k/∂x_l P_h^lj = ∑_{k,l,r} P_h^ik D_r u^k(p(x)) ∂p^r/∂x_l P_h^lj = ∑_{k,l,r} P_h^ik D_r u^k(p(x)) (P^rl - d(x) H^rl) P_h^lj = ∑_{k,l,r} P_h^ik D_r u^k(p(x)) R^rl P_h^lj.

The matrix notation of this relation reads

∇_{Γ_h} ũ = P_h ∇u(p(x)) R P_h = P_h ∇_Γ u R P_h.

Together with the geometric relations introduced in Section <ref> and Section <ref>, the following geometric approximation results hold true for our setup. Let Γ_h ⊂ U be an approximation of Γ with the properties outlined above. Assume that the mesh size h is small enough. Then it holds that

‖d‖_L^∞(K) ≤ c h^2,
‖1 - ν_h‖_L^∞(K) ≤ c h^2,
‖n - n_h‖_L^∞(K) ≤ c h,
‖n^±_{E^l} - n^±_E‖ ≤ c h^2.

K ∈ Γ_h coincides with Γ at the edge midpoints. Thus, the linear interpolation I_h d of the distance function d vanishes on K, I_h d|_K = 0. Therefore, we get

‖d‖_L^∞(K) = ‖d - I_h d‖_L^∞(K) ≤ c h^2,

where we use (<ref>) in the last estimate. The proof of (<ref>)-(<ref>) is given in Appendix A of <cit.>.

§.§ Energy error estimate

The aim of this section is to derive the energy error in the discrete energy norm. The main tool used in the derivation is Strang's second lemma. Let u ∈ 𝐇^1_tan(Γ) be the exact solution of (<ref>), let ũ be its extension to U and let u_h ∈ 𝐕^tan_h be the discrete solution of (<ref>). Then the following estimate holds:

‖ũ - u_h‖_h ≤ c inf_{v_h ∈ 𝐕^tan_h} ‖ũ - v_h‖_h (approximation error) + c sup_{v_h ∈ 𝐕^tan_h} |a_h(ũ, v_h) - (f̃, v_h)_{Γ_h}| / ‖v_h‖_h (nonconformity error).
The next lemma introduces geometric error bounds. Let u be the solution of (<ref>) and let ũ be its extension to U. Then the following error estimates hold:

|(u, v_h^l)_Γ - (ũ, v_h)_{Γ_h}| ≤ c h^2 ‖u‖_L^2(Γ) ‖v_h‖_L^2(Γ_h),
|(f, v_h^l)_Γ - (f̃, v_h)_{Γ_h}| ≤ c h^2 ‖f‖_L^2(Γ) ‖v_h‖_L^2(Γ_h),

for any v_h ∈ 𝐕_h and u ∈ 𝐇_tan^1(Γ), f ∈ L^2(Γ). We give the proof of the second inequality (<ref>); the derivation of (<ref>) is similar. By the use of (<ref>), (<ref>) as well as (<ref>) we get

|(f, v_h^l)_Γ - (f̃, v_h)_{Γ_h}| = |(ν_h f̃, v_h)_{Γ_h} - (f̃, v_h)_{Γ_h}| ≤ c h^2 ‖f‖_L^2(Γ) ‖v_h‖_L^2(Γ_h).

The following estimate bounds the nonconformity error. Let u be the solution of (<ref>) and let ũ be its extension to U defined by (<ref>). Then the following error estimate holds:

|a_h(ũ, v_h) - (f̃, v_h)_{Γ_h}| ≤ c h ‖u‖_H^2(Γ) ‖v_h‖_h,

for any v_h ∈ 𝐕^tan_h. We reformulate the statement as

|a_h(ũ, v_h) - (f̃, v_h)_{Γ_h}| = |a_h(ũ, v_h) - (f, v_h^l)_Γ + (f, v_h^l)_Γ - (f̃, v_h)_{Γ_h}|.

With the error bound (<ref>) it holds that

|(f, v_h^l)_Γ - (f̃, v_h)_{Γ_h}| ≤ c h^2 ‖f‖_L^2(Γ) ‖v_h‖_h ≤ c h^2 ‖u‖_H^2(Γ) ‖v_h‖_h,

where we apply in the last inequality that f = -Δ_Γ u + u. In the next step, we derive an error bound for the term a_h(ũ, v_h) - (f, v_h^l)_Γ in (<ref>). Based on (<ref>) we get

a_h(ũ, v_h) - (f, v_h^l)_Γ = -∑_{K∈Γ_h} [ (Δ_{Γ_h} ũ, v_h)_K - (Δ_Γ u, v_h^l)_{p(K)} ] (=: I_1) + ∑_{K∈Γ_h} [ (ũ, v_h)_K - (u, v_h^l)_{p(K)} ] (=: I_2) + ∑_{E∈E_h} ∫_E ( v_h^+ · ∇_{Γ_h} ũ^+ n_E^+ + v_h^- · ∇_{Γ_h} ũ^- n_E^- ) dσ_h (=: I_3).

First, we reformulate I_1 as

|I_1| = | ∑_{K^l∈Γ} (div_Γ(∇_Γ u), v_h^l)_{K^l} - ∑_{K∈Γ_h} (P_h div_{Γ_h}(∇_{Γ_h} ũ), v_h)_K |
≤ c ∑_{K∈Γ_h} | (div_Γ(∇_Γ u), (P - P_h) v_h^l)_{p(K)} | + ∑_{K∈Γ_h} | (div_Γ(∇_Γ u), v_h)_K - (ν_h div_Γ(∇_Γ u), v_h)_K | + ∑_{K∈Γ_h} | (div_Γ(∇_Γ u) - div_{Γ_h}(∇_{Γ_h} ũ), v_h)_K | (=: I_4),

where we used the triangle inequality and relation (<ref>) in the last inequality. Inserting P_h ± (P - P_h) and applying (<ref>) and estimate (<ref>), the term I_4 can be bounded as

|I_4| ≤ c h ‖u‖_H^2(K) ‖v_h‖_L^2(K).

Using the estimate of I_4, the triangle inequality and the bounds (<ref>), (<ref>) we get

|I_1| ≤ c h ‖u‖_H^2(Γ) ‖v_h‖_h.

We proceed with I_2. Applying the geometric estimates given in (<ref>) we bound I_2 as

|I_2| ≤ c h^2 ‖u‖_L^2(Γ) ‖v_h‖_h.

To derive an estimate for I_3 we make use of the relation n_{E^l}^+ = -n_{E^l}^- and get

I_3 = ∑_{E∈E_h} ∫_E (v_h^+)^T (∇_{Γ_h} ũ n_E^+) + (v_h^-)^T (∇_{Γ_h} ũ n_E^-) dσ_h
= ∑_{E∈E_h} ([v_h], ∇ũ n_E^+)_E + (v_h^-, ∇ũ (n_E^+ ± n_{E^l}^+))_E + (v_h^-, ∇ũ n_E^-)_E
= ∑_{E∈E_h} ([v_h], ∇ũ n_E^+)_E (=: I_5) + ∑_{E∈E_h} (v_h^-, ∇ũ (n_E^+ - n_{E^l}^+))_E + (v_h^-, ∇ũ (n_E^- - n_{E^l}^-))_E (=: I_6).

We bound I_5 in the next step, where we use the relation ∇ũ = ∇u (P - dH) and the fact that [Π^0_E v_h] = 0 for Π^0_E v_h = 1/|E| ∫_E v_h dσ_h:

I_5 = ∑_{E∈E_h} ([v_h], ∇u (P - dH) n_E^+)_E = ∑_{E∈E_h} ([v_h], ∇u (P - dH ± P_h) n_E^+)_E
= ∑_{E∈E_h} ([v_h - Π^0_E v_h], ∇u (P - P_h - dH) n_E^+)_E + ([v_h - Π^0_E v_h], ∇u P_h n_E^+)_E.

With estimates (<ref>), (<ref>) as well as the Poincaré inequality and the trace inequality we derive the following bounds:

∫_E |∇u (P - P_h)|^2 dσ_h ≤ c h^2 ∫_E |∇u|^2 dσ_h ≤ c h ‖u‖^2_H^2(K),
∫_E |∇u d(x) H(x)|^2 dσ_h ≤ c h^3 ‖u‖^2_H^2(K),
∫_E [v_h - Π^0_E v_h]^2 dσ_h ≤ c h ( |v_h|^2_H^1(K^+) + |v_h|^2_H^1(K^-) ) ≤ c h ‖v_h‖^2_h.
We apply estimate (<ref>), the trace and Cauchy-Schwarz inequalities as well as the Poincaré inequality to the second term of (<ref>) and get

([v_h - Π^0_E v_h], ∇u P_h n_E^+)_E = ([v_h - Π^0_E v_h], ∇(u - Π_h u) P_h n_E^+)_E
≤ ( ∫_E [v_h - Π^0_E v_h]^2 dσ_h )^{1/2} ( ∫_E |∇(u - Π_h u)|^2 dσ_h )^{1/2}
≤ c ( h |v_h|^2_H^1(K^+) + h |v_h|^2_H^1(K^-) )^{1/2} ( h |u|^2_H^2(K) )^{1/2}
≤ c h ‖v_h‖_h ‖u‖_H^2(K).

The final estimate for I_5 results from applying the Cauchy-Schwarz inequality to the first term of (<ref>) together with the estimates (<ref>) and (<ref>):

|I_5| ≤ c h ‖u‖_H^2(Γ) ‖v_h‖_h.

To derive a bound for I_6, we apply the Cauchy-Schwarz inequality, estimate (<ref>) and the trace inequality and get

|I_6| = ∑_{E∈E_h} | (v_h^-, ∇ũ (n_E^+ - n_{E^l}^+) + ∇ũ (n_E^- - n_{E^l}^-))_E |
≤ c ∑_{E∈E_h} ( ∫_E (v_h^-)^2 dσ_h )^{1/2} ( ∫_E |∇ũ (n_E^+ - n_{E^l}^+) + ∇ũ (n_E^- - n_{E^l}^-)|^2 dσ_h )^{1/2}
≤ c ∑_{E∈E_h} ( ∫_E (v_h^-)^2 dσ_h )^{1/2} ( ∫_E h^4 |∇ũ|^2 dσ_h )^{1/2}
≤ c h^2 ∑_{E∈E_h} ( ∫_E v_h^2 dσ_h )^{1/2} ( ∫_{E^l} |∇u|^2 dσ_h )^{1/2}
≤ c h ‖u‖_H^2(K) ‖v_h‖_h.

The final estimate results from the combination of I_1, I_2, I_5 and I_6.

Let ũ be the extension of the exact solution u of (<ref>) to U and let u_h be the discrete solution of (<ref>). Then the following error estimate holds:

‖ũ - u_h‖_h ≤ c h ‖f‖_L^2(Γ).

Lemma <ref> as well as the regularity estimate (<ref>) gives

sup_{v_h ∈ 𝐕^tan_h} |a_h(ũ, v_h) - (f, v_h^l)| / ‖v_h‖_h ≤ c h ‖f‖_L^2(Γ).

We apply (<ref>) component-wise to ũ. This gives

inf_{v_h ∈ 𝐕^tan_h} ‖ũ - v_h‖_h ≤ ‖ũ - Π_h ũ‖_L^2(Γ_h) + |ũ - Π_h ũ|_H^1(Γ_h) ≤ c h |ũ|_H^2(Γ_h).

The proof is completed by combining Strang's second lemma (<ref>) with the error bounds (<ref>) and (<ref>).

§.§ L^2-error estimate

The derivation of the L^2-error estimate is based on the Aubin-Nitsche trick. The dual problem reads: Find w ∈ 𝐇_tan^1(Γ) s.t.

a(w, v) = (e, v)_Γ, ∀ v ∈ 𝐇^1_tan(Γ),

where e = u - u_h^l ∈ L^2(Γ). The following regularity estimate holds true (cf. <cit.>):

‖w‖_H^2(Γ) ≤ c ‖e‖_L^2(Γ).

The corresponding discrete dual problem reads: find w_h ∈ 𝐕_h^tan s.t.

a_h(v_h, w_h) = (v_h, ẽ)_{Γ_h}, ∀ v_h ∈ 𝐕_h^tan.

Based on Theorem <ref> the following energy estimate results for the dual solution:

‖w̃ - w_h‖_h ≤ c h ‖e‖_L^2(Γ).

In the next theorem we derive an error estimate involving the primal and the dual solution. Let u be the solution of the primal problem (<ref>) and let w be the solution of the dual problem (<ref>). Then we get

a_h(ũ, w̃) - a(u, w) ≤ c h^2 ‖u‖_H^2(Γ) ‖w‖_H^2(Γ).

To improve readability we define

A_h := 1/ν_h (P - dH) P_h (P - dH) = 1/ν_h R P_h R.

Let A_h^l be the lifted version of A_h. It holds that |P - A_h^l| ≤ c h^2 and |A_h - P_h| ≤ c h^2 (cf. <cit.>). In order to make use of these two estimates we first rewrite the first term of (<ref>). Applying (<ref>) gives

a_h(ũ, w̃) = ∑_K (∇_{Γ_h} ũ, ∇_{Γ_h} w̃)_K + (ũ, w̃)_{Γ_h}
= ∑_K (P_h ∇u(p(x)) R P_h, P_h ∇w(p(x)) R P_h)_K + (ũ, w̃)_{Γ_h}
= ∑_K (P_h ∇u(p(x)) R, ∇w(p(x)) ν_h A_h)_K + (ũ, w̃)_{Γ_h}.

The transformation from Γ_h to Γ gives

a_h(ũ, w̃) = ∑_{K^l} (A_h^l ∇_Γ u, ∇_Γ w)_{K^l} + ∫_Γ 1/ν_h^l u · w dσ.

In order to apply estimate (<ref>), we rewrite a(u, w), the second term of (<ref>), as

a(u, w) = ∑_{K^l} (∇_Γ u, ∇_Γ w)_{K^l} + ∫_Γ u · w dσ.

The combination of (<ref>) and (<ref>) gives

a(u, w) - a_h(ũ, w̃) = ∑_{K^l} [ (∇_Γ u, ∇_Γ w)_{K^l} - (A_h^l ∇_Γ u, ∇_Γ w)_{K^l} ] (=: I) + ∫_Γ (1 - 1/ν_h^l) u · w dσ (=: II).

Based on the geometric estimate (<ref>) the second term is bounded as

|II| ≤ c h^2 ‖u‖_L^2(Γ) ‖w‖_L^2(Γ).

We note that

(∇_Γ u, ∇_Γ w)_{K^l} - (A_h^l ∇_Γ u, ∇_Γ w)_{K^l} = ((P - A_h^l) ∇_Γ u, ∇_Γ w)_{K^l},

so that

|I| ≤ c h^2 ‖∇_Γ u‖_L^2(Γ) ‖∇_Γ w‖_L^2(Γ).

Let u be the solution of the primal problem (<ref>) and let w be the solution of the dual problem (<ref>).
Then we get

a_h(ũ, w̃ - Π_h w̃) ≤ c h^2 ‖u‖_H^2(Γ) ‖w‖_H^2(Γ),
a_h(ũ - Π_h ũ, w̃) ≤ c h^2 ‖u‖_H^2(Γ) ‖w‖_H^2(Γ).

We note that Δ_{Γ_h} Π_h ũ = 0 and that ∫_E (w̃ - Π_h w̃) dσ_h = 0. Thus, we get

a_h(ũ, w̃ - Π_h w̃) = a_h(Π_h ũ, w̃ - Π_h w̃) + a_h(ũ - Π_h ũ, w̃ - Π_h w̃)
= ∑_K (Π_h ũ, w̃ - Π_h w̃)_K - ∑_K (Δ_{Γ_h}(ũ - Π_h ũ), w̃ - Π_h w̃)_K + ∑_K (ũ - Π_h ũ, w̃ - Π_h w̃)_K + ∑_E ( ∇^+_h (ũ - Π_h ũ) n_E^+ + ∇^-_h (ũ - Π_h ũ) n_E^-, w̃ - Π_h w̃ )_E (=: I_1)
≤ ∑_K ( ∫_K |Δ_{Γ_h} ũ|^2 dσ_h )^{1/2} ( ∫_K |w̃ - Π_h w̃|^2 dσ_h )^{1/2} + ∑_K ( ∫_K |ũ|^2 dσ_h )^{1/2} ( ∫_K |w̃ - Π_h w̃|^2 dσ_h )^{1/2} + I_1
≤ c h^2 ∑_{K^l} ‖u‖_H^2(K^l) ‖w‖_H^2(K^l) + I_1,

where we apply the Cauchy-Schwarz inequality in the first inequality and estimate (<ref>) in the second inequality. To bound I_1 we use the Cauchy-Schwarz inequality and get

I_1 ≤ ( ∫_E | ∇_h^+ (ũ - Π_h ũ) n_E^+ + ∇_h^- (ũ - Π_h ũ) n_E^- |^2 dσ_h (=: I_2) )^{1/2} ( ∫_E |w̃ - Π_h w̃|^2 dσ_h (=: I_3) )^{1/2}.

Based on the trace inequality we estimate I_3 and I_2 as

I_3 = ∫_E |w̃ - Π_h w̃|^2 dσ_h ≤ c h^{-1} ‖w̃ - Π_h w̃‖^2_L^2(K) + c h |∇(w̃ - Π_h w̃)|^2_L^2(K) ≤ c (h^{-1} h^4 + h h^2) |w̃|^2_H^2(K) ≤ c h^3 |w̃|^2_H^2(K),

I_2 = ∫_E | ∇_h^+ (ũ - Π_h ũ) n_E^+ + ∇_h^- (ũ - Π_h ũ) n_E^- |^2 dσ_h
≤ c h^{-1} ( |∇_h^+(ũ - Π_h ũ)|^2_L^2(K^+) + |∇_h^-(ũ - Π_h ũ)|^2_L^2(K^-) ) + c h ( |Δ^+_h(ũ - Π_h ũ)|^2_L^2(K^+) + |Δ^-_h(ũ - Π_h ũ)|^2_L^2(K^-) )
≤ c ( h^{-1} h^2 |ũ|^2_H^2(K^+) + h^{-1} h^2 |ũ|^2_H^2(K^-) + h |ũ|^2_H^2(K^+) + h |ũ|^2_H^2(K^-) )
≤ c h |ũ|^2_H^2(K),

where we use (<ref>) in the last estimate. This gives us

|I_1| ≤ c ( h |ũ|^2_H^2(K) )^{1/2} ( h^3 |w̃|^2_H^2(K) )^{1/2} ≤ c h^2 ‖u‖_H^2(K^l) ‖w‖_H^2(K^l),

where we apply (<ref>) in the last step. Finally we get

a_h(ũ, w̃ - Π_h w̃) ≤ c h^2 ‖u‖_H^2(K) ‖w‖_H^2(K).

Based on the lemma above we prove the following estimate for the consistency error. Let u be the solution of the primal problem (<ref>) and let w be the solution of the dual problem (<ref>). Then the following estimates hold:

a_h(ũ, w̃ - w_h) - ( (f, w)_Γ - (f̃, w_h)_{Γ_h} ) ≤ c h^2 ‖u‖_H^2(Γ) ‖w‖_H^2(Γ),
a_h(ũ - u_h, w̃) - ( (u, e)_Γ - (u_h, ẽ)_{Γ_h} ) ≤ c h^2 ‖u‖_H^2(Γ) ‖w‖_H^2(Γ).

The first and the second inequality can be proven analogously. We have

a_h(ũ, w̃ - w_h ± Π_h w̃) - (f, w)_Γ + (f̃, w_h ± Π_h w̃ ± w̃)_{Γ_h}
= a_h(ũ, w̃ - Π_h w̃) (=: I_2) + [ a_h(ũ, Π_h w̃ - w_h) - (f̃, Π_h w̃ - w_h)_{Γ_h} ] (=: I_1) + [ (f̃, w̃)_{Γ_h} - (f, w)_Γ ] (=: I_3) + (f̃, Π_h w̃ - w̃)_{Γ_h} (=: I_4).

We apply Lemma <ref> to I_1 = a_h(ũ, Π_h w̃ - w_h) - (f̃, Π_h w̃ - w_h)_{Γ_h} and get

|I_1| ≤ c h ‖u‖_H^2(Γ) ‖Π_h w̃ ± w̃ - w_h‖_h ≤ c h ‖u‖_H^2(Γ) ( ‖Π_h w̃ - w̃‖_h + ‖w̃ - w_h‖_h ) ≤ c h ‖u‖_H^2(Γ) ( h ‖w‖_H^2(Γ) + h ‖e‖_L^2(Γ) ),

where we apply the energy estimate (<ref>) and Lemma (<ref>) in the last estimate. According to Lemma <ref>, I_2 is bounded as

|I_2| = a_h(ũ, w̃ - Π_h w̃) ≤ c h^2 ‖u‖_H^2(Γ) ‖w‖_H^2(Γ) ≤ c h^2 ‖f‖_L^2(Γ) ‖e‖_L^2(Γ),

where we use the regularity estimate (<ref>) in the last bound. We apply the geometric estimate (<ref>) and the regularity estimate (<ref>) to I_3 and get

|I_3| = |(f̃, w̃)_{Γ_h} - (f, w)_Γ| ≤ c h^2 ‖f‖_L^2(Γ) ‖w‖_L^2(Γ) ≤ c h^2 ‖f‖_L^2(Γ) ‖e‖_L^2(Γ).

Using the Cauchy-Schwarz inequality, the estimate of the global interpolation error (<ref>) and the regularity estimate (<ref>) gives

|I_4| = |(f̃, Π_h w̃ - w̃)_{Γ_h}| ≤ ( ∫_{Γ_h} f̃^2 dσ_h )^{1/2} ( ∫_{Γ_h} |Π_h w̃ - w̃|^2 dσ_h )^{1/2} ≤ c h^2 ‖f‖_L^2(Γ) ‖w̃‖_H^2(Γ_h) ≤ c h^2 ‖f‖_L^2(Γ) ‖e‖_L^2(Γ).

Let u be the primal solution to (<ref>) and let ũ be its extension to a neighborhood.
Then the following error estimate holds:

‖ũ - u_h‖_L^2(Γ_h) ≤ c h^2 ‖f‖_L^2(Γ).

We start from

‖ũ - u_h‖^2_L^2(Γ_h) = (ũ - u_h, ẽ)_{Γ_h}.

Inserting the terms ±(u, e)_Γ, ±a_h(ũ, w̃), ±a_h(ũ, w_h) and ±a_h(u_h, w̃), and using a(u, w) = (f, w)_Γ, the discrete primal problem a_h(u_h, v_h) = (f̃, v_h)_{Γ_h} as well as the discrete dual problem a_h(v_h, w_h) = (v_h, ẽ)_{Γ_h}, the error can be decomposed as

‖ũ - u_h‖^2_L^2(Γ_h) = I_1 + I_2 + I_3 - a_h(ũ - u_h, w̃) + (u, e)_Γ - (u_h, ẽ)_{Γ_h} (=: I_5) - a_h(ũ, w̃ - w_h) + (f, w)_Γ - (f̃, w_h)_{Γ_h} (=: I_4),

with I_1 = (ũ, ẽ)_{Γ_h} - (u, e)_Γ, I_2 = a(u, w) - a_h(ũ, w̃) and I_3 = a_h(ũ - u_h, w̃ - w_h). We apply estimate (<ref>) to I_1 and get

|I_1| = |((1 - ν_h) ũ, ẽ)_{Γ_h}| ≤ c h^2 ‖ũ‖_L^2(Γ_h) ‖ẽ‖_L^2(Γ_h) ≤ c h^2 ‖f‖_L^2(Γ) ‖e‖_L^2(Γ).

According to Lemma <ref> and the regularity estimates (<ref>) and (<ref>) it holds that

|I_2| ≤ c h^2 ‖u‖_H^2(Γ) ‖w‖_H^2(Γ) ≤ c h^2 ‖f‖_L^2(Γ) ‖e‖_L^2(Γ).

We apply Lemma <ref> and the regularity estimates (<ref>) and (<ref>) to I_4 and I_5 and get

|I_4| + |I_5| ≤ c h^2 ‖u‖_H^2(Γ) ‖w‖_H^2(Γ) ≤ c h^2 ‖f‖_L^2(Γ) ‖e‖_L^2(Γ).

Finally we reformulate I_3:

|I_3| = | ∑_K (∇_{Γ_h}(ũ - u_h), ∇_{Γ_h}(w̃ - w_h))_K + (ũ - u_h, w̃ - w_h)_{Γ_h} |
≤ ∑_K ‖∇_{Γ_h}(ũ - u_h)‖_L^2(K) ‖∇_{Γ_h}(w̃ - w_h)‖_L^2(K) + ‖ũ - u_h‖_L^2(Γ_h) ‖w̃ - w_h‖_L^2(Γ_h)
≤ c ‖ũ - u_h‖_h ‖w̃ - w_h‖_h ≤ c h^2 ‖f‖_L^2(Γ) ‖e‖_L^2(Γ),

where we apply the primal (<ref>) and the dual energy estimate (<ref>) in the last step. The combination of the bounds for the terms I_1 to I_5 gives the final estimate.

§ NUMERICAL RESULTS

The setup is realized in the framework of the climate model ICON <cit.>, which uses a triangular mesh. The triangulation is created by a refinement of an icosahedron and a projection of the icosahedral faces onto the surface of the sphere. A sketch is shown in the left panel of Figure <ref>. By Γ we denote the surface of the sphere and by ∪ K^l = Γ its triangulation. On each curved triangle K^l we realize a local reference frame K as described in Section <ref>. In this framework, a sphere with radius R = 6.371229 · 10^6 m is used. The coordinates of the Cartesian system are labeled x, y, z. We consider the following problem:

-div_Γ(ζ/2 ∇_Γ u) + 1/100 u = f,

where the viscosity and the right-hand side are

ζ = 2.75 · 10^13, f = -div_Γ(ζ/2 ∇_Γ u^*) + 1/100 u^*, (u^*)^T = (sin(10^{-6} R y), 0, 0).

The sphere is approximated by triangular elements with side lengths of 316 km, 158 km and 79 km. The error between the numerical and the analytic solution is evaluated in the H^1-norm and the L^2-norm. We present the numerical results in Table <ref>. In agreement with Theorem <ref> and Theorem <ref>, the L^2-error converges with quadratic order, O(h^2), and the H^1-error converges with first order, O(h).

§ CONCLUSION

The paper presents an analysis of a vector-valued Crouzeix-Raviart element for the Laplace problem. The vector-valued finite element is H^1(Γ)-nonconforming, meaning that continuity is required only at the edge midpoints of an element. The considered discrete problem introduces two nonconformities compared to the original formulation (<ref>), namely the geometric error Γ_h ≠ Γ and the tangential inconsistency n_h ≠ n. For this setup we show optimal error bounds in the H^1-norm and the L^2-norm. Numerical experiments confirm the theoretical derivation. In an ongoing research project the approach is used to investigate a vector-valued elasticity problem on the surface.

Barrett2016 J. W. Barrett, H. Garcke, and R. Nürnberg, A stable numerical method for the dynamics of fluidic membranes, Numerische Mathematik, 134 (2016), pp. 783–822, <https://doi.org/10.1007/s00211-015-0787-5>.

Bonito2020e A. Bonito, A. Demlow, and M. Licht, A Divergence-Conforming Finite Element Method for the Surface Stokes Equation, SIAM Journal on Numerical Analysis, 58 (2020), pp. 2764–2798, <https://doi.org/10.1137/19M1284592>.

BONITO20201 A. Bonito, A. Demlow, and R. H.
Nochetto, Chapter 1 - Finite element methods for the Laplace–Beltrami operator, in Geometric Partial Differential Equations - Part I, A. Bonito and R. H. Nochetto, eds., vol. 21 of Handbook of Numerical Analysis, Elsevier, 2020, pp. 1–103, <https://doi.org/10.1016/bs.hna.2019.06.002>.Brandner2022 P. Brandner, T. Jankuhn, S. Praetorius, A. Reusken, and A. Voigt, Finite Element Discretization Methods for Velocity-Pressure and Stream Function Formulations of Surface Stokes Equations, SIAM Journal on Scientific Computing, 44 (2022), pp. A1807–A1832, <https://doi.org/10.1137/21M1403126>.Brandner2022b P. Brandner and A. Reusken, Finite element error analysis of surface Stokes equations in stream function formulation, ESAIM M2AN, 54 (2020), pp. 2069–2097, <https://doi.org/10.1051/m2an/2020044>.COMBLEN2009 R. Comblen, S. Legrand, E. Deleersnijder, and V. Legat, A finite element method for solving the shallow water equations on the sphere, Ocean Modelling, 28 (2009), pp. 12–23, <https://doi.org/10.1016/j.ocemod.2008.05.004>.Crouzeix1974 M. Crouzeix and P.-A. Raviart, Conforming and nonconforming finite element methods for solving the stationary Stokes equations. I, Rev. Franc. Automat. Inform. Rech. Operat., R, 7 (1974), pp. 33–76.demlow2023tangential A. Demlow and M. Neilan, A tangential and penalty-free finite element method for the surface Stokes problem, 2023, <https://arxiv.org/abs/2307.01435>.dziuk_elliott_2013 G. Dziuk and C. M. Elliott, Finite element methods for surface PDEs, Acta Numerica, 22 (2013), p. 289–396, <https://doi.org/10.1017/S0962492913000056>.Gross2018 S. Gross, T. Jankuhn, M. A. Olshanskii, and A. Reusken, A Trace Finite Element Method for Vector-Laplacians on Surfaces, SIAM Journal on Numerical Analysis, 56 (2018), pp. 2406–2429, <https://doi.org/10.1137/17M1146038>.Guo2020 H. Guo, Surface Crouzeix–Raviart element for the Laplace–Beltrami equation, Numer. Math., 144 (2020), pp. 527–551.HansboLarson2020 P. Hansbo, M. Larson, and K. Larsson, Analysis of finite element methods for vector Laplacians on surfaces, IMA Journal of Numerical Analysis, 40 (2019), pp. 1652–1701, <https://doi.org/10.1093/imanum/drz018>.Hansbo2017 P. Hansbo and M. G. Larson, A stabilized finite element method for the Darcy problem on surfaces, IMA Journal of Numerical Analysis, 37 (2016), pp. 1274–1299, <https://doi.org/10.1093/imanum/drw041>.hardering2023 H. Hardering and P. Praetorius, A Parametric Finite-Element Discretization of the Surface Stokes Equations, 2023, <https://arxiv.org/abs/2309.00931>.Hardering2022 H. Hardering and S. Praetorius, Tangential errors of tensor surface finite elements, IMA Journal of Numerical Analysis, 43 (2022), pp. 1543–1585, <https://doi.org/10.1093/imanum/drac015>.Holst2012 M. Holst and A. Stern, Geometric variational crimes: Hilbert complexes, finite element exterior calculus, and problems on hypersurfaces, Foundations of Computational Mathematics, 12 (2012), pp. 263–293, <https://doi.org/10.1007/s10208-012-9119-7>.Jankuhn2021 J. Jankuhn, M. A. Olshanskii, A. Reusken, and A. Zhiliakov, Error analysis of higher order Trace Finite Element Methods for the surface Stokes equation, Journal of Numerical Mathematics, 29 (2021), pp. 245–267, <https://doi.org/10.1515/jnma-2020-0017>.Jahnkuhn2020 T. Jankuhn and A. Reusken, Trace finite element methods for surface vector-Laplace equations, IMA Journal of Numerical Analysis, 41 (2020), pp. 48–83, <https://doi.org/10.1093/imanum/drz062>.JUNGCLAUS_2020 J. Jungclaus, S. J. Lorenz, H. Schmidt, V. Brovkin, N. Brüggemann, F. 
Chegini, V. Gayler, M. Giorgetta, O. Gutjahr, H. Haak, S. Hagemann, M. Hanke, T. Ilyina, P. Korn, J. Kröger, L. Linardakis, C. Mehlmann, U. Mikolajewicz, W. Müller, D. Notz, H. Pohlmann, D. Putrasahan, T. Raddatz, L. Ramme, R. Redler, C. Reick, T. Riddick, T. Sam, R. Schneck, R. Schnur, M. Schupfner, J.-S. von Storch, F. Wachsmann, K.-H. Wieners, B. Stevens, J. Marotzke, and M. Claussen, The ICON Earth System Model Version 1.0, Journal of Advances in Modelling Earth Systems,(2021), <https://doi.org/https://doi.org/10.1029/2021MS002813>.Larsson2013 K. Larsson and M. Larson, A continuous/discontinuous Galerkin method and a priori error estimates for the biharmonic problem on surfaces, Math. Comput., 86 (2013), pp. 2613–2649.Lederer2019 P. Lederer, L. C., and J. Schöberl, Divergence‐free tangential finite element methods for incompressible flows on surfaces, International Journal for Numerical Methods in Engineering, 121 (2019), pp. 2503 – 2533, <https://api.semanticscholar.org/CorpusID:202572792>.Mehlmann2023Pamm C. Mehlmann, Surface Crouzeix-Raviart element for the Bochner Laplacian equation, PAMM, n/a, p. e202300207, <https://doi.org/10.1002/pamm.202300207>.Mehlmannetal2021 C. Mehlmann, S. Danilov, M. Losch, J. F. Lemieux, N. Hutter, T. Richter, P. Blain, E. C. Hunke, and P. Korn, Simulating Linear Kinematic Features in Viscous-Plastic Sea Ice models on quadrilateral and triangular Grids With Different Variable Staggering, Journal of Advances in Modeling Earth Systems, 13 (2021), p. e2021MS002523, <https://doi.org/10.1029/2021MS00252>.MehlmannGutjahr2022 C. Mehlmann and O. Gutjahr, Discretization of Sea Ice Dynamics in the Tangent Plane to the Sphere by a CD-Grid-Type Finite Element, Journal of Advances in Modeling Earth Systems, 14 (2022), p. e2022MS003010, <https://doi.org/10.1029/2022MS003010>.Mehlmann2021 C. Mehlmann and P. Korn, Sea-ice dynamics on triangular grids, J. Comput. Phys., 428 (2021), p. 110086, <https://doi.org/10.1016/j.jcp.2020.110086>.Nitschke2012 I. Nitschke, A. Voigt, and J. Wensch, A finite element approach to incompressible two-phase flow on manifolds, Journal of Fluid Mechanics, 708 (2012), pp. 418–438, <https://doi.org/10.1017/jfm.2012.317>.Olshanskii2019 M. Olshanskii, A. Reusken, and A. Zhiliakov, Inf-sup stability of the trace P2-P1 Taylor-Hood elements for surface PDEs, Mathematical Models and Methods in Applied Sciences, 32 (2022), pp. 2817–2852.Olshanskii2018 M. A. Olshanskii, A. Quaini, A. Reusken, and V. Yushutin, A Finite Element Method for the Surface Stokes Problem, SIAM Journal on Scientific Computing, 40 (2018), pp. A2492–A2518, <https://doi.org/10.1137/18M1166183>.Reuter2018 S. Reuther and A. Voigt, Erratum: The Interplay of Curvature and Vortices in Flow on Curved Surfaces, Multiscale Modeling & Simulation, 16 (2018), pp. 1448–1453, <https://doi.org/10.1137/18M1176464>. | http://arxiv.org/abs/2312.16541v1 | {
"authors": [
"Carolin Mehlmann"
],
"categories": [
"math.NA",
"cs.NA"
],
"primary_category": "math.NA",
"published": "20231227115046",
"title": "Analysis of a nonconforming finite element method for vector-valued Laplacians on the surface"
} |
These authors are co-first authors of the article. Center for Advanced Quantum Studies and Department of Physics, Beijing Normal University, Beijing 100875, China

These authors are co-first authors of the article. State Key Laboratory for Magnetism, Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China

State Key Laboratory for Magnetism, Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China

Center for Advanced Quantum Studies and Department of Physics, Beijing Normal University, Beijing 100875, China

Center for Advanced Quantum Studies and Department of Physics, Beijing Normal University, Beijing 100875, China

Center for Advanced Quantum Studies and Department of Physics, Beijing Normal University, Beijing 100875, China

Center for Advanced Quantum Studies and Department of Physics, Beijing Normal University, Beijing 100875, China

Corresponding authors. State Key Laboratory for Magnetism, Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China

Corresponding authors. Center for Advanced Quantum Studies and Department of Physics, Beijing Normal University, Beijing 100875, China

The magnetic Weyl semimetal Co_3Sn_2S_2 is extensively investigated due to its giant anomalous Hall effect (AHE). Recent studies demonstrate that the AHE can be effectively tuned by multi-electron Ni doping. To reveal the underlying mechanism of this significant manipulation, it is crucial to explore the band structure modification caused by Ni doping. Here, we study the electrodynamics of both pristine and Ni-doped Co_3-xNi_xSn_2S_2 with x = 0, 0.11 and 0.17 by infrared spectroscopy. We find that the inverted energy gap around the Fermi level (E_F) gets smaller at x = 0.11, which is supposed to enhance the Berry curvature and therefore increase the AHE. Then E_F moves out of this gap at x = 0.17. Additionally, the low temperature carrier density is demonstrated to increase monotonically upon doping, which is different from previous Hall measurement results. We also observe evidence of band broadening and exotic changes of the high-energy interband transitions caused by doping. Our results provide detailed information about the band structure of Co_3-xNi_xSn_2S_2 at different doping levels, which will help to guide further studies on the chemical tuning of AHE.

Optical probe on doping modulation of magnetic Weyl semimetal Co_3Sn_2S_2 R. Y. Chen January 14, 2024 =========================================================================

§ INTRODUCTION

The anomalous Hall effect (AHE) was initially discovered in ferromagnetic materials with a magnitude proportional to the magnetisation<cit.>. For a very long time, AHE was considered a unique property of time-reversal-symmetry-breaking systems with a net magnetisation, whose origin seemed too complicated to be clearly revealed. In the 1980s, the development of the Berry phase theory brought a breakthrough in understanding the physical mechanism of AHE<cit.>, which substantially advanced our understanding of this phenomenon. Nowadays, it is generally believed that AHE can be generated from two different mechanisms: the extrinsic mechanism caused by scattering effects (skew-scattering<cit.> and side-jump<cit.>) and the intrinsic mechanism related to the Berry curvature<cit.>. Among them, the intrinsic contribution is directly related to the topological properties of the Bloch states. Therefore, the anomalous Hall conductivity (AHC) depends only on the band structure of the ideal crystal lattice, which can be calculated directly by the Kubo formula<cit.>. A great number of materials have been reported to host giant intrinsic AHE, such as the Kagome metal Nd_3Al<cit.>, the Weyl semimetal Mn_3Sn<cit.>, the Dirac semimetal Fe_3Sn_2<cit.>, the topological insulator MnBi_2Te_4<cit.>, etc. Among them, Weyl semimetals are of special interest as the Berry curvature near the Weyl points is inherently divergent, which is supposed to contribute a large AHC when the Weyl points are near the Fermi surface<cit.>. Moreover, the AHC generated by the topologically protected Weyl points is rather robust to perturbations such as lattice distortion and chemical substitution<cit.>, which is a vital advantage for developing next-generation spintronic devices.

Co_3Sn_2S_2 is a typical magnetic Weyl semimetal with an AHC up to ∼1130 Ω^-1cm^-1 and an anomalous Hall angle of ∼20%. Previous studies have shown that its large AHE is dominated by the divergent Berry curvature near the Weyl points at ∼60 meV above the Fermi level (E_F)<cit.>. Subsequently, attempts were made to modulate the AHC of the material by doping holes or electrons in order to fine-tune the relative position of the E_F and the Weyl points<cit.>. Among these efforts, Shen et al. succeeded in elevating the AHC of Co_3Sn_2S_2 by multi-electron Ni doping, which reaches a maximum in Co_3-xNi_xSn_2S_2 at x = 0.11 (∼1380 Ω^-1cm^-1). Based on theoretical calculations, the authors suggest that this abnormal enhancement is mainly of intrinsic origin, arising from the electronic structure modulated by the local disorder effect of the doped atoms<cit.>. Therefore, it will be very illuminating to investigate what really happens to the electron band structure of Co_3Sn_2S_2 when doped with Ni. Lohani et al. performed angle-resolved photoemission spectroscopy (ARPES) measurements on the significantly doped Co_3-xNi_xSn_2S_2 with x = 0.6, which revealed the shift of several bands compared to the pristine compound, and the emergence of an extra electron pocket near E_F that is occupied by the added electrons<cit.>.
Therefore, the anomalous Hall conductivity (AHC) depends only on the band structure of the ideal crystal lattice, which can be calculated directly by the Kubo formula<cit.>.A great number of materials have been reported to cast giant intrinsic AHE, such as Kagome metal Nd_3Al<cit.>, Weyl semimetal Mn_3Sn<cit.>, Dirac semimetal Fe_3Sn_2<cit.>, topological insulator MnBi_2Te_4<cit.>, etc. Among them, Weyl semimetals are of special interest as the Berry curvature near the Weyl points is inherently divergent, which is supposed to contribute a large AHC when they are near the Fermi surface<cit.>. Moreover, the AHC generated by the topologically protected Weyl point is rather robust to perturbations such as lattice distortion and chemical substitution<cit.>, which is a vital advantage for developing next-generation spintronic devices.Co_3Sn_2S_2 is a typical magnetic Weyl semimetal with an AHC up to ∼1130 Ω^-1cm^-1 and anomalous Hall Angle of ∼20%. Previous studies have shown that its large AHE is dominated by the divergent Berry curvature near the Weyl point at ∼60 meV above the Fermi level (E_F)<cit.>. Subsequently, attempts were made to modulate the AHC of the material by doping holes or electrons in order to fine-tune the relative position of the E_F and the Weyl point<cit.>. Thereinto, Shen et al. have succeeded in elevating the AHC of Co_3Sn_2S_2 by multi-electron Ni doping, which reaches a maximum in Co_3-xNi_xSn_2S_2 when x = 0.11 (∼1380 Ω^-1cm^-1). It is suggested by the authors that this abnormal enhancement is mainly generated by intrinsic contributions, due to the modulated electronic structure by the local disorder effect of the doped atoms, based on theoretical calculations<cit.>. Therefore, it will be very illuminating to investigate what really happens to the electron band structure of Co_3Sn_2S_2 when doped with Ni. Lohani et al. have performed angle-resolved photoemission spectroscopy (ARPES) measurements on the significantly doped Co_3-xNi_xSn_2S_2 with x = 0.6, which reveals the shift of several bands compared to the pristine compound, and the emergence of an extra electron pocket near E_F that is occupied by added electrons.<cit.>.Here, we study the band structure evolution of Co_3-xNi_xSn_2S_2 with x = 0, 0.11 and 0.17 at different temperatures by infrared spectroscopy, which is another important technique to explore the band structure near the E_F<cit.>. We find that the interband transition peak associated with the inverted energy gap near the Weyl points shows a red shift with a small amount of Ni doping, but disappears completely with an excessive amount of Ni doping. This is consistent with the theoretical calculation of previous report, and possibly responsible for the significant enhancement of the AHE. We also observe that the total plasma frequency increases significantly with the increase of Ni concentration, which is different with the results of Hall measurement<cit.>. Our work provides detailed information towards revealing the underlying mechanism of the tuning of AHE by chemical doping.§ EXPERIMENTAL TECHNIQUESThree Co_3-xNi_xSn_2S_2 single crystals with nominal concentrations of x = 0, 0.11 and 0.17 were synthesized by the method of Sn and Pb mixed flux growth<cit.>. Infrared spectroscopic studies were performed with the Fourier transform infrared spectrometer Bruker 80V in the frequency range from 50 to 40 000 over three samples growing shiny surfaces. 
As for the measurement of the frequency dependent reflectivity R(ω), in-situ gold and aluminum overcoating techniques were employed to eliminate the effect of the microscopic surface texture of the single crystals and to obtain the absolute value of the reflectivity<cit.>. The real part of the optical conductivity σ_1(ω) is derived from the Kramers-Kronig transformation of the reflectivity R(ω), which is extrapolated to zero frequency by the Hagen-Rubens relation and to high frequency by the x-ray atomic scattering function<cit.>. Since the experimental reflectivity spectra were measured up to 40 000 cm^-1 in the ultraviolet region, the high frequency extrapolation has little influence on the low frequency behavior of the real part of the conductivity obtained via the Kramers-Kronig transformation.

§ RESULTS AND DISCUSSION

The reflectivity R(ω) and the real part of the optical conductivity spectra σ_1(ω) of Co_3-xNi_xSn_2S_2 (x = 0, 0.11 and 0.17) measured at different temperatures are shown in Fig. <ref>. As can be seen in the six insets, the overall profiles of the R(ω) and σ_1(ω) spectra of the three samples are very similar to each other, especially at high energies above 1000 cm^-1. This implies that the doping of Ni modifies the band structure only in a very mild way. It is worth mentioning that the spectra of the pristine Co_3Sn_2S_2 are consistent with earlier reports<cit.>, demonstrating the following main features: (1) At low frequencies, the R(ω) spectra approach unity at zero frequency and increase as the temperature decreases, indicative of a metallic response. The σ_1(ω) spectra display associated Drude features around zero energy. (2) Two infrared-active phonon signals show up at 160 and 370 cm^-1, respectively. (3) The R(ω) spectra exhibit a broad absorption structure around 200-300 cm^-1, which corresponds to a Lorentz-type peak in the optical conductivity, as denoted by the red arrows in Fig. <ref>(a) and (d). This peak is thoroughly discussed in previous reports and is ascribed to the interband transition across the inverted band gap close to the Weyl nodes around the E_F<cit.>. (4) Two Lorentz-type peaks appear at around 1900 and 5000 cm^-1, above which the spectra at different temperatures overlap with each other.

Upon doping, the first two of these features stay almost unchanged. Particularly, the stability of the phonon frequencies indicates that the lattice structure is quite robust against doping. The most remarkable variation is observed in the low energy region, as can be seen in the main panels of Fig. <ref>. The absorption feature in R(ω) of the pristine compound obviously weakens in Co_3-xNi_xSn_2S_2 with x = 0.11, and the corresponding Lorentz peak shifts to lower energies, as indicated by the red arrows in the main panels of Fig. <ref>(b) and (e). The weakening and red shifting of this peak indicate the narrowing of the inverted band gap, which agrees well with theoretical calculations<cit.>. With further doping of x = 0.17, the absorption structure in R(ω) and the associated Lorentz-type peak are completely out of sight, as can be seen in Fig. <ref>(c) and (f). There are two possible explanations for these phenomena: one is that the inverted gap is totally closed, and the other is that the E_F simply moves out of this gap. We will revisit this issue and discuss it in more detail later.

In order to capture the delicate modification of the band structure by doping, we use the Drude-Lorentz model to decompose the optical conductivity σ_1(ω).
The dielectric function of the Drude-Lorentz model can be expressed as

ε(ω) = ε_∞ - ∑_s ω_ps^2 / (ω^2 + iω/τ_Ds) + ∑_j S_j^2 / (ω_j^2 - ω^2 - iω/τ_j),

where ε_∞ is the dielectric constant at high energy; the middle term is the Drude component, which describes the electrodynamics of the itinerant carriers; and the last term is the Lorentz component, which characterizes the excitation across an energy gap or an interband transition. The fitting results of σ_1(ω) at 10 K and 300 K for all three samples are shown in the main panels and insets of Fig. <ref>(a)-(c), respectively. The specific fitting parameters below 3000 cm^-1 are shown in Table <ref>. It is worth noting that at room temperature the optical conductivity can be well reproduced by one Drude and one Lorentz term below 3000 cm^-1 for all three samples. By contrast, the spectra become more complicated at low temperatures, where more Drude/Lorentz terms are required.

The pristine Co_3Sn_2S_2 compound experiences a ferromagnetic phase transition at T_C = 175 K, below which the Drude peak becomes much sharper and extra Lorentz peaks show up in the σ_1(ω) spectra. In previous infrared studies, Yang et al. described the emergent feature with one Lorentz peak<cit.>, whereas Xu et al. suggested multiple Lorentz peaks<cit.>. Here, we find that our data can be well reproduced by two Lorentz peaks, which will be labeled as Lorentz1 and Lorentz2 in the following text. According to previous reports, these two peaks are ascribed to interband transitions associated with the inverted band gap close to the Weyl nodes; they locate at 316 and 861 cm^-1 at 10 K, respectively. As can be seen from Table <ref>, the Ni doping of x = 0.11 causes Lorentz1 and Lorentz2 to shift to 252 and 696 cm^-1. This indicates that the band gap of the interband transition related to the Weyl nodes is quantitatively reduced. As a result, the integrated Berry curvature is expected to be elevated, which hence leads to the enhancement of the AHE. With further doping of x = 0.17, we find that an additional Drude term, instead of the two Lorentz peaks present in Co_3Sn_2S_2 and Co_2.89Ni_0.11Sn_2S_2, is needed to well reproduce σ_1(ω) at low temperatures. It seems that a fundamental change of the band structure is caused by the excessive amount of Ni doping. As mentioned earlier, one possible scenario is that the inverted energy gaps near the E_F are fully closed. However, considering that neither the lattice structure nor the ferromagnetic phase transition is noticeably modified by doping, we believe that the inverted band gaps, which are guaranteed by spin-orbit coupling, survive as well. Therefore, it is more likely that the E_F shifts out of the inverted band gap and crosses the initial conduction band. Consequently, some interband transitions disappear from the σ_1(ω) spectra and extra intraband transitions emerge, which agrees perfectly with our results. Moreover, this scenario is also consistent with the theoretical prediction that E_F is pushed upwards upon doping, and with the ARPES result that an extra electron pocket occupied by the added electrons appears near E_F. In this case, the AHE is not as large as when the E_F lies inside the inverted gap.

The low-energy Drude component represents the response of free carriers and becomes narrower with decreasing temperature for all three samples, showing good metallicity. In order to extract the doping effect, we plot the optical conductivity spectra at 10 K in Fig. <ref>(d). It is clearly seen that the Drude peak broadens with increasing doping level.
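As a concrete illustration of the fitting model introduced above, the following Python sketch evaluates σ_1(ω) from the Drude-Lorentz dielectric function. The parameter values are placeholders rather than the fitted values of Table <ref>, and the conversion σ_1 = ω ε_2/60 (for ω in cm^-1 and σ_1 in Ω^-1cm^-1) is the standard unit convention assumed here.

import numpy as np

def sigma1(omega, eps_inf, drude, lorentz):
    """Real part of the optical conductivity (Ohm^-1 cm^-1) from the
    Drude-Lorentz dielectric function; all frequencies in cm^-1.
    drude   : list of (omega_p, 1/tau) pairs
    lorentz : list of (S, omega_0, 1/tau) triples
    """
    eps = np.full_like(omega, eps_inf, dtype=complex)
    for wp, gamma in drude:
        eps -= wp**2 / (omega**2 + 1j * omega * gamma)      # Drude terms
    for S, w0, gamma in lorentz:
        eps += S**2 / (w0**2 - omega**2 - 1j * omega * gamma)  # Lorentz terms
    return omega * eps.imag / 60.0   # sigma1 = omega * eps2 / 60

# Placeholder parameters: one Drude term plus two Lorentz oscillators,
# loosely mimicking the low-temperature decomposition described here.
w = np.linspace(10.0, 3000.0, 600)
s1 = sigma1(w, eps_inf=10.0,
            drude=[(9000.0, 50.0)],
            lorentz=[(4000.0, 316.0, 200.0), (8000.0, 861.0, 600.0)])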
The low-energy Drude component represents the response of free carriers, and it becomes narrower with decreasing temperature for all three samples, showing good metallicity. In order to extract the doping effect, we plot the optical conductivity spectra at 10 K in Fig. <ref>(d). It is clearly seen that the Drude peak broadens with increasing doping level. For Co_3Sn_2S_2 and Co_2.89Ni_0.11Sn_2S_2, the low-energy part can be well fitted by only one Drude component, whereas two of them are required to fit the low-frequency part of Co_2.83Ni_0.17Sn_2S_2. The two Drude terms represent free carriers from different Fermi surfaces, one of which gradually emerges below T_C. As shown in Table <ref>, the scattering rate of the first Drude component of Co_2.83Ni_0.17Sn_2S_2 (38 cm^-1) is actually comparable to that of Co_3Sn_2S_2 (42 cm^-1) and Co_2.89Ni_0.11Sn_2S_2 (54 cm^-1), which implies that the disorder effect on the corresponding conduction band is negligible. Meanwhile, the scattering rate of the second Drude term of Co_2.83Ni_0.17Sn_2S_2, which is absent in Co_3Sn_2S_2 and Co_2.89Ni_0.11Sn_2S_2, is much larger, and this term contributes a large portion of extra itinerant carriers. The carrier density n is reflected by the plasma frequency ω_p^2 = 4πne^2/m^*, where m^* is the effective mass of the electrons. The overall plasma frequency for two Drude components can be extracted from ω_p = (ω_p1^2 + ω_p2^2)^1/2. As shown in Fig. <ref>, ω_p of the undoped Co_3Sn_2S_2 decreases abruptly upon entering the FM phase, due to the opening of energy gaps. Although the FM phase transition is barely affected by doping, this sudden decrease of ω_p becomes less obvious in Co_2.89Ni_0.11Sn_2S_2 and disappears completely in Co_2.83Ni_0.17Sn_2S_2, which resembles the evolution of Lorentz1 and Lorentz2. On the other hand, with increasing Ni concentration, the total plasma frequency at 10 K is substantially enhanced, especially when x = 0.17. Assuming m^* is constant, the carrier density n is supposed to exhibit a similar trend. Note that the Hall effect measurements of Co_3-xNi_xSn_2S_2 demonstrate that the carrier concentration n first decreases monotonically with Ni doping until x = 0.15, then increases slightly up to x = 0.17. The enhancement of the AHE is therefore believed to be accompanied by a decrease in the carrier density<cit.>. This disagreement could be explained by the technical differences between Hall effect measurements and infrared spectroscopy. It is well known that in multi-band materials with both electron and hole Fermi pockets, the Hall measurement might underestimate the overall carrier density, while infrared spectroscopy generally reflects the contributions from both types of carriers. Therefore, we believe that the overall carrier density n actually increases with doping, which provides a new piece of the puzzle toward a thorough understanding of the chemical tuning of the AHE.
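The carrier-density estimate implied by the relations above is a short computation once the plasma frequencies are known. The sketch below (Gaussian units) uses illustrative inputs rather than the values of Table <ref>, and sets m^* to the free-electron mass, which is an assumption.

```python
import numpy as np

c   = 2.99792458e10        # speed of light, cm/s
e   = 4.80320425e-10       # electron charge, esu
m_e = 9.1093837015e-28     # electron mass, g

def carrier_density(wp1_cm, wp2_cm, m_star=m_e):
    """n = omega_p^2 m* / (4 pi e^2) with omega_p = (wp1^2 + wp2^2)^(1/2),
    converting plasma frequencies from cm^-1 to rad/s first."""
    wp_cm = np.hypot(wp1_cm, wp2_cm)       # total plasma frequency, cm^-1
    wp = 2.0 * np.pi * c * wp_cm           # rad/s
    return wp**2 * m_star / (4.0 * np.pi * e**2)   # carriers per cm^3

print(f"{carrier_density(9000.0, 12000.0):.2e}")   # ~2.5e21 cm^-3 for these inputs
```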
At last, we want to discuss the band structure modification induced by doping in a wider energy range. To observe the shift of the peaks more clearly, we draw the conductivity spectra of the three samples at 10 K in a wide frequency range up to 8000 cm^-1 in Fig. <ref>, where the spectra are shifted vertically for comparison. For the pristine compound, two prominent peaks at around 1900 and 5000 cm^-1 can be resolved above 1000 cm^-1, which are labeled as Inter1 and Inter2. Inter1 is only noticeable below T_C, and the spectra around the same energies are quite flat at higher temperatures, as shown in Fig. <ref>(d). Similar behaviors are observed in the doped compounds as well, as seen in Fig. <ref>(e) and (f), indicating an identical origin. Compared to the undoped compound, Inter1 is almost unchanged at the doping level of x = 0.11 at 10 K, but becomes less sharp at x = 0.17. This might be caused by the broadening of the associated valence and conduction bands, which is a natural consequence of the local disorder introduced by doping. Remarkably, band broadening is also believed to be responsible for the narrowing of the inverted band gaps (Lorentz1 and Lorentz2), which is crucial to the enhancement of the AHE<cit.>. However, the peak position of Inter1 moves slightly to higher energy when the broadening effect is most obvious, which seems to contradict the expected band-gap-narrowing behavior. This can be explained by the difference between the lowest gap energy and the energy carrying the most interband-transition spectral weight between the valence and conduction bands. The former is determined by the lowest energy of the interband transition peak, whereas the latter is identified with the central peak position. For Inter1, it is very likely that the corresponding band gap gets smaller upon doping, even though the peak position shows a blue shift, due to the redistribution of the joint density of states. As for Inter2, the peak position of about 5000 cm^-1 is lower than the corresponding theoretical value, which was attributed to electron correlation by former studies<cit.>. Moreover, the correlation strength can be estimated by the ratio between the experimental and theoretical peak positions. Therefore, the peak position of Inter2 can serve as a measure of the electron correlation. As can be seen in Fig. <ref>, Inter2 shifts monotonically to lower energies upon doping, which seems to imply a stronger electron correlation. However, an increase of the carrier density by doping usually reduces the correlation strength due to the enhancement of the screening effect. Therefore, it is hard to believe that an increase of electron correlation is responsible for the red shift of Inter2. In addition, the steepness of the left side of this peak, which is considered to be related to the correlation strength as well, stays almost unchanged upon doping. Taking all the above results into consideration, we believe that the Ni doping affects the high-energy band structure as well, bringing some of the occupied and unoccupied bands closer to each other. More advanced theoretical calculation techniques are required to reveal the exact band structure modifications induced by doping.

§ CONCLUSION

In summary, we systematically study the optical spectroscopy of Co_3-xNi_xSn_2S_2 crystals at x = 0, 0.11 and 0.17. We find that the interband transition peaks associated with the inverted energy gaps close to the Weyl nodes get smaller with Ni doping of x = 0.11, but disappear completely at x = 0.17. Considering that an extra Drude component shows up in the x = 0.17 compound, we deduce that the E_F shifts out of the inverted band gap in this system. We also observe evidence of band broadening, which might be related to the narrowing of the inverted band gap. These results are consistent with previous theoretical calculations, and are essential to the abnormal enhancement of the AHE. On the other hand, the low-temperature plasma frequency increases monotonically with the Ni concentration, indicating an enhancement of the carrier density, which differs from the Hall measurement results. In addition, we also discover that the interband transition peak at around 5000 cm^-1 shifts to lower energy upon doping, whose underlying mechanism is yet unknown.
Our results not only provide experimental evidence of the band structure modification that is crucial to the enhancement of the AHE, but also offer new insights into the chemical tuning of the AHE in topological materials.

§ ACKNOWLEDGEMENTS

This work was supported by the National Key Projects for Research and Development of China (Grant Nos. 2021YFA1400400 and 2022YFA1403800), the National Natural Science Foundation of China (Grant No. 12074042), the Strategic Priority Research Program (B) of the Chinese Academy of Sciences (CAS) (Grant No. XDB33000000), and the Young Scientists Fund of the National Natural Science Foundation of China (Grant No. 11704033). | http://arxiv.org/abs/2312.16437v3 | {
"authors": [
"L. Wang",
"S. Zhang",
"B. B. Wang",
"B. X. Gao",
"L. Y. Cao",
"X. T. Zhang",
"X. Y. Zhang",
"E. K. Liu",
"R. Y. Chen"
],
"categories": [
"cond-mat.mes-hall",
"cond-mat.str-el"
],
"primary_category": "cond-mat.mes-hall",
"published": "20231227065541",
"title": "Optical probe on doping modulation of magnetic Weyl semimetal Co$_3$Sn$_2$S$_2$"
} |
ConstScene: A Dataset and Model for Advancing Robust Semantic Segmentation in Construction Environments Maghsood Salimi School of Innovation, Design and Engineering, Mälardalen University, Sweden Mohammad Loni Future Solutions Department, Volvo Construction Equipment, Sweden Sara Afshar Future Solutions Department, Volvo Construction Equipment, Sweden Marjan Sirjani School of Innovation, Design and Engineering, Mälardalen University, Sweden Antonio Cicchetti School of Innovation, Design and Engineering, Mälardalen University, Sweden ====================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================

The increasing demand for autonomous machines in construction environments necessitates the development of robust object detection algorithms that can perform effectively across various weather and environmental conditions. This paper introduces a new semantic segmentation dataset specifically tailored for construction sites, taking into account the diverse challenges posed by adverse weather and environmental conditions. The dataset is designed to enhance the training and evaluation of object detection models, fostering their adaptability and reliability in real-world construction applications. Our dataset comprises annotated images captured under a wide range of weather conditions, including but not limited to sunny days, rainy periods, foggy atmospheres, and low-light situations. Additionally, environmental factors such as the existence of dirt/mud on the camera lens are integrated into the dataset through actual captures and synthetic generation to simulate the complex conditions prevalent in construction sites. For all images, including the synthetically generated ones, we provide precise semantic segmentation masks for objects commonly found in construction environments, such as wheel loaders, personnel, cars, and structural elements. To demonstrate the dataset's utility, we evaluate state-of-the-art object detection algorithms on our proposed benchmark. The results highlight the dataset's effectiveness for adversarially training models across diverse conditions, showcasing its efficacy compared to existing datasets that lack such environmental variability.

§ INTRODUCTION

The growing need for autonomous machines in construction environments underscores the importance of advancing resilient object detection algorithms <cit.>. To ensure the applicability of an object detection model to unforeseen situations, the training dataset should include diverse and representative data encompassing various weather and lighting conditions <cit.>. Meeting this demand is crucial for enhancing the reliability of autonomous driving in the heavy machinery industry.
However, the scarcity of training data covering diverse environmental conditions in construction environments hinders the ability of object detection models to generalize effectively. In addition, it can exacerbate issues like overfitting, where models become overly specialized to the limited data available. Addressing the problem of inadequate training data is pivotal for developing robust and versatile object detection models capable of meeting the demands of complex and dynamic environments. To address these challenges, this paper introduces a large-scale semantic segmentation dataset designed exclusively for construction sites, considering the varied challenges presented by diverse weather conditions and situations occurring during construction operations. We call this dataset robust semantic segmentation in construction environments, or, in short, ConstScene. This dataset is designed to enhance the training and evaluation of semantic segmentation models, fostering their adaptability and reliability in real-world construction applications.

Contributions. ConstScene comprises annotated images captured under a wide range of weather conditions, including but not limited to sunny days, rainy periods, foggy atmospheres, and low-light situations. Furthermore, environmental factors such as dust and the presence of dirt or mud on the camera lens are systematically integrated into the dataset to simulate the complex conditions prevalent in construction environments. The annotations include precise semantic segmentation masks for objects commonly found in construction environments, such as wheel loaders, personnel, cars, and structural elements. We augment our dataset by (i) applying a blurring effect; (ii) adding noise; and (iii) removing random regions through a cut-out process that simulates the presence of dirt or mud on the camera lens. Finally, employing a monocular depth estimation model, we provide a depth map for every image in our dataset (Section <ref>). To demonstrate the utility of our dataset, we evaluate two state-of-the-practice and state-of-the-art semantic segmentation models, U-Net <cit.> and SegFormer <cit.>, by training them on the ConstScene dataset. According to our experiments, the test accuracy of U-Net and SegFormer trained on the non-augmented ConstScene dataset reaches up to 55.48% and 82.27%, respectively. The dataset and the code for training models and data augmentation are made available on the GitHub repository: https://github.com/RobustInsight/ConstScene/.

§ RELATED WORK

To the best of our knowledge, ConstScene is the first high-resolution public semantic segmentation dataset captured in a construction environment that considers a wide range of objects and environmental conditions. This study also sheds light on the impact of data augmentation on a segmentation model's effectiveness in handling perturbed scenes.

On-Road Vision Datasets. Several research works have introduced vision datasets designed for applications in on-road autonomous driving <cit.>. While earlier studies have succeeded in collecting extensive datasets with significant variations, they overlooked the inclusion of construction sites. In other words, they assumed that autonomous machines would not navigate through construction areas.

Off-Road Vision Datasets.
<cit.> focused on guardrail detection in construction sites using a VGG16 model trained on a synthetic dataset containing 6,000 images. <cit.> collected images of five different construction machines, including loaders, bulldozers, excavators, backhoe diggers, and rollers. The MOCS dataset <cit.> gathered more than 40,000 images from various construction sites for the annotation of 13 distinct types of moving objects. Employing pixel-wise segmentation, they annotated the objects and evaluated the performance of various Convolutional Neural Networks (CNNs). ACID <cit.> collected 10,000 construction machine images for testing various object detection algorithms. The SODA dataset <cit.> is a comprehensive object detection dataset for construction sites, released with more than 20k images encompassing four object categories: worker, material, machine, and layout. Table <ref> compares ConstScene with off-road autonomous driving datasets. Compared to prior studies, ConstScene is the first to address semantic segmentation in challenging environments, such as rain, dust, and fog. In addition, we employ data augmentation techniques to robustify model training against perturbed data. Notably, our dataset is openly accessible, ensuring result reproducibility. This allows researchers to focus on introducing novel detection techniques rather than repeating the time-consuming data collection process.

§ DATA COLLECTION AND PREPARATION

§.§ Data Collection and Sensor Setup

The selection of objects in image datasets is vital for training models to precisely recognize elements such as pedestrians, machines, and piles, ultimately enhancing the reliability and decision-making capabilities of self-driving machines in diverse and dynamic construction environments. In this paper, we include objects commonly encountered in construction environments: construction machines (wheel loaders), crushers, humans, piles, roads, and background regions. The image data collection process poses several challenges, including variability in lighting and weather conditions, diverse backgrounds, occlusions, and the need to include authentic situations such as dust on the camera lens or workers climbing the ladder of a heavy machine. To address these challenges, we carefully select representative samples from diverse environments to ensure model generalization. In addition, we employ data augmentation (Section <ref>), where images are artificially manipulated to simulate variations in weather conditions. Occlusion challenges are mitigated by collecting images from multiple angles, while meticulous labeling and annotation processes provide the necessary ground truth for training robust models. For collecting image data, we utilized an RGB camera with a 1280×720 resolution, capturing images in JPEG format at a rate of 15 frames per second.

§.§ Data Annotation

We leverage the Roboflow <cit.> tool to annotate images.
The tool allows us to label objects with semantic segmentation masks, enabling precise annotations for object classes. For each image in the dataset, a single Portable Network Graphics (PNG) mask is generated to show the segmented regions corresponding to the different classes, providing a representation of the pixel-level annotations. After labeling the whole dataset, images were randomly divided into three categories: train, validation, and test.

§.§ Data Augmentation

Image augmentation plays a crucial role in enhancing the robustness and generalization capabilities of semantic segmentation models <cit.>. By diversifying the training dataset through changes in contrast and/or added noise perturbations, we ultimately improve the model's ability to perform well on unseen or slightly different data during testing. In this paper, we leverage three data augmentation techniques: (i) blurring images by averaging over 25 neighboring pixels, introducing perturbations that are hardly visible to the human eye; (ii) adding salt-and-pepper noise to 1% of the pixels; and (iii) removing eight random regions, each with a size of 3% of the image, through a cut-out process that simulates a dirty or muddy camera lens. All three augmentation techniques have been applied to all images, as shown in the samples in Fig. <ref>.

§.§ Depth Estimation

Depth estimation is the task of determining the distance from the camera to objects in a scene <cit.>. It is crucial for various applications, such as robotics, autonomous machines, and augmented/virtual reality. Given the significance of depth estimation in autonomous driving, we provide depth maps of the images in the dataset using a monocular depth estimation method <cit.>. Fig. <ref> shows multiple RGB samples of ConstScene with their corresponding depth maps across diverse weather and working conditions. The red circles in Fig. <ref>.c to Fig. <ref>.e mark areas of disparity with low-confidence matches for conditions with fog, dust, and dirt on the camera lens. Consequently, adverse weather conditions significantly diminish the accuracy of depth estimation, emphasizing the crucial need to address this challenge in computer vision pipelines employed in construction environments.

§ DATASET STATISTICS

ConstScene is available in two distinct versions: the original and the augmented version. The original version comprises 3470 images across training, validation, and test sets, while the augmented version consists of 6240 images. The training, validation, and test splits of the original dataset are 80%, 10%, and 10%, respectively. For the augmented version, these sets were allocated at proportions of 90%, 5%, and 5%. Fig. <ref> illustrates the class distribution in the dataset, indicating the number of images per class across the training, validation, and test sets in the original dataset.

§ EXPERIMENTAL RESULTS

§.§ Evaluation Metrics

We use the mean intersection over union (mIoU) as the evaluation metric for assessing predictions in our experiments. The mIoU used in semantic segmentation is defined as mIoU = (1/N) ∑_i=1^N |X_i ∩ Y_i|/|X_i ∪ Y_i|, where N, X_i, and Y_i are the total number of classes, the set of pixels predicted to belong to class i, and the set of pixels in the ground truth that belong to class i, respectively.
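A minimal sketch of this metric on integer class masks is shown below; the toy masks are illustrative, and classes absent from both prediction and ground truth are skipped, which is one common convention.

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """mIoU = (1/N) * sum_i |X_i & Y_i| / |X_i | Y_i| over present classes."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:                 # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))

gt   = np.array([[0, 0, 1, 1], [0, 0, 1, 1], [2, 2, 2, 2], [2, 2, 2, 2]])
pred = np.array([[0, 0, 1, 0], [0, 0, 1, 1], [2, 2, 2, 2], [2, 1, 2, 2]])
print(mean_iou(pred, gt, num_classes=3))   # ~0.76 for this toy pair
```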
§.§ Experimental Setup

Training Dataset. We conduct evaluations on two versions of ConstScene: (i) D1, representing the original dataset without any data augmentation; and (ii) D2, denoting the original dataset incorporating the three augmentation techniques (blurring + salt-and-pepper noise + region removal).

Model Details. For the evaluation, we leverage the U-Net model <cit.> with ResNet18 and ResNet50 <cit.> encoder architectures, and SegFormer <cit.> as a state-of-the-art Vision Transformer (ViT) <cit.> model. A U-Net is a CNN architecture designed for image segmentation tasks. It features a U-shaped structure comprising a contracting path for capturing high-level features and a corresponding expansive path for precise segmentation. Skip connections maintain spatial details. The U-Net effectively balances local and global contexts, making it versatile for various computer vision applications. In our exploration of ViTs, we examined various iterations of the advanced SegFormer model <cit.>. SegFormer has two main modules: a hierarchical transformer encoder, and a lightweight multi-layer-perceptron decoder that predicts the final segmentation mask.

Training Details. Table <ref> summarizes the training parameters for the U-Net and SegFormer models. We utilize an NVIDIA® A5000 for training the U-Net and SegFormer models on ConstScene. Considering sustainability, it is noteworthy to highlight the carbon emissions associated with the training process. Using the machine learning CO_2 impact calculator tool [https://mlco2.github.io/impact/], the total carbon footprint for all experiments is roughly 8.65 kg CO_2.

§.§ Prediction Results

Analyzing Results of U-Net. Table <ref> presents the test results of U-Net with various backbone architectures trained on the original ConstScene with and without the incorporation of data augmentation techniques. The results show that U-Net with the ResNet50 encoder trained on the non-augmented dataset (D1) provides the best results (55.48%) compared to other training settings. In addition, data augmentation (D2) provides up to 11.59% higher accuracy for U-Net with the ResNet50 encoder, demonstrating that the model better learns the variations present in real-world data.

Analyzing Results of SegFormer. Table <ref> shows the test results of SegFormer trained on ConstScene. The findings reveal that SegFormer-B5, when trained on the original dataset (D1), achieves an accuracy of 82.27%, surpassing SegFormer-B0's results by an improvement of 35.3%.

Analyzing Results of Data Augmentation. When SegFormer-B5 undergoes training using the dataset enhanced with data augmentation (D2), it yields superior accuracy compared to the non-augmented dataset. This suggests that the blurring + salt-and-pepper noise + region-removal augmentation pipeline, sketched below, contributes to the model's improved ability to learn variations present in real-world data.
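The following is a minimal sketch of that pipeline, assuming plain NumPy and the parameters stated in the Data Augmentation section (5×5 averaging over 25 neighboring pixels, salt-and-pepper noise on 1% of the pixels, and eight cut-out regions of roughly 3% of the image each); the released repository may implement it differently.

```python
import numpy as np

rng = np.random.default_rng(42)

def mean_blur_5x5(img):
    """Average each pixel over its 25-pixel (5x5) neighborhood."""
    out = img.astype(np.float32)
    pad = np.pad(out, ((2, 2), (2, 2), (0, 0)), mode="edge")
    acc = np.zeros_like(out)
    for dy in range(5):
        for dx in range(5):
            acc += pad[dy:dy + out.shape[0], dx:dx + out.shape[1]]
    return (acc / 25.0).astype(img.dtype)

def salt_pepper(img, frac=0.01):
    """Set a random 1% of pixels to pure black or white."""
    out = img.copy()
    mask = rng.random(out.shape[:2]) < frac
    out[mask] = rng.choice([0, 255], size=(mask.sum(), 1))
    return out

def cutout(img, n_regions=8, area_frac=0.03):
    """Zero out eight square patches, each ~3% of the image area,
    mimicking dirt or mud occluding the camera lens."""
    out = img.copy()
    h, w = out.shape[:2]
    side = int(np.sqrt(area_frac * h * w))
    for _ in range(n_regions):
        y, x = rng.integers(0, h - side), rng.integers(0, w - side)
        out[y:y + side, x:x + side] = 0
    return out

img = rng.integers(0, 256, size=(720, 1280, 3), dtype=np.uint8)  # stand-in frame
aug = cutout(salt_pepper(mean_blur_5x5(img)))
```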
§.§ Advantages of ConstScene in Defending Against Adversarial Attacks

Adversarial attacks <cit.> involve the deliberate manipulation of input data to cause a model to produce incorrect or misleading outputs. Adversarial attacks can significantly threaten the reliability and security of machine learning systems. Challenging weather conditions and natural phenomena, such as poor lighting and shadows, can be categorized as a type of black-box adversarial attack <cit.>. ConstScene can be leveraged to improve a model's robustness against such adversarial attacks through an adversarial training strategy. By deliberately exposing the model to adverse weather samples during training, (i) the model learns to defend against such attacks and becomes more adept at recognizing and mitigating adversarial inputs; and (ii) the model is better equipped to handle variations and distortions in real-world data, making it more resilient against adversarial inputs.

§ CONCLUSION AND FUTURE WORK

We presented a novel large-scale dataset, dubbed ConstScene, for studying semantic object detection in construction environments across various weather and environmental conditions, with extensive annotations for comprehensive understanding and analysis. We extend our dataset by employing data augmentation techniques aligned with the environmental hazards prevalent in construction sites. We also showed the results of two popular semantic segmentation models trained on our proposed dataset. In future work, semantic segmentation in construction environments can be further improved by (i) considering objects from other construction segments such as mining; and (ii) generating synthetic data using generative adversarial networks. We hope our new findings and the proposed dataset will stimulate the development of innovative techniques to enhance the efficacy of object detection models against adversarial attacks.

§.§.§ Acknowledgements

This work was supported by Volvo Construction Equipment AB, and the Swedish research financier for universities, KK-stiftelsen, through the https://sacsys.github.io/main/SACSys project.

ahmed2021survey Ahmed, M., Hashmi, K.A., Pagani, A., Liwicki, M., Stricker, D., Afzal, M.Z.: Survey and performance analysis of deep learning based object detection in challenging environments. Sensors 21(15), 5116 (2021)
alomar2023data Alomar, K., Aysel, H.I., Cai, X.: Data augmentation in classification and segmentation: A survey and new strategies. Journal of Imaging 9(2), 46 (2023)
behley2019semantickitti Behley, J., Garbade, M., Milioto, A., Quenzel, J., Behnke, S., Stachniss, C., Gall, J.: Semantickitti: A dataset for semantic scene understanding of lidar sequences. In: Proceedings of the IEEE/CVF international conference on computer vision. pp. 9297–9307 (2019)
caesar2020nuscenes Caesar, H., Bankiti, V., Lang, A.H., Vora, S., Liong, V.E., Xu, Q., Krishnan, A., Pan, Y., Baldan, G., Beijbom, O.: nuscenes: A multimodal dataset for autonomous driving. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. pp. 11621–11631 (2020)
chakraborty2018adversarial Chakraborty, A., Alam, M., Dey, V., Chattopadhyay, A., Mukhopadhyay, D.: Adversarial attacks and defences: A survey. arXiv preprint arXiv:1810.00069 (2018)
cordts2016cityscapes Cordts, M., Omran, M., Ramos, S., Rehfeld, T., Enzweiler, M., Benenson, R., Franke, U., Roth, S., Schiele, B.: The cityscapes dataset for semantic urban scene understanding. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 3213–3223 (2016)
dosovitskiy2020image Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020)
duan2022soda Duan, R., Deng, H., Tian, M., Deng, Y., Lin, J.: Soda: A large-scale open site object detection dataset for deep learning in construction.
Automation in Construction 142, 104499 (2022)
roboflow2022 Dwyer, B., Nelson, J., Solawetz, J., et al.: Roboflow (version 1.0). Software (2022), available from <https://roboflow.com>
geiger2013vision Geiger, A., Lenz, P., Stiller, C., Urtasun, R.: Vision meets robotics: The kitti dataset. The International Journal of Robotics Research 32(11), 1231–1237 (2013)
he2016deep He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 770–778 (2016)
huang2018apolloscape Huang, X., Cheng, X., Geng, Q., Cao, B., Zhou, D., Wang, P., Lin, Y., Yang, R.: The apolloscape dataset for autonomous driving. In: Proceedings of the IEEE conference on computer vision and pattern recognition workshops. pp. 954–960 (2018)
kim2015robust Kim, J., Chi, S.: Robust real-time object detection on construction sites using integral channel features. In: Proceedings of the 6th International Conference on Construction Engineering and Project Management (ICCEPM 2015), Busan, Korea. pp. 304–309 (2015)
kolar2018transfer Kolar, Z., Chen, H., Luo, X.: Transfer learning and deep convolutional neural networks for safety guardrail detection in 2d images. Automation in Construction 89, 58–70 (2018)
loni2020densedisp Loni, M., Zoljodi, A., Maier, D., Majd, A., Daneshtalab, M., Sjödin, M., Juurlink, B., Akbari, R.: Densedisp: Resource-aware disparity map estimation by compressing siamese neural architecture. In: 2020 IEEE Congress on Evolutionary Computation (CEC). pp. 1–8. IEEE (2020)
loni2021faststereonet Loni, M., Zoljodi, A., Majd, A., Ahn, B.H., Daneshtalab, M., Sjödin, M., Esmaeilzadeh, H.: Faststereonet: A fast neural architecture search for improving the inference of disparity estimation on resource-limited platforms. IEEE Transactions on Systems, Man, and Cybernetics: Systems 52(8), 5222–5234 (2021)
meng2022survey Meng, L., Gang, Z., Yawei, Y., Jun, S.: Survey of object detection methods under adverse weather conditions. Journal of Computer Engineering & Applications 58(13) (2022)
ming2021deep Ming, Y., Meng, X., Fan, C., Yu, H.: Deep learning for monocular depth estimation: A review. Neurocomputing 438, 14–33 (2021)
nath2020deep Nath, N.D., Behzadan, A.H.: Deep convolutional networks for construction object detection under different visual conditions. Frontiers in Built Environment 6, 97 (2020)
ranftl2020towards Ranftl, R., Lasinger, K., Hafner, D., Schindler, K., Koltun, V.: Towards robust monocular depth estimation: Mixing datasets for zero-shot cross-dataset transfer. IEEE Transactions on Pattern Analysis and Machine Intelligence 44(3), 1623–1637 (2020)
ronneberger2015u Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation. In: Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18. pp. 234–241. Springer (2015)
salimi2023learning Salimi, M., Loni, M., Sirjani, M.: Learning activation functions for adversarial attack resilience in cnns. In: International Conference on Artificial Intelligence and Soft Computing. pp. 203–214. Springer (2023)
salimi2023saraf Salimi, M., Loni, M., Sirjani, M., Cicchetti, A., Abbaspour Asadollah, S.: Saraf: Searching for adversarial robust activation functions. In: Proceedings of the 2023 6th International Conference on Machine Vision and Applications. pp.
174–182 (2023)
somua2019computer Somua-Gyimah, G., Frimpong, S., Nyaaba, W., Gbadam, E.: A computer vision system for terrain recognition and object detection tasks in mining and construction environments. In: SME annual conference (2019)
sun2020scalability Sun, P., Kretzschmar, H., Dotiwalla, X., Chouard, A., Patnaik, V., Tsui, P., Guo, J., Zhou, Y., Chai, Y., Caine, B., et al.: Scalability in perception for autonomous driving: Waymo open dataset. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. pp. 2446–2454 (2020)
tajeen2014image Tajeen, H., Zhu, Z.: Image dataset development for measuring construction equipment recognition performance. Automation in Construction 48, 1–10 (2014)
wang2022survey Wang, D., Yao, W., Jiang, T., Tang, G., Chen, X.: A survey on physical adversarial attack in computer vision. arXiv preprint arXiv:2209.14262 (2022)
xiao2021development Xiao, B., Kang, S.C.: Development of an image data set of construction machines for deep learning object detection. Journal of Computing in Civil Engineering 35(2), 05020005 (2021)
xie2021segformer Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. Advances in Neural Information Processing Systems 34, 12077–12090 (2021)
xuehui2021dataset Xuehui, A., Li, Z., Zuguang, L., Chengzhi, W., Pengfei, L., Zhiwei, L.: Dataset and benchmark for detecting moving objects in construction sites. Automation in Construction 122, 103482 (2021)
yu2020bdd100k Yu, F., Chen, H., Wang, X., Xian, W., Chen, Y., Liu, F., Madhavan, V., Darrell, T.: Bdd100k: A diverse driving dataset for heterogeneous multitask learning. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. pp. 2636–2645 (2020)
zhong2022shadows Zhong, Y., Liu, X., Zhai, D., Jiang, J., Ji, X.: Shadows can be dangerous: Stealthy and effective physical-world adversarial attack by natural phenomenon. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 15345–15354 (2022) | http://arxiv.org/abs/2312.16516v1 | {
"authors": [
"Maghsood Salimi",
"Mohammad Loni",
"Sara Afshar",
"Marjan Sirjani",
"Antonio Cicchetti"
],
"categories": [
"cs.CV"
],
"primary_category": "cs.CV",
"published": "20231227104919",
"title": "ConstScene: Dataset and Model for Advancing Robust Semantic Segmentation in Construction Environments"
} |
Transverse electric waves in Bandos-Lechner-Sorokin-Townsend nonlinear electrodynamics Towe Wang January 14, 2024======================================================================================

Recent experiments strongly indicate deep connections between the transport properties of strange metals and high-T_c superconductors. For example, the dependence of the zero-temperature phase stiffness on the critical superconducting temperature is generally linear, which is incompatible with the standard Bardeen-Cooper-Schrieffer description. We explore the scaling relations among the superconducting critical temperature, the superfluid density, and the momentum dissipation (disorder) strength for the Gubser-Rocha model with extensions in the probe limit. The critical temperature is evaluated by using both the Sturm-Liouville eigenvalue method and numerical calculations. In the normal phase, we show that the critical temperature is proportional to the momentum dissipation (disorder) strength in a certain parameter range. In the superconducting phase, studying the AC conductivity analytically and numerically, we find a linear dependence of the zero-temperature superfluid density (phase stiffness) on the critical superconducting temperature, which is consistent with recent experiments on high-T_c superconductors. These results further underpin the deep connections between strange metals and high-T_c superconductors.

§ INTRODUCTION

Superconductivity was one of the most significant discoveries of early twentieth-century physics. Since its discovery, overwhelming experimental and theoretical research has been devoted to superconductivity; one of the most striking achievements is BCS theory<cit.>. This theory assumes that the fundamental characteristics of superconductivity are caused by Cooper pair correlations. As experiments progressed, high-temperature oxide superconductors<cit.> and iron-based superconductors<cit.> were found. The description of these superconductors is beyond the scope of BCS theory. Lately, research on disordered superconductors has shown that both T_c and the superfluid density n_s decrease with disorder in a characteristic way<cit.>. In addition, the superfluid density evolves towards a linear-T dependence at higher temperatures. These phenomena differ from what is expected in BCS theory. There is thus great interest in, and challenge to, studying how the critical temperature T_c is related to the momentum dissipation strength k and the superfluid density n_s. Some experimental papers <cit.> suggest that T_c seems to be principally controlled by the superfluid density. More remarkably, the superfluid density displays a strong linear temperature dependence within a certain temperature range.

In this paper, we explore universal properties of superconductors, e.g., scaling relations, by using the AdS/CFT correspondence in a specific holographic model. The AdS/CFT correspondence<cit.>, also known as holography or gauge/gravity duality, is a framework for tracing the behavior of strongly-coupled quantum systems in terms of a higher-dimensional gravity theory. It can be a very powerful tool to tackle strongly interacting systems by studying gravitational models. In particular, holographic techniques have been widely applied to the study of condensed matter physics <cit.>. This kind of model, known as holographic superconductors, remains an attractive topic of theoretical research.
They have the potential to provide new insights into the nature of superconducting materials and could lead to the development of new technologies based on superconductivity. Based on the AdS/CFT correspondence, the first minimal bottom-up construction was implemented by Hartnoll, Herzog, and Horowitz<cit.>. In Ref. <cit.>, the authors considered a charged complex scalar and a Maxwell field in the Schwarzschild-AdS_4 (SAdS_4) spacetime. Although this is a simple model, it successfully reproduces the typical behavior of a superconductor. The calculation done in <cit.> was in the so-called probe limit, ignoring the backreaction of the matter sector on the gravity sector. In <cit.>, the authors considered the Einstein-Maxwell theory with the complex scalar and found that the probe-limit calculation corresponds to the infinite-charge case with appropriate scaling. In any case, the complete analysis of the superconducting phase requires numerical computations. Meanwhile, some analytic properties of the holographic superconductor in the superconducting phase have been studied<cit.>. Since the complete analysis requires numerical computations, those works provided approximate formulas. From then on <cit.>, many papers have appeared that apply this useful idea to investigate superconductors in different setups <cit.>.

Recently, it has been recognized that strange metallic behavior is universal in superconductors like the high-T_c cuprates. The strange metal is often characterized by the linear-T dependence of its resistivity<cit.>. In holographic studies, the Gubser-Rocha model<cit.> is known as one of the candidates describing the strange metallic behavior of superconductors. This model is an Einstein-Maxwell-dilaton theory, and axion fields are often introduced to break the translational invariance<cit.>. The original model is consistent with a specific truncation of 11-dimensional supergravity. This model has been widely utilized for studying the holographic version of the high-T_c cuprate superconductors. On the other hand, most of the results are obtained by numerical computations, similar to the studies in the original holographic superconductor model. It is worth investigating the analytic properties of this model. In this paper, therefore, we study some analytic properties of the holographic superconductor model based on the Gubser-Rocha model, e.g., the critical temperature and the AC conductivity, by using some approximations.

The main purpose of this paper is to study the scaling relations among the superconducting critical temperature, the superfluid density, and the momentum-dissipation (disorder) strength in the holographic superconductor model based on the Gubser-Rocha model. First, we study the critical temperature by finding the onset of the charged-scalar instability. Taking advantage of the Sturm-Liouville method, we investigate how the momentum-dissipation strength affects the critical temperature. The results obtained by the numerical and approximate methods are in good agreement with each other. Using the approximate formula, we find the scaling relation between the critical temperature and the strength of the momentum dissipation. Next, we also study the AC conductivity in the superconducting phase. We read off the superfluid density from the AC conductivity, then investigate the relation between the zero-temperature superfluid density and the critical temperature.
As a result, we find a linear scaling relation between them in a specific case, which is similar to the recent experimental result<cit.>. The organization of this paper is as follows. The holographic setup of our model is given in section <ref>. We investigate the critical temperature of the superconducting phase in section <ref>. In section <ref>, we turn to study the conductivity, and consider an analytic approximation for the AC conductivity in the superconducting phase. We provide the conclusion and discussion in section <ref>. In Appendix <ref>, we demonstrate how the profile of the critical temperature with general q from the full analysis with backreactions agrees with the probe-limit results in the large-q limit. In Appendix <ref>, we show another approximate formula for the AC conductivity.

§ SETUP

In this study, we employ the Gubser-Rocha model <cit.> extended with the linear axions and the charged scalar, for studying the superconducting nature <cit.>. The action of our model is given by <cit.>

S = ∫ d^4x √(-g) (ℒ_g + ℒ_m),
ℒ_g = R - (1/2)(∂ϕ)^2 + 6 cosh(ϕ/√(3)) - (1/2) ∑_I (∂ψ_I)^2,
ℒ_m = -(1/4) e^{ϕ/√(3)} F^2 - |DΦ|^2 - B(ϕ)|Φ|^2.

The action consists of two components: ℒ_g, the Lagrangian of the gravity sector, and ℒ_m, the Lagrangian of the matter sector. The model involves the gravitational field g_μν, a dilaton ϕ, axions ψ_I, a U(1) gauge field, and a complex scalar field Φ. It was shown that the presence of the dilaton field can realize vanishing entropy at zero temperature<cit.>. The axions are originally introduced to break the translational invariance explicitly in the full analysis with backreactions <cit.>. F_μν = ∂_μ A_ν - ∂_ν A_μ is the field strength of the U(1) gauge field. The covariant derivative D is defined by D_μ := ∇_μ - iqA_μ. B(ϕ) is a coupling factor between the dilaton field ϕ and the complex scalar field Φ. We focus on the probe limit, which corresponds to taking q→∞. In the probe limit, the gravity and matter sectors are described separately by ℒ_g and ℒ_m, respectively. The equations of motion for the dilaton and the axions obtained from ℒ_g are

∇^2 ϕ + 2√(3) sinh(ϕ/√(3)) = 0,  ∇^2 ψ_I = 0,

and the Einstein equation becomes

R_μν - (1/2) g_μν [R - (1/2)(∂ϕ)^2 + 6cosh(ϕ/√(3)) - (1/2) ∑_{I=1}^{2}(∂ψ_I)^2] = (1/2) ∂_μϕ ∂_νϕ + (1/2) ∑_{I=1}^{2} ∂_μψ_I ∂_νψ_I.

Considering the metric ansatz

ds^2 = -f(r) dt^2 + dr^2/f(r) + h(r)(dx^2 + dy^2),

we obtain the following solution <cit.>:

f(r) = r^2 (1 - P/r)^{1/2} (1 - k^2/(2r^2)),  h(r) = r^2 (1 - P/r)^{1/2},
ϕ(r) = -(√(3)/2) ln(1 - P/r),  ψ_I = (kx, ky),

where P is an integration constant corresponding to the location of the curvature singularity at r = P, and it can be expressed in terms of the temperature T and the momentum dissipation strength k, see Eq. (<ref>). [Note that the blackening factor is still different from f(r) = r^2(1 - r_h^3/r^3) in the SAdS_4, even if one sets P=0.] k is a positive constant of the linear axions, which can be understood as the strength of the momentum dissipation (disorder) in the backreacted setup. Remark that k was introduced to break the translational symmetry. In this study, we naively expect that k still holds some aspects of the strength of the momentum dissipation, even in the probe limit.
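As a quick consistency check (our own sketch, not part of the original analysis), the dilaton equation of motion can be verified symbolically for this background, using □ϕ = (h f ϕ')'/h, which follows from √(-g) = h and g^{rr} = f for the metric ansatz above.

```python
import sympy as sp

r, P, k = sp.symbols("r P k", positive=True)

# the dilatonic background of the Gubser-Rocha model with linear axions
f   = r**2 * sp.sqrt(1 - P/r) * (1 - k**2 / (2*r**2))
h   = r**2 * sp.sqrt(1 - P/r)
phi = -sp.sqrt(3)/2 * sp.log(1 - P/r)

box_phi = sp.diff(h * f * sp.diff(phi, r), r) / h         # box(phi) for this ansatz
eom = box_phi + 2*sp.sqrt(3)*sp.sinh(phi/sp.sqrt(3))      # dilaton equation of motion

print(sp.simplify(eom.rewrite(sp.exp)))                   # -> 0
```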
The geometry is a black hole spacetime with the horizon at r = r_h = k/√(2). To avoid a naked singularity, the range of P is restricted to -∞ < P < r_h. In the probe limit, the gravity sector is decoupled from the Maxwell field; therefore, this solution is a neutral black hole solution. Note that this solution was also studied in Refs. <cit.>, and in Ref. <cit.> without the axions. This is also a special neutral case of the charged black hole studied in Refs. <cit.> (see also Appendix <ref>). The authors of Ref. <cit.> studied the same model for general q beyond the probe limit. We call the above solution the `dilatonic black hole'. In this study, we focus on the probe-limit analysis to investigate the qualitative properties and the analytic relations in this system. The Hawking temperature reads

T = (r_h/2π) √(1 - P/r_h).

In order to obtain positive T, the range of P is again restricted to -∞ < P < r_h.

On the other hand, the model also admits another neutral solution without the dilaton hair, which is given by <cit.>

f(r) = r^2 [1 - k^2/(2r^2) - (r_h^3/r^3)(1 - k^2/(2r_h^2))],  h(r) = r^2,  ϕ(r) = 0,  ψ_I = (kx, ky).

We call it the `bald black hole' solution in this study. In this solution, k and r_h are independent parameters, but the number of parameters does not change. Since the dilaton vanishes, the solution is the same as the neutral solution in the linear axions model <cit.>. Now, the Hawking temperature is given by

T = (6r_h^2 - k^2)/(8π r_h).

The range of k is restricted by T>0 as k < √(6) r_h. This solution is thermodynamically favored in the range T > k/(2√(2)π) <cit.>. Therefore, we need to consider this solution as the background metric, rather than the dilatonic black hole (<ref>), at high temperatures where T > k/(2√(2)π).

Under the probe limit, we can analyze the matter sector by treating the gravity, dilaton, and axions as non-dynamical background fields given by Eq. (<ref>). For the matter sector, the equations of motion are

∇_μ(e^{ϕ/√(3)} F^{μν}) - iqΦ^*(∂^ν - iqA^ν)Φ + iqΦ(∂^ν + iqA^ν)Φ^* = 0,  D_μ D^μ Φ - B(ϕ)Φ = 0.

We choose the mass term of Φ as

B(ϕ) = M^2 cosh(τϕ),

where M is the mass of the scalar field near r = ∞, and τ is a constant determining the coupling with the dilaton field. In this study, we fix M^2 = -2, which fixes the dimension of the charged-scalar operator to Δ = 1 or 2. The authors of Ref. <cit.> indicate that the superconducting instability can be triggered more easily at a larger coupling τ. For a more detailed argument about B(ϕ), see Ref. <cit.>. Now, we consider the following ansatz:

Φ = Φ(r),  A = A_t(r) dt,

where Φ(r) can be taken to be a real function. With the above ansatz, the equations of motion become

Φ''(r) + (f'(r)/f(r) + h'(r)/h(r)) Φ'(r) + (1/f(r))(q^2 A_t(r)^2/f(r) - B(ϕ)) Φ(r) = 0,
A_t''(r) + (h'(r)/h(r) + ϕ'(r)/√(3)) A_t'(r) - (2 e^{-ϕ(r)/√(3)} q^2 Φ(r)^2/f(r)) A_t(r) = 0.

Near the boundary, these fields have the following asymptotic expansions:

Φ(r) = (Φ^(1)/√(2))/r + (Φ^(2)/√(2))/r^2 + ⋯,  A_t(r) = μ - ρ/r + ⋯.

For the vector field, μ and ρ are dual to the chemical potential and the charge density of the boundary conformal field theory, respectively. For the scalar field, both the leading and the subleading terms can be normalizable in our case <cit.>. Similarly to Ref. <cit.>, we refer to the case where Φ^(i) = O_i and Φ^(j) = 0 as the O_i theory, where i = 1 or 2 and j ≠ i. The choice of the O_2 theory is called the standard quantization, whereas the O_1 theory is realized by adding a corresponding boundary action. O_i is considered as an operator with dimension i in the boundary theory. For later convenience, we introduce the coordinate z := r_h/r.
In this coordinate, the horizon is located at z=1 and the boundary at z=0. Under this transformation, Eqs. (<ref>) and (<ref>) become

Φ''(z) + (2/z + f'(z)/f(z) + h'(z)/h(z)) Φ'(z) + (r_h^2/z^4)(q^2 A_t(z)^2/f(z)^2 - B(ϕ(z))/f(z)) Φ(z) = 0,
A_t''(z) - (2 e^{-ϕ(z)/√(3)} q^2 r_h^2 Φ(z)^2/(z^4 f(z))) A_t(z) = 0,

respectively. Note that the prime now denotes the derivative with respect to z.

§ SCALING RELATION BETWEEN TC AND K

In this section, we investigate the phase boundaries by using both numerical and analytic approximate methods. Using the approximate result, we find the linear scaling relation between T_c and k, Eq. (<ref>), for large k in the O_1 theory. We also present numerical results for the scalar condensation.

In the normal phase, i.e., Φ=0, Eq. (<ref>) becomes

A_t''(z) = 0.

With the boundary condition (<ref>) and A_t(r_h) = 0, the solution of this equation reads

A_t(z) = μ(1-z),  μ = ρ/r_h,

where μ is an integration constant that can be read as the chemical potential in the boundary theory. ρ can be regarded as the charge density.

§.§ Phase boundaries

Below T = T_c, the normal-phase solution becomes unstable against perturbations of the charged scalar. We investigate the phase boundaries by finding the onset of this scalar instability. In the z coordinate, the expansion of Φ is written as

Φ(z) = (Φ^(1)/√(2)) z/r_h + (Φ^(2)/√(2)) z^2/r_h^2 + ⋯.

As we have mentioned, we have two choices of the normalizable modes. We consider the following ansatz of the perturbation for each theory. For the O_1 theory, we write Φ(z) as

Φ(z) = (⟨O_1⟩/√(2)) (z/r_h) F(z),  F(0)=1,  F'(0)=0.

For the O_2 theory, we write Φ(z) as

Φ(z) = (⟨O_2⟩/√(2)) (z/r_h^2) F(z),  F(0)=0,  F'(0)=1.

We also impose a regularity condition on F(z) at z=1. The equation of motion (<ref>) becomes

F''(z) + (4/z + f'(z)/f(z) + h'(z)/h(z)) F'(z) + (2/z^2 + h'(z)/(z h(z)) + r_h^2 (q^2 A_t(z)^2 - B(ϕ(z)) f(z))/(z^4 f(z)^2) + f'(z)/(z f(z))) F(z) = 0.

This can be understood as a Sturm-Liouville (SL) problem. The equation in the SL form becomes

d/dz { p(z) F'(z) } + q(z) F(z) + λ r(z) F(z) = 0,

where

p(z) = z^4 f(z) h(z),
q(z) = z^3 f(z) h(z) (2/z + f'(z)/f(z) + h'(z)/h(z)) - r_h^2 B(ϕ(z)) h(z),
r(z) = (1-z)^2 r_h^4 h(z)/f(z),
λ = q^2 μ^2/r_h^2.

We can solve the above equation numerically. The minimum eigenvalue λ will correspond to the critical value of μ, or T_c after rescaling parameters. Moreover, we can also study an approximation for the lowest eigenvalue, as follows. Supposing that Ψ_n is the eigenfunction for the n-th eigenvalue λ = λ_n, we can formally write λ_n as

λ_n = -∫_0^1 [∂_z(p(z)Ψ_n'(z))Ψ_n(z) + q(z)Ψ_n(z)^2] dz / ∫_0^1 r(z)Ψ_n(z)^2 dz
    = ( -p(z)Ψ_n(z)Ψ_n'(z)|_0^1 + ∫_0^1 [p(z)Ψ_n'(z)^2 - q(z)Ψ_n(z)^2] dz ) / ∫_0^1 r(z)Ψ_n(z)^2 dz.

This is called the Rayleigh quotient. We leverage several properties of the Sturm-Liouville eigenvalue problem. One of them is that the set of eigenfunctions is complete, i.e., F(z) ∼ ∑_{n=1}^∞ c_n Ψ_n(z). Thus, we can generalize this formula to a trial function F(z) which is not a solution of the ordinary differential equation (<ref>) but satisfies the boundary conditions. In this study, we consider trial functions with one parameter α, denoted by F_α. In the SL problem, the smallest eigenvalue λ_1 always exists. Correspondingly, λ_α, associated with the trial function F_α, also has a minimum. We write

λ̂_1 = min_α ( -p(z)F_α(z)F_α'(z)|_0^1 + ∫_0^1 [p(z)F_α'(z)^2 - q(z)F_α(z)^2] dz ) / ∫_0^1 r(z)F_α(z)^2 dz.

This becomes an estimate of the actual smallest eigenvalue: λ̂_1 ≈ λ_1.
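As an illustration, the variational estimate can be carried out numerically. The sketch below specializes, as an assumption, to the O_1 trial function F_α = 1 + αz^2 in the bald black-hole background with r_h = 1, where p, q_SL and r_SL reduce to closed forms in g(z) = 1 - k̃^2 z^2/2 - (1 - k̃^2/2) z^3 (we rename the SL coefficient q(z) to q_SL to avoid clashing with the charge q); the boundary term of the Rayleigh quotient vanishes since F'(0) = 0 and p(1) = 0.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def rayleigh(alpha, kt):
    """Rayleigh quotient for the O_1 trial function in the bald BH (r_h = 1),
    with p = g, q_SL = g'/z + 2(1-g)/z^2 (using B = M^2 = -2),
    r_SL = (1-z)^2/g, and lambda = (q mu)^2."""
    g  = lambda z: 1 - kt**2 * z**2 / 2 - (1 - kt**2 / 2) * z**3
    gp = lambda z: -kt**2 * z - 3 * (1 - kt**2 / 2) * z**2
    F  = lambda z: 1 + alpha * z**2              # trial: F(0)=1, F'(0)=0
    Fp = lambda z: 2 * alpha * z
    num = quad(lambda z: g(z) * Fp(z)**2
               - (gp(z) / z + 2 * (1 - g(z)) / z**2) * F(z)**2, 0, 1)[0]
    den = quad(lambda z: (1 - z)**2 / g(z) * F(z)**2, 0, 1)[0]
    return num / den

for kt in (0.1, 0.5, 1.0, 1.3):                  # kt = k/r_h < sqrt(2)
    lam = minimize_scalar(lambda a: rayleigh(a, kt),
                          bounds=(-5, 5), method="bounded").fun
    print(f"k/(q mu) = {kt / np.sqrt(lam):.3f},  "
          f"T_c/(q mu) = {(6 - kt**2) / (8 * np.pi * np.sqrt(lam)):.4f}")
```

Scanning k̃ = k/r_h traces out the bald-black-hole branch of the phase boundary in the T_c/qμ–k/qμ plane.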
We have to note that the background spacetime also exhibits a phase transition between Eq. (<ref>) and Eq. (<ref>). Since we are working in the probe limit, this phase transition occurs at T = k/(2√(2)π), regardless of the charged-scalar instability. In the following, we first explore the charged-scalar instability in each background geometry. After that, we will show the correct phase boundaries, as in Fig. <ref>, by combining the results obtained in each geometry.

First, we consider the O_1 theory in each background geometry. We choose the following trial function with one parameter α in both background black-hole geometries:

F_α(z) = 1 + α z^2.

It satisfies the boundary conditions F(0)=1 and F'(0)=0. We obtain the approximate formula for the minimal eigenvalue λ̂_1. Figure <ref> shows the phase boundaries in the T_c/qμ–k/qμ plane obtained by the approximation, together with the numerical results for comparison. Note that q can be scaled out in the probe-limit analysis. [In the backreacted case, q can no longer be scaled out; different q describes different setups. Similarly to the original model of the holographic superconductor, it is expected that the probe limit corresponds to the q→∞ limit. See Appendix <ref>.] The approximation agrees well with the numerical result. Since the full expression of the approximation is too complicated, we do not exhibit it here. In the O_1 theory, the phase boundaries always lie in T > k/(2√(2)π). In the case of the bald black hole, the curve is parameterized by k/r_h in 0 < k/r_h < √(2). As k/r_h → √(2), the curve approaches the dotted line denoting T = k/(2√(2)π). In the case of the dilatonic black hole, the curves are parameterized by P/r_h in -∞ < P/r_h < 0, and small |P|/r_h corresponds to large k/qμ. Expanding the approximation for τ=0 in small ϵ ≡ -P/r_h, we obtain

λ_{F_α} = ϵ/(4 ln 2 - 2) + 𝒪(ϵ^2).

Using this leading expansion, T_c/qμ and k/qμ can be written as

T_c/qμ = (√(ln 2 - 1/2)/π) ϵ^{-1/2} + 𝒪(ϵ^{1/2}),  k/qμ = 2√(2 ln 2 - 1) ϵ^{-1/2} + 𝒪(ϵ^{1/2}).

We thus obtain the scaling relation for large k as

T_c ≈ k/(2√(2)π).

T_c approaches the critical value of the phase transition of the background spacetime. In the case of the bald black hole, T_c approaches this value for large k, too. This can also be confirmed from the numerical results for T_c/qμ at large k/qμ (see Fig. <ref>). This result illustrates that the critical temperature is directly proportional to the momentum dissipation strength in the O_1 theory. Similar results were obtained in <cit.> and <cit.> numerically in the regime beyond the probe limit (see Figure 1 in <cit.> and <cit.>).

Next, we consider the O_2 theory in each background geometry. In the bald black hole, we employ the simple trial function with one parameter,

F_α = z(1 + α z).

In this case, the curve is parameterized by k/r_h in 0 < k/r_h < √(6). In the dilatonic black hole, however, this simple trial function does not work well. We find that the following trial function with one parameter α gives reasonable results for a wider range of k/μ in this case:

F_α(z) = z(1 + α z)(1 - Pz/r_h)^{-√(3)τ/2}.

This trial function satisfies F(0)=0 and F'(0)=1, so it is suitable for studying the O_2 theory. [One can also consider F_α = (z(1 + α z)/(1 + z))(1 - Pz/r_h)^{-√(3)τ/2}, which also gives good agreement.] The exponent -√(3)τ/2 is the same as the one involved in the coupling term B(ϕ), see Eq. (<ref>). Figure <ref> shows the results for the phase boundaries in the T_c/qμ–k/qμ plane obtained by the approximation and by the numerical method, for comparison. The approximation almost agrees with the numerical result.
Unlike the O_1 theory, the curves cross the line T = k/(2√(2)π), and T_c reaches zero if τ is small. In the case of the dilatonic black hole, the curves are parameterized by P/r_h, but P/r_h can take values in 0 < P/r_h < 1 in the O_2 theory. At P/r_h = 0, the result does not depend on τ and is located at (k/qμ, T_c/qμ) ≈ (0.41, 0.049) on the dotted line. For large enough τ, T_c depends linearly on k, but the coefficient is different from Eq. (<ref>). For τ = 1/√(3), we numerically obtain T_c ≈ 0.0193k for large k. Such behavior in T < k/(2√(2)π) is qualitatively the same as that studied in Ref. <cit.>.

Finally, we show the correct phase boundaries for each theory. Combining the results in Fig. <ref> for the O_1 theory and Fig. <ref> for the O_2 theory, we obtain the phase boundaries shown in Fig. <ref>. We have switched the background geometry used, and the corresponding results, at the critical value T = k/(2√(2)π), shown as the dotted line. Note that the purple curves in T > k/(2√(2)π) are independent of the choice of τ because τ represents a coupling with the dilaton. For the O_1 theory, we employed only the purple curve corresponding to the bald black hole because all curves are located in T > k/(2√(2)π). The result for the O_2 theory corresponds to the fully backreacted result in Ref. <cit.>. We also provide further comparisons with the full analysis in Appendix <ref>.

§.§ Condensation

In the superconducting phase, the charged scalar develops a nontrivial profile below T = T_c. The scalar profile is obtained by solving the nonlinear ODEs for Φ and A_t. We conclude this section by showing the numerical results for the condensates in the O_1 and O_2 theories. Here, we set q=1, but the choice of q does not affect the results in the probe limit since it can be scaled out.

Figure <ref> shows the condensates O_i as functions of T for the O_i theories. We set τ=0 for simplicity. To obtain this figure, we have changed the background geometry used at the phase transition point T = k/(2√(2)π). We show this point as a small circle on the curve, if it exists below the superconducting phase transition point. The condensate O_1 grows as the temperature decreases, whereas O_2 remains finite as T→0. In particular, there is a lower bound for the possible T in the O_1 theory. This behavior is the same as that studied in <cit.>, and is a limitation of the probe-limit analysis. [In the O_2 theory with τ=0, the T→0 limit is possible.] Since the lowest temperature is always larger than T = k/(2√(2)π) for the O_1 theory, the results in the bald black hole, denoted as solid curves, are favored. We expect that the low-temperature divergence of O_1 will be cured by going beyond the probe limit, similarly to Ref. <cit.>. One can see that O_1 is enhanced by increasing k for fixed μ. On the other hand, O_2 is suppressed as k increases. These behaviors of the condensates are consistent with the relation between T_c and k for each theory, shown in Fig. <ref>.
We also provide an approximate formula for the AC conductivity in the superconducting phase of our model.In order to compute the AC conductivity, we need to solve Eq. (<ref>) for the following perturbation ansatzA = A_t(r) dt + e^-i ω tA_x(r) dx,where A_x(r) is a small perturbation field. Linearizing Eq. (<ref>) in A_x(r), we obtainA_x^''(r)+(f^'(r)/f(r)+ϕ^'(r)/√(3))A_x^'(r)+(ω^2/f(r)^2-2q^2e^-ϕ(r)/√(3)/f(r)Φ(r)^2)A_x(r)=0.In the z-coordinate, it becomesA_x^''(z)+(2/z+f^'(z)/f(z)+ϕ^'(z)/√(3))A_x^'(z)+r_h^2(ω^2-2e^-ϕ(z)/√(3)q^2f(z)Φ(z)^2)/z^4f(z)^2A_x(z)=0.To obtain the physical AC conductivity, we impose the infalling-wave boundary condition at the black hole horizon. From the equation of motion, the infalling-wave solution must have the near-horizon behaviorA_x(z) = ( 1 - z )^-iω/4π T G(z),where G(z) is a regular function at the horizon z=1. According to the prescription of the AdS/CFT correspondence at finite temperature <cit.>, we can compute the AC conductivity asσ(ω) = - 1/iωlim_r→∞r^2 ∂_r A_x(r)/A_x(r) = r_h/iωlim_z→0∂_z A_x(z)/A_x(z).In the following, we study the AC conductivity in our model by using both numerical and approximate analytical methods in each phase. §.§ AC conductivity in the normal phaseIn our model, the AC conductivity in the normal phase can be written in the closed analytic form studied in Ref. <cit.>. We briefly review the result. The normal phase is described by Φ(r) = 0, and Eq. (<ref>) then becomesA_x^''(r)+(f^'(r)/f(r)+ϕ^'(r)/√(3))A_x^'(r)+ω^2/f(r)^2A_x(r)=0.Using the dilatonic black hole geometry (<ref>), the equation has regular singular points at r=± r_h and P. The solution is given byA_x(r) = C_0( r - r_h/r + r_h)^-iω/4π T_2F_1( ã, b̃; c̃; x̃ (r - r_h)/(r + r_h)),where C_0 is a normalization constant, andã = -iω/2r_h( 1/√(1-P/r_h) -1/√(1+P/r_h)),b̃ = -iω/2 r_h( 1/√(1-P/r_h) +1/√(1+P/r_h)), c̃ = 1-iω/r_h/√(1-P/r_h),x̃ = (P+r_h)/(P-r_h).Note that this solution is exactly the same as that obtained in Ref. <cit.> for the 3-charge black hole. The expression can be recast in other forms by using identities of the hypergeometric function. Using Eq. (<ref>), the AC conductivity is obtained asσ(ω) = 1/√(1-P/r_h) - r_h/i ω2 ãb̃x̃/c̃×_2F_1( 1+ã, 1+b̃; 1+c̃; x̃ ) /_2F_1( ã, b̃; c̃; x̃ ) .We show the results of Eq. (<ref>) in some cases in Fig. <ref>. From this expression, we can read off the DC conductivity asσ_DC = lim_ω→0σ(ω) = 1/√(1-P/r_h) = k/(2√(2)π T).The result agrees with the DC conductivity obtained in Refs. <cit.> with vanishing chemical potential, i.e., in the neutral limit. To obtain the ω-dependent AC conductivity and the T-dependent DC conductivity, the nonzero dilaton and the coupling between the dilaton and the Maxwell field are important <cit.>. If the theory has S-duality, the conductivity becomes constant, as in the bald black hole that we discuss next. For more details about this point, see Ref. <cit.> and section 3.4.6 of Ref. <cit.>. We can say that the presence of the nonzero dilaton plays a significant role in the emergence of the linear-T resistivity here.
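As a quick numerical check of the closed-form expression above, the following minimal sketch evaluates σ(ω) with mpmath for an illustrative background point (the values r_h=1 and P/r_h=-0.5 are sample choices, not taken from the paper); the ω→0 limit reproduces σ_DC = 1/√(1-P/r_h).

```python
from mpmath import mp, mpc, sqrt, hyp2f1

mp.dps = 25
rh, P = 1.0, -0.5      # sample point in the dilatonic (P < 0) family

def sigma(omega):
    w = mpc(omega)
    a = -1j*w/(2*rh)*(1/sqrt(1 - P/rh) - 1/sqrt(1 + P/rh))
    b = -1j*w/(2*rh)*(1/sqrt(1 - P/rh) + 1/sqrt(1 + P/rh))
    c = 1 - 1j*w/rh/sqrt(1 - P/rh)
    x = (P + rh)/(P - rh)
    return (1/sqrt(1 - P/rh)
            - rh/(1j*w)*2*a*b*x/c*hyp2f1(1 + a, 1 + b, 1 + c, x)/hyp2f1(a, b, c, x))

print(sigma(1e-8))             # -> ~ 1/sqrt(1 - P/r_h) = sigma_DC
print(1/sqrt(1 - P/rh))        # sigma_DC for comparison
print(sigma(2.0))              # a generic frequency
```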
The above result is valid only while the dilatonic black hole (<ref>) is the true ground state at low temperature. If the bald black hole (<ref>) becomes the ground state, the infalling-wave solution for A_x in this geometry is obtained asA_x(z) = C_0( 1 - z )^-iω/4π T( 2 + 2 z + (2 - k̃^2) z^2 )^i ω/8 π T(i √(3- 2 k̃^2) - 1 -(2- k̃^2)z / i √(3- 2 k̃^2) + 1 +(2- k̃^2)z )^λ', λ' = 3-k̃^2/√(3 - 2 k̃^2)ω/8π T,where k̃ = k/r_h, and T is given by Eq. (<ref>) for the bald black hole. In this case, however, one obtains σ(ω) = 1 for any ω, so the DC conductivity is also σ_DC = 1. As a result, the DC resistivity behaves asρ_DC = 1/σ_DC = { 0, T≤ T_c; 2√(2)π T/k, T_c < T ≤ k/(2√(2)π); 1, k/(2√(2)π) < T }.If T_c is larger than k/(2√(2)π), the middle regime disappears. ρ_DC does not jump at T=k/(2√(2)π) because Eq. (<ref>) becomes 1 there. The behavior of Eq. (<ref>) actually corresponds to the result of the full analysis with backreactions <cit.>.§.§ Numerical results for the AC conductivity in the SC phase In the superconducting phase, the AC conductivity must in general be computed numerically. With the benefit of the simplifications of the probe limit, we can easily compute it by the standard procedure in holography. Figure <ref> shows the AC conductivity in the O_2 theory with τ=0 for k/μ=0.5 and various T near T_c. Note that T_c<k/(2√(2)π) in this case, so we can use the dilatonic black hole as the background geometry. In the superconducting phase, below T_c, we observe a 1/ω dependence in the imaginary part of the AC conductivity, which implies a delta peak in the real part.Figure <ref> shows the AC conductivity in the O_1 theory in the dilatonic black hole. Note that the true ground state is the bald black hole since T > k/(2√(2)π); we show this plot to illustrate how the AC conductivity behaves if the dilatonic black hole is favored. For comparison with the approximation explored later, we have scaled ω by O_1 here. At low temperature, the real part of the conductivity exhibits a gap-like structure. This behavior is similar to that in the original model of the holographic superconductor.Another interesting observable in the superconducting phase is the superfluid density n_s, which is also called the phase stiffness. This quantity can also be understood as the order parameter of the superconducting phase. It can be read off from the low-frequency expansion of the AC conductivity as [ n_s defined in (<ref>) may be called the phase stiffness rather than the superfluid density in some studies. In fact, the dimension of n_s is 1, which is different from the dimension of a number density. The number density of the superfluid component, ρ_s, might be given by n_s = ρ_s/m^*, where m^* denotes the effective mass of the charged carrier. In the holographic superconductor models, one may take m^*=μ. See, e.g., section 6.4.1 of Ref. <cit.> for more details. ]σ(ω) = π n_s δ(ω) + i n_s/ω + 𝒪(ω).Note that the real and imaginary parts of the conductivity are related to each other by causality. [ This is known as the Kramers-Kronig relation:Im[σ(ω)]=-1/π𝒫∫_-∞^∞dω^' Re[σ(ω^')]/ω^'-ω,Re[σ(ω)]= 1/π𝒫∫_-∞^∞dω^' Im[σ(ω^')]/ω^'-ω,where 𝒫 denotes the Cauchy principal value of the integral. ] Figure <ref> shows the numerical results for n_s as functions of T for various k. The behavior of n_s is similar to that of ⟨ O_i⟩ at low temperatures; see Fig. <ref>. Unlike Fig. <ref>, the numerical results show that n_s(T) behaves linearly near T=T_c.In the O_2 theory with τ = 0, n_s remains finite in the vicinity of T=0 even in the probe-limit analysis. Figure <ref> shows the relation between n_s at T=0 and T_c in the O_2 theory with τ=0. We find that n_s(0) and T_c have an approximately linear relation, as shown in this figure. The relation can be fitted byn_s(0) ∼ 0.159 T_c + 0.00778 μ,which is shown as the dashed line in Fig. <ref>.
A similar scaling relation is known as the Uemura relation for underdoped materials <cit.>. It was also observed on the overdoped side of copper oxides <cit.>. On the other hand, there is a universal scaling law between n_s(0) and T_c known as Homes' law, given by n_s(0) ∝σ_DC(T_c)× T_c in unconventional superconductors <cit.>. Our result shows a linear relation between n_s(0) and T_c, but σ_DC given by Eq. (<ref>) would have to enter here; hence, Homes' law does not hold. We should note that we are working in the probe limit. A test of Homes' law in the fully backreacted setup of the Gubser-Rocha model was carried out in <cit.>.We have to remark on some points about Fig. <ref>. Firstly, the exact zero-temperature limit is actually subtle in our geometry because the locations of the horizon and the singularity coincide there. We therefore read off n_s(0) from the numerical AC conductivity at the parameter value P/r_h = 0.99999, since P/r_h=1 corresponds to T=0 from Eq. (<ref>). The actual data are obtained as low-temperature solutions of order T/T_c∼ 10^-3. Secondly, for small T_c, T_c(k) becomes a multivalued function of k in a very narrow range of k. In this case, it is unclear which n_s(0)|_k corresponds to which T_c(k). Because of this difficulty, we do not show the small-T_c data in Fig. <ref>. Finally, there is the phase transition of the background spacetime. In Fig. <ref>, the red horizontal line indicates the critical value corresponding to T=k/(2√(2)π). Above this line, the bald black hole (<ref>) is the ground state. The data points in Fig. <ref> take this phase transition into account: T_c/μ is obtained using the true ground state, while n_s(0), being measured in the low-temperature limit, can be computed using the dilatonic black hole (<ref>).§.§ Approximation for the AC conductivity in the SC phaseIn this section, we explore an approximate formula for the AC conductivity in the superconducting phase with the dilatonic black hole (<ref>). For this purpose, we employ the method developed in Ref. <cit.>. [ We also present another approximate formula in Appendix <ref>. ]We focus on the O_1 theory. In this case, Φ is written asΦ(z)=⟨ O_1 ⟩/√(2)z/r_hF(z),satisfying F(0)=1 and F'(0)=0. In the superconducting phase, the vector perturbation equation is given by Eq. (<ref>). Rewriting A_x(r)=e^-ϕ(r)/2√(3)B_x(r), we obtainB_x^''(r)+f^'(r)/f(r)B_x^'(r)+ω^2/f(r)^2B_x(r)=[2q^2e^-ϕ(r)/√(3)/f(r)Φ(r)^2+1/2√(3)(f^'(r)/f(r)ϕ^'(r)+ϕ^'(r)^2/2√(3)+ϕ^''(r))]B_x(r).Now, we consider the tortoise coordinater_*=∫_r_hdr/f(r)=1/(2r_h√(1-P/r_h))ln|r-r_h|-√(1-P/r_h)/(4(P-r_h)^2)(r-r_h)+⋯.In this coordinate, the horizon is located at r_*=-∞, and the boundary is located at r_*=0. Note that r_* can be written as r_*≈ -z/r_h near the boundary. Using this coordinate, Eq. (<ref>) can be written asB̈_x(r_*)+ω^2B_x(r_*)=V(r)B_x(r_*),where the dot denotes ∂_r_*, and V is given byV(r)=2q^2e^-ϕ/√(3)fΦ^2+f^2/2√(3)(f^'/fϕ^'+ϕ^'^2/2√(3)+ϕ^'').The values of the potential at the horizon and at the boundary areV(r_h)=0,V(∞)≡ V_∞= q^2⟨O_1⟩^2+3P^2/16,respectively.Figure <ref> shows the shapes of the potential for various T/O_1 in the O_1 theory.To solve (<ref>), we utilize the approximate method developed in <cit.>. We consider replacing V(r) in Eq. (<ref>) with a `mean value' denoted by V̅, which is an r-independent constant. Assuming ω^2 > V̅, we obtain the infalling-wave solution asB_x(r_*)=Cexp( i√(ω^2-V̅)r_* ),where C is a normalization constant.
The `mean value' is defined byV̅=∫_-∞^0dr_*V(r)|B_x(r_*)|^2/∫_-∞^0dr_*|B_x(r_*)|^2.The integrals diverge due to the infinite volume of the integration region; they can be regularized by introducing a large cutoff. [ In <cit.>, the regularization was instead implemented by giving ω a small imaginary part. ] In this study, however, we keep only the leading contribution at r=∞:V̅ = V_∞ = q^2 ⟨O_1⟩^2 + 3P^2/16.This choice is valid in the regime r_h ≪O_1. Using this result, A_x is approximated asA_x(r_*)≈ Cexp(i√(ω^2-q^2⟨ O_1⟩^2-3P^2/16) r_*-ϕ(r)/2√(3)).According to the standard prescription of the AdS/CFT correspondence <cit.> and the Kubo formula, the conductivity is obtained as σ(ω)≈ iP/(4ω)+√(1-(3P^2/16+q^2⟨ O_1⟩^2)/ω^2).If P=0, the above result reduces to the approximate formula in the original model of the holographic superconductor, which is given by <cit.>σ(ω) ≈√(1- q^2 ⟨O_1⟩^2/ω^2).Although the infalling boundary condition is only valid for ω > √(V̅), the approximate formula fits well even in the small-ω region. Expanding Eq. (<ref>) in small ω, we obtain the superfluid density asn_s≈ P/4+√(3P^2/16+q^2⟨ O_1⟩^2) , for T≪⟨ O_1⟩.Note that P is written in terms of k and T as P = k/√(2) - 4√(2)π^2 T^2/k. Thus, n_s is a function of T, k and O_1. For comparison, we show Eqs. (<ref>) and (<ref>) together with the numerical result, for a specific background solution, in Fig. <ref>. The background solution is parameterized by P/r_h = -2, and its temperature is approximately T/T_c = 0.60. Although the temperature is not very low, the AC conductivity roughly exhibits a gap. In this case, the gap is located at approximately ω≈O_1, and the curves for Eq. (<ref>) and Eq. (<ref>) almost overlap. Eq. (<ref>) also gives n_s/O_1≈ 0.9521, while the numerical result gives n_s/O_1≈ 0.9710. From Eq. (<ref>), the superfluid density reads n_s/O_1 = 1. Indeed, Eq. (<ref>) can be regarded as a better approximation than Eq. (<ref>), but the correction is very small and difficult to see in Fig. <ref>. Note that Fig. <ref> corresponds to a solution above T=k/(2√(2)π), so the solution with the dilatonic black hole is actually not favored there. We could not obtain the superconducting solution in T<k/(2√(2)π), where the dilatonic black hole is the ground state, for the O_1 theory.
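The approximate conductivity and the superfluid density that follows from it are simple enough to evaluate directly. The sketch below is minimal and uses illustrative values of q, ⟨O_1⟩, r_h, k and T (chosen only so that r_h ≪ ⟨O_1⟩, as the approximation assumes); it evaluates σ(ω) above and below the gap and the corresponding n_s.

```python
import numpy as np

q, O1, rh = 1.0, 10.0, 1.0          # illustrative values with <O_1> >> r_h
k, T = 2.0, 0.02
P = k/np.sqrt(2) - 4*np.sqrt(2)*np.pi**2*T**2/k    # P as a function of k and T

def sigma_approx(w):
    # sigma(omega) ~ iP/(4 omega) + sqrt(1 - (3P^2/16 + q^2 <O_1>^2)/omega^2)
    return 1j*P/(4*w) + np.sqrt(1 - (3*P**2/16 + q**2*O1**2)/w**2 + 0j)

n_s = P/4 + np.sqrt(3*P**2/16 + q**2*O1**2)        # low-frequency coefficient
print(n_s/O1)                                       # -> close to 1 for P << <O_1>
print(sigma_approx(2*O1))                           # above the gap: mostly real
print(sigma_approx(0.1*O1))                         # below the gap: mostly imaginary
# Setting P = 0 recovers the formula sqrt(1 - q^2 <O_1>^2 / omega^2) quoted above.
```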
§ DISCUSSION AND CONCLUSIONIn order to explore the scaling relation, we have studied the holographic superconductor known as the Gubser-Rocha model in the probe limit. We have shown several properties of this holographic superconductor model, e.g., the critical temperature T_c, the condensates O_i, and the AC conductivity σ(ω) in both phases. In the analysis of the critical temperature, we have found the scaling relation (<ref>) between T_c and k for the O_1 theory. We have also studied the relation between T_c and the superfluid density n_s at zero temperature in the O_2 theory; they exhibit a linear relation similar to recent experiments on high-T_c superconductors <cit.>. We have also derived an approximate formula for the AC conductivity in the O_1 theory. The linear relation between T_c and the zero-temperature superfluid density n_s(0) is known as the Uemura relation <cit.>, which works reasonably well for underdoped materials. Recent experiments also showed a similar relation in overdoped materials <cit.>. Our result may imply that the O_2 theory with τ=0 in our model can describe such cuprates. On the other hand, Homes' law <cit.> is known as a universal scaling relation that works regardless of whether the material is overdoped or underdoped. This relation is expressed as n_s(0) ∝σ_DC(T_c) T_c. Homes' law involves the normal-phase DC conductivity at T=T_c, which is inaccurate in the probe limit; thus, this universal relation is not suitable for checking the model within the probe-limit analysis. Homes' law in the holographic superconductor model was tested in Ref. <cit.> in an analysis with backreactions. There it was concluded that Homes' law holds in the O_2 theory with sufficiently large τ in this model. We have to note that the material studied in Ref. <cit.> violates Homes' law, as discussed in Ref. <cit.>. In this study, we focused on the superfluid density, but the behavior of the normal fluid density would also be important in high-T_c superconductors. Recently, it was revealed that the normal fluid density can survive even in the zero-temperature limit in a specific class of holographic superconductors <cit.>. This behavior differs from BCS theory, in which the normal fluid density vanishes at zero temperature. According to <cit.>, the zero-temperature normal fluid density remains for z>d+2, where z is the dynamical exponent and d+1 is the spacetime dimension of the boundary theory. Since the Gubser-Rocha black hole corresponds to z=∞, our model seems to fall into this class. In our setup, the strength of the momentum dissipation (or disorder) k enters as an additional parameter of the system compared with the basic holographic superconductor (see the review <cit.>). Although the physical role of k is obvious, namely that it breaks the translational symmetry explicitly (in the full analysis with backreactions), the corresponding quantity in experiments is still unclear. Translational symmetry is usually broken in real materials, but its strength varies as other parameters change. In experiments on the cuprates, the doping p plays a significant role: cuprates typically become superconductors in a specific range of hole doping, so the phase diagram has a dome shape along the p-axis. It is natural to expect that a change in the doping also induces a change in k. Another parameter that can introduce momentum dissipation is the strength of defects, which can be controlled by, e.g., irradiation. Note that translational invariance can also be broken by considering a spatial profile of the chemical potential μ(x) <cit.>. In <cit.>, a random profile of μ(x) was considered to introduce charged disorder in the probe limit. The presence of k might be understood as an analogue of these setups, but it is independent of the charge. It would be interesting and significant to identify the precise relation between k and experimental parameters. We have to remark that there are limitations caused by taking the probe limit. Firstly, the conductivity in the normal phase always remains finite, even if we set k=0, because the translational symmetry has already been broken by taking the probe limit. Secondly, the low-temperature results often diverge and are not accurate. In the O_1 theory, this problem at low temperatures is serious: we could not obtain the superconducting-phase solution below T=k/(2√(2)π) in the O_1 theory. We expect there is a mathematical reason for this problem, but it remains unclear.
The above problems would be cured by taking the backreactions into account. In Appendix <ref>, we have numerically checked that the phase boundary obtained in the probe-limit analysis agrees with the full result at large q with the appropriate scaling. While the probe limit involves the above problems, it makes the analysis of the model much easier.The neutral black hole solution (<ref>) can be utilized for studying the Gubser-Rocha superconductor model in the probe limit, as we have shown throughout this paper. As far as we know, this is the first study considering the probe limit in this model. [ The calculation of the AC conductivity with this black hole solution was studied in <cit.>. The same neutral geometry with vanishing k was also mentioned in Appendix B of <cit.>. ] It would be interesting to use this background solution to study more complicated setups that are hard to treat in the full analysis with backreactions, e.g., spatially dependent solutions <cit.> or the time evolution of the system <cit.>. The running dilaton can lead to results different from those in the standard setup of the holographic superconductor in the SAdS_4 spacetime. However, we have to keep in mind that the dilatonic black hole is the true ground state only for T<k/(2√(2)π).It would also be interesting to study the effects of a magnetic field in our model. The dyonic black hole geometry in the Gubser-Rocha model corresponds to the normal-phase vacuum, which was studied recently in Ref. <cit.>. In the probe limit, however, we can still use our neutral black hole geometry because the Maxwell fields are decoupled from gravity; the corresponding studies in SAdS_4 can be found in Refs. <cit.>. The effect of the external magnetic field then enters the system only through the Maxwell-scalar sector. It is expected that the phase boundary and the condensates will be deformed by the magnetic field. On the other hand, a dynamical U(1) gauge field plays a crucial role in studying several magnetic-field effects <cit.>. It is known that the holographic superconductor model with a dynamical U(1) gauge field describes a type-II superconductor. The penetration depth measured in the presence of the dynamical U(1) gauge field would be a significant quantity that can be compared with experimental results more directly. We leave these for future studies.§ ACKNOWLEDGMENTSWe would like to thank Sang-Jin Sin, Matteo Baggioli, Xinmao Yin, Hui Xing, and Di-fan Zhou for their helpful comments. We are grateful to Hyun-Sik Jeong for suggestive comments, especially about the thermodynamic stability of the background black hole geometry. This work is partly supported by NSFC, China (Grant No. 12275166 and No. 12311540141). § CHECKING THE CORRECTNESS OF THE PROBE LIMIT In this appendix, we compare the T_c–k curve obtained by the probe-limit analysis with those obtained for general q. Firstly, we briefly review the charged black hole solutions of our model (<ref>) and their thermodynamic stability.
From the full action (<ref>), the equations of motion are obtained as∇_μ(e^ϕ/√(3)F^μν)-iqΦ^*(∂^ν-iqA^ν)Φ+iqΦ(∂^ν+iqA^ν)Φ^*=0, ∇^2ϕ-1/4√(3)e^ϕ/√(3)F^2+2√(3)sinh(ϕ/√(3))-B^'(ϕ)Φ^2=0, D_μD^μΦ-B(ϕ)Φ=0,∇^2ψ_I=0.The Einstein equation isR_μν- 1/2g_μν[R-1/4e^ϕ/√(3)F^2-1/2(∂ϕ)^2+6 cosh(ϕ/√(3))-1/2∑_I=1^2(∂ψ_I)^2-DΦ^2-B(ϕ)Φ^2]=1/2e^ϕ/√(3)F_μδF_ν^ δ+1/2∂_μϕ∂_νϕ+1/2∑_I=1^2(∂_μψ_I∂_νψ_I)+1/2(D_μΦ D_ν^*Φ^*+D_νΦ D_μ^*Φ^*).The normal-phase solution is the following charged dilatonic black hole <cit.>:ds^2 = - f(r)dt^2 + dr^2/f(r) + h(r) (dx^2 + dy^2),with h(r) = r^2 (1-P/r)^1/2, f(r) = h(r) [ 1 - k^2/(2 r^2) - r_h^3/r^3( 1 - k^2/(2 r_h^2)) ], ϕ(r) = - √(3)/2ln( 1 - P/r),Φ = 0, ψ_I = k x^I, A = √(3 P r_h ( 1 - k^2/(2 r_h^2)))(1-r_h/r) dt, where r_h is the location of the black hole horizon and P is a physical parameter. Unlike Eq. (<ref>), r_h and k are independent parameters here. Note that this is the same solution as that in Ref. <cit.>, but in a different coordinate system. Writing the radial coordinate of <cit.> as r̃, the relation between our conventions and theirs is r̃+Q = r, r̃_h+Q = r_h and Q = P. The Hawking temperature and the chemical potential are related to the parameters byT = 1/8π(6 r_h^2 - k^2)/r_h√(1 - P/r_h),μ = √(3 P r_h (1-k^2/(2r_h^2))).One can see that the neutral limit of Eq. (<ref>) is achieved by setting r_h = k/√(2), whereas P=0 leads to Eq. (<ref>). P/r_h→ 1 corresponds to the extremal limit.For the thermodynamic analysis, the grand potential (density) of this solution is given by <cit.>Ω(μ, T; k) = - r_h^3 ( 1 + (1-P/r_h)/2 k^2/r_h^2).The grand potential depends on μ and T implicitly, via P and r_h.On the other hand, the model also admits the following solution <cit.>: h(r) = r^2, f = r^2 [ 1 - k^2/(2r^2) + r_h^2 μ^2/(4 r^4) - r_h^3/r^3( 1 - k^2/(2r_h^2) + μ^2/(4 r_h^2)) ],A = μ(1 - r_h/r) dt,ϕ = Φ = 0,ψ_I = k x^I, where μ, k and r_h are integration constants that parameterize the family of solutions, and μ is directly read off as the chemical potential. The temperature is given byT = r_h/4π(3 - k^2/(2 r_h^2) - μ^2/(4 r_h^2)).One can see that μ=0 leads to the neutral solution of Eq. (<ref>). The grand potential of this solution is given by <cit.>Ω(μ, T; k) = -r_h^3 ( 1 + k^2/(2 r_h^2) + μ^2/(4 r_h^2)),which again depends on T implicitly.Let us now consider which solution and parameter range correspond to the ground state of the normal phase. From Eq. (<ref>), there are two distinct parameter regions yielding T,μ >0 for the dilatonic black hole: (i) 0<P<r_h and 0< k< √(2) r_h; (ii) P < 0 and √(2) r_h < k < √(6) r_h. [ In terms of the original parameter r̃_h, the ranges of these parameter regions become more complicated. ] Each patch covers the entire region of the physical parameter space, e.g., (k/T,μ/T). For the bald black hole, k<√(6 r_h^2 - μ^2/2) follows from Eq. (<ref>). To determine the ground state, we need to compare the grand potentials among these cases. This was investigated in <cit.>, and the answer is that the P>0 patch of the dilatonic black hole is always the ground state. Figure <ref> shows the competition of the grand potentials among these solutions for fixed μ/T = 5.0. Ω_bald denotes Eq. (<ref>), while Ω_P>0 and Ω_P<0 denote Eq. (<ref>) in the P>0 and P<0 patches, respectively. The curves exhibit Ω_bald - Ω_P<0 <0, Ω_P>0 - Ω_P<0 < 0 and Ω_P>0 - Ω_P<0 <Ω_bald - Ω_P<0, which implies Ω_P>0 < Ω_bald < Ω_P<0. Therefore, we conclude that the P>0 patch is the ground state, i.e., it is thermodynamically favored <cit.>.
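This comparison is straightforward to reproduce numerically. The sketch below is minimal: the target values μ/T = 5.0 and k/T = 1.0 and the initial guesses are illustrative choices, and the root finding assumes the guesses remain inside each patch. It evaluates the dimensionless combination Ω/T^3 on the three branches at matched (μ/T, k/T), using the expressions displayed above with r_h = 1.

```python
import numpy as np
from scipy.optimize import fsolve

mbar, kbar = 5.0, 1.0        # target mu/T and k/T (illustrative)

def dilatonic_TmuOmega(P, k, rh=1.0):
    T = (6*rh**2 - k**2)/(8*np.pi*rh)*np.sqrt(1 - P/rh)
    mu = np.sqrt(3*P*rh*(1 - k**2/(2*rh**2)))
    Om = -rh**3*(1 + 0.5*(1 - P/rh)*k**2/rh**2)
    return T, mu, Om

def residuals(x):            # match (mu/T, k/T); mu^2 avoids sqrt of negatives
    P, k = x
    T = (6 - k**2)/(8*np.pi)*np.sqrt(1 - P)
    mu2 = 3*P*(1 - k**2/2)
    return [mu2/T**2 - mbar**2, k/T - kbar]

for guess in [(0.3, 0.2), (-100.0, 1.5)]:     # P>0 and P<0 patches
    P, k = fsolve(residuals, guess)
    T, mu, Om = dilatonic_TmuOmega(P, k)
    print(f"dilatonic P = {P:+9.3f}: Omega/T^3 = {Om/T**3:9.2f}")

# Bald branch: with mu = mbar*T and k = kbar*T, T solves a quadratic equation.
A = kbar**2/2 + mbar**2/4
T = (-4*np.pi + np.sqrt(16*np.pi**2 + 12*A))/(2*A)
Om = -(1 + (kbar*T)**2/2 + (mbar*T)**2/4)
print(f"bald                 : Omega/T^3 = {Om/T**3:9.2f}")
# Expected ordering (cf. the text): Omega_{P>0} < Omega_bald < Omega_{P<0}.
```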
We have checked that this behavior of the grand potentials does not change for other choices of μ/T.Now, we consider the linear perturbation of the charged scalar around the normal-phase solution. The form of the equation of motion for the charged scalar is unchanged from (<ref>) in the probe-limit analysis. In the backreacted case, however, the equation can no longer be written in the form of the SL problem, so we solve it numerically to find T_c (or μ_c). In general, we can obtain multiple eigenvalues and eigenfunctions; the solution with no node is regarded as the most relevant to the instability.Figures <ref> and <ref> show the phase boundaries in the T/qμ–k/qμ plane for various q in each theory. The results for q=∞ denote the results obtained by using the neutral solutions (<ref>) and (<ref>), i.e., the results in the probe limit. We set τ=0 here for simplicity. As we have mentioned, there are two patches giving reasonable results satisfying T>0 and μ>0. Although the P>0 patch is the ground state, we show the results for both patches here. In both patches, we have checked that the corresponding eigenfunctions do not have any node in their radial profile. In Fig. <ref>, one can see that, in the O_1 theory, the result of the P<0 patch agrees with the result in the neutral dilatonic black hole (<ref>) for large q, while the result of the P>0 patch agrees with the result in the neutral bald black hole (<ref>) for large q. Thus, the ground-state results in the charged case, i.e., those of the P>0 patch, correspond to the ground-state results in the neutral limit; this is consistent. A similar observation can be made in Fig. <ref> for the O_2 theory: following the true ground state, the result of the P>0 patch approaches the result in the neutral bald solution above T=k/(2√(2)π), while it approaches the result in the neutral dilatonic solution below T=k/(2√(2)π). Note that the P>0 curve for q=0 corresponds to one of the results shown in Ref. <cit.>. These direct comparisons imply that, to obtain probe-limit results that agree with the full analysis computed in the true ground state at large q, we need to take the phase transition of the background black hole into account in the probe limit.§ LOW TEMPERATURE APPROXIMATION FOR THE AC CONDUCTIVITY IN THE SC PHASEIn this appendix, we show another approximation for the AC conductivity in the superconducting phase, following the method studied in <cit.>. Unfortunately, however, it does not provide useful results in our model.In the superconducting phase, to study the AC conductivity, we need to solve Eq. (<ref>). Imposing the infalling boundary condition at the horizon, we writeA_x(z)=(1-z)^-iω/4π TG(z),where G(z) is a regular function at z=1. G(z) can be expanded asG(z)=α_0+α_1(1-z)+α_2(1-z)^2+⋯ .Substituting Eq. (<ref>) into Eq. (<ref>), we getG^''(z)+(2/z+iω/2π T(1-z)+f^'(z)/f(z)+ϕ^'(z)/√(3))G^'(z)+ 1/48z^4{-3ω z^3(4iπ T(-2+z)+ω z)/π^2T^2(-1+z)^2+ 4/f(z)^2(12ω^2r_h^2-24e^-ϕ(z)/√(3)q^2r_h^2f(z)Φ(z)^2-iω z^4f(z)(3f^'(z)+√(3)f(z)ϕ^'(z))/π T(-1+z))}G(z)=0.The equation involves Φ(z); we focus on the O_1 theory here.In order to solve Eq. (<ref>), we utilize the method developed in <cit.>. We rewrite the coordinate as z = εζ and expand the equation for small ε. We choose ε = 2π T/O_1 to consider the low-temperature limit. For small ε, Eq. (<ref>) becomesd^2G/dζ^2 -(1-P̃)G + εiω̃/√(1-P̃)dG/dζ + 𝒪(ε^2) = 0,where P̃ = P/r_h and ω̃ = ω/r_h. We truncate this equation at order ε^2.
The general solution is obtained asG = e^ -iω̃/√(1-P̃)εζ{ c_1 e^-ζ√((1-P̃) - (ε^2/4)ω̃^2/(1-P̃)) + c_2 e^ζ√((1-P̃) - (ε^2/4)ω̃^2/(1-P̃))}.In the original z coordinate, we writeG(z)≈ e^ -iω̃/√(1-P̃) z { c_1 e^+z ε^-1√(1-P̃) + c_2 e^-z ε^-1√(1-P̃)}.Here, we have dropped the higher-order ε terms in the exponents. Recalling that A_x(z)= (1-z)^-iω/4π TG(z), the conductivity is obtained asσ(ω) ≈ iO_1/ω1-c_1/c_2/1+c_1/c_2. Now, we fix the ratio c_1/c_2 by using the equation at z=1:G'(1) (1-i ω̃/√(1-P̃)) +1/16 G(1) (16 q^2 Φ(1)^2 - 4 ω̃^2 (1-2P̃)/(1-P̃)^2 - 4 i ω̃/√(1-P̃)) =0.Note that this equation involves the unknown value Φ(1). We assume that the scalar profile can be approximated by Φ(z)=O_1 z/(√(2) r_h); then Φ(1)=O_1/(√(2) r_h). Using Eqs. (<ref>), (<ref>) and (<ref>), the ratio c_1/c_2 becomes (ω→ 0)c_1/c_2≈ e^2√(1-P̃)/ε[ 1 +(2 Õ_1^2(1-P̃)^2 - i ω̃( 3 (1-P̃)^3/2 - i ω̃(3 - 4P̃) )) /(2(1-P̃)^2(-iω̃ + √(1-P̃)))ε],where Õ_1 = O_1/r_h. We have again dropped the higher-order ε terms. Plugging this into (<ref>), the conductivity is written asσ(ω) ≈i Õ_1/ω̃[ 1 - 2 e^-2 Õ_1( 1 +(2 Õ_1^2(1-P̃)^2 - i ω̃( 3(1-P̃)^3/2 - i ω̃(3 - 4 P̃) )) /(2 Õ_1 (1 - P̃)^3/2 (√(1-P̃) - i ω̃)))^-1].The low-frequency expansion isσ(ω) ≈iÕ_1/ω̃[ 1 - 2/(1+Õ_1) e^-2Õ_1 + i ω̃(2 Õ_1^2 - 3)/(Õ_1 (Õ_1 + 1)^2 √(1-P̃)) e^-2Õ_1 + 𝒪(ω̃)].The leading coefficient corresponds to the superfluid density, which can be read off asn_s ≈O_1 - 2 O_1 r_h/(O_1 + r_h) e^-2 O_1/r_h.However, the approximation is better when O_1≫ r_h, so the contribution of the second term is very small. Neglecting the second term, we obtain the same result for n_s as that read off from Eq. (<ref>).bcs J. Bardeen, L. N. Cooper, and J. R. Schrieffer, Theory of Superconductivity, Phys. Rev. 108, 1175–1204 (1957). muller J. G. Bednorz and K. A. Müller, Possible high Tc superconductivity in the Ba-La-Cu-O system, Z. Physik B - Condensed Matter 64, 189–193 (1986). mkwu M. K. Wu et al., Superconductivity at 93 K in a new mixed-phase Y-Ba-Cu-O compound system at ambient pressure, Phys. Rev. Lett. 58, 908 (1987). hiroshi H. Maeda et al., A New High-Tc Oxide Superconductor without a Rare Earth Element, Jpn. J. Appl. Phys. 27 L209 (1988). zzsheng Z. Z. Sheng and A. M. Hermann, Bulk superconductivity at 120 K in the Tl–Ca/Ba–Cu–O system, Nature 332, 138–139 (1988). jjhamlin J. J. Hamlin et al., Superconductivity in single crystals of LaFePO, J. Phys.: Condens. Matter 20, 365220 (2008). [arXiv:0806.1265 [cond-mat.supr-con]] kamihara Y. Kamihara et al., Iron-based layered superconductor La[O_1-xF_x]FeAs (x= 0.05-0.12) with Tc= 26 K, J. Am. Chem. Soc. 130, 11, 3296–3297 (2008). xhchen X. H. Chen, T. Wu, G. Wu, R. H. Liu, H. Chen and D. F. Fang, Superconductivity at 43 K in SmFeAsO_1-xF_x, Nature 453, 761–762 (2008). haihuwen H. H. Wen et al., Superconductivity at 25 K in hole-doped (La_1-xSr_x)OFeAs, Europhys. Lett. 82 (2008) 17009 (2008). [arXiv:0803.3021 [cond-mat.supr-con]] mm M. Mondal et al., Phase Fluctuations in a Strongly Disordered s-Wave NbN Superconductor Close to the Metal-Insulator Transition, Phys. Rev. Lett. 106, 047001 (2011). soumyajit S. Mandal et al., Destruction of superconductivity through phase fluctuations in ultrathin a-MoGe films, Phys. Rev. B 102, 060501 (2020). [arXiv:2003.12398 [cond-mat.supr-con]] sudhansu S. S. Mandal and T. V. Ramakrishnan, Microscopic free energy functional of superconductive amplitude and phase: Superfluid density in disordered superconductors, Phys. Rev. B 102, 024514 (2020). lulian L. Hetel, T. R. Lemberger and M.
Randeria, Quantum critical behaviour in the superfluid density of strongly underdoped ultrathin copper oxide films, Nature Phys. 3, 700-702 (2007).microwavetl2201 D. Deepwell et al., Microwave conductivity and superfluid density in strongly overdoped Tl_2Ba_2CuO_6+δ, Phys. Rev. B, 88, 214509 (2013).copperoxides I. Božović, X. He, J. Wu and A. T. Bollinger, Dependence of the critical temperature in overdoped copper oxides on superfluid density, Nature 536, 309-311 (2016). AdSCFT J. M. Maldacena, The Large N limit of superconformal field theories and supergravity, Adv. Theor. Math. Phys. 2, 231-252 (1998).[arXiv:hep-th/9711200 [hep-th]]Witten:1998qj E. Witten, Anti-de Sitter space and holography, Adv. Theor. Math. Phys. 2, 253-291 (1998).[arXiv:hep-th/9802150 [hep-th]]Gubser:1998bc S. S. Gubser, I. R. Klebanov and A. M. Polyakov, Gauge theory correlators from noncritical string theory, Phys. Lett. B 428, 105-114 (1998).[arXiv:hep-th/9802109 [hep-th]] zaanen J. Zaanen, Y. Liu, Y-W. Sun, K. Schalm,Holographic Duality in Condensed Matter Physics, (Cambridge, Cambridge University Press, 2016). ammon M. Ammon and J. Erdmenger, Gauge/Gravity Duality: Foundations and Applications.(Cambridge, Cambridge University Press, 2015).Baggioli:2019rrs M. Baggioli, Applied Holography: A Practical Mini-Course, (Springer, 2019). [arXiv:1908.02667 [hep-th]] Cai:2015cya R. G. Cai, L. Li, L. F. Li and R. Q. Yang, Introduction to Holographic Superconductor Models, Sci. China Phys. Mech. Astron. 58, no.6, 060401 (2015)[arXiv:1502.00437 [hep-th]]bhs S. A. Hartnoll, C. P. Herzog, G. T. Horowitz, Building an AdS/CFT superconductor, Phys. Rev. Lett. 101, 031601 (2008). [arXiv:0803.3295 [hep-th]]Hartnoll:2008kx S. A. Hartnoll, C. P. Herzog and G. T. Horowitz, Holographic Superconductors, JHEP 12, 015 (2008).[arXiv:0810.1563 [hep-th]]Siopsis:2010uq G. Siopsis and J. Therrien, Analytic Calculation of Properties of Holographic Superconductors, JHEP 05, 013 (2010).[arXiv:1003.4275 [hep-th]]GaryG. T. Horowitz and M. M. Roberts, Zero Temperature Limit of Holographic Superconductors, JHEP 11, 015 (2009). [arXiv:0908.3677 [hep-th]] Ge:2010aa X. H. Ge, B. Wang, S. F. Wu and G. H. Yang, Analytical study on holographic superconductors in external magnetic field, JHEP 08, 108 (2010).[arXiv:1002.4901 [hep-th]]Franco:2009yz S. Franco, A. Garcia-Garcia and D. Rodriguez-Gomez, A General class of holographic superconductors, JHEP 04, 092 (2010).[arXiv:0906.1214 [hep-th]]Cai:2010cv R. G. Cai, Z. Y. Nie and H. Q. Zhang, Holographic p-wave superconductors from Gauss-Bonnet gravity, Phys. Rev. D 82, 066007 (2010).[arXiv:1007.3321 [hep-th]]Roychowdhury:2012hp D. Roychowdhury, Effect of external magnetic field on holographic superconductors in presence of nonlinear corrections, Phys. Rev. D 86, 106009 (2012).[arXiv:1211.0904 [hep-th]]Gangopadhyay:2013qza S. Gangopadhyay, Holographic superconductors in Born-Infeld electrodynamics and external magnetic field, Mod. Phys. Lett. A 29, 1450088 (2014).[arXiv:1311.4416 [hep-th]]Erdmenger:2013zaa J. Erdmenger, X. H. Ge and D. W. Pang, Striped phases in the holographic insulator/superconductor transition, JHEP 11, 027 (2013).[arXiv:1307.4609 [hep-th]]Ge:2015fmu X. H. Ge, S. J. Sin and S. F. Wu, Universality of DC Electrical Conductivity from Holography, Phys. Lett. B 767, 63-68 (2017).[arXiv:1512.01917 [hep-th]]Sheykhi:2016kqh A. Sheykhi, H. R. Salahi and A. 
Montakhab, Analytical and Numerical Study of Gauss-Bonnet Holographic Superconductors with Power-Maxwell Field, JHEP 04, 058 (2016).[arXiv:1603.00075 [gr-qc]]Qiao:2020hkx X. Qiao, L. OuYang, D. Wang, Q. Pan and J. Jing, Holographic superconductors in 4D Einstein-Gauss-Bonnet gravity, JHEP 12, 192 (2020).[arXiv:2005.01007 [hep-th]]Zhao:2023qms Z. Zhao, W. Cai and S. Ishigaki, Doped Holographic Superconductors in Gubser-Rocha model, (2023). [arXiv:2309.14851 [hep-th]] yuan J. Yuan et al. Scaling of the strange-metal scattering in unconventional superconductors. Nature 602, 431–436 (2022). jiang X. Y. Jiang et al. Interplay between superconductivity and the strange-metal state in FeSe. Nat. Phys. 19, 365–371 (2023).Phillips:2022nxs P. W. Phillips, N. E. Hussey and P. Abbamonte, Stranger than metals, Science 377, no.6602, eabh4273 (2022).[arXiv:2205.12979 [cond-mat.str-el]]Gubser:2009qt S. S. Gubser and F. D. Rocha, Peculiar properties of a charged dilatonic black hole in AdS_5, Phys. Rev. D 81, 046001 (2010).[arXiv:0911.2898 [hep-th]] Andrade:2013gsa T. Andrade and B. Withers, A simple holographic model of momentum relaxation, JHEP 05, 101 (2014).[arXiv:1311.5157 [hep-th]]Gouteraux:2014hca B. Goutéraux, Charge transport in holography with momentum dissipation, JHEP 04, 181 (2014).[arXiv:1401.5436 [hep-th]] Zhou:2015qui Z. Zhou, Y. Ling and J. P. Wu, Holographic incoherent transport in Einstein-Maxwell-dilaton Gravity, Phys. Rev. D 94, no.10, 106015 (2016).[arXiv:1512.01434 [hep-th]]Kim:2017dgz K. Y. Kim and C. Niu, Diffusion and Butterfly Velocity at Finite Density, JHEP 06, 030 (2017).[arXiv:1704.00947 [hep-th]]Jeong:2018tua H. S. Jeong, K. Y. Kim and C. Niu, Linear-T resistivity at high temperature, JHEP 10, 191 (2018).[arXiv:1806.07739 [hep-th]]Homes' law H. S. Jeong and K. Y. Kim, Homes’ law in holographic superconductor with linear-T resistivity, JHEP 03, 060 (2022).[arXiv:2112.01153 [hep-th]]Davison:2013txa R. A. Davison, K. Schalm and J. Zaanen, Holographic duality and the resistivity of strange metals, Phys. Rev. B 89, no.24, 245116 (2014).[arXiv:1311.2451 [hep-th]]Jeong:2023ynk H. S. Jeong, Quantum Chaos and Pole-Skipping in Semi-Locally Critical IR, [arXiv:2309.13412 [hep-th]]. Ren:2021rhx J. Ren and W. Zheng, Analytic AC conductivities from holography, Phys. Rev. D 105, no.6, 066013 (2022).[arXiv:2109.07481 [hep-th]]Ren:2022qkr J. Ren and H. Xie, Holographic superconductors at zero density, Phys. Rev. D 107, no.10, 106004 (2023).[arXiv:2206.03498 [hep-th]]Caldarelli:2016nni M. M. Caldarelli, A. Christodoulou, I. Papadimitriou and K. Skenderis, Phases of planar AdS black holes with axionic charge, JHEP 04, 001 (2017).[arXiv:1612.07214 [hep-th]] Cremonini:2016bqw S. Cremonini and L. Li, Criteria For Superfluid Instabilities of Geometries with Hyperscaling Violation, JHEP 11, 137 (2016).[arXiv:1606.02745 [hep-th]]Witten:2001ua E. Witten, Multitrace operators, boundary conditions, and AdS/CFT correspondence, (2001). [arXiv:hep-th/0112258 [hep-th]] tomas T. Andrade, S. A. Gentle, Relaxed superconductors, JHEP 06 140 (2015) [arXiv:1412.6521[hep-th]] Son:2002sd D. T. Son and A. O. Starinets, Minkowski space correlators in AdS / CFT correspondence: Recipe and applications, JHEP 09, 042 (2002).[arXiv:hep-th/0205051 [hep-th]]Herzog:2007ij C. P. Herzog, P. Kovtun, S. Sachdev and D. T. Son, Quantum critical transport, duality, and M-theory, Phys. Rev. D 75, 085020 (2007).[arXiv:hep-th/0701036 [hep-th]]Hartnoll:2016apf S. A. Hartnoll, A. Lucas and S. 
Sachdev, Holographic quantum matter, (Cambridge, The MIT Press, 2018) [arXiv:1612.07324 [hep-th]] Uemura:1989 Y. J. Uemura et al., Universal Correlations between T_c and n_s/m^* (Carrier Density over Effective Mass) in High-T_c Cuprate Superconductors, Phys. Rev. Lett. 62, 2317 (1989). Uemura:1991 Y. J. Uemura et al., Basic Similarities among Cuprate, Bismuthate, Organic, Chevrel-Phase, and Heavy-Fermion Superconductors Shown by Penetration-Depth Measurements, Phys. Rev. Lett. 68, 2712 (1991).Homes:2004 C. C. Homes et al., Universal scaling relation in high-temperature superconductors, Nature 430, 539 (2004). [arXiv:cond-mat/0404216 [cond-mat.supr-con]] Homes:2005 C. C. Homes et al., Scaling of the superfluid density in high-temperature superconductors, Phys. Rev. B 72, 134517 (2005). [arXiv:cond-mat/0410719 [cond-mat.supr-con]] dordevic S. V. Dordevic and C. C. Homes, Superfluid density in overdoped cuprates: Thin films versus bulk samples, Phys. Rev. B 105, 214514 (2022).Gouteraux:2019kuy B. Goutéraux and E. Mefford, Normal charge densities in quantum critical superfluids, Phys. Rev. Lett. 124, no.16, 161604 (2020).[arXiv:1912.08849 [hep-th]].Gouteraux:2020asq B. Goutéraux and E. Mefford, Non-vanishing zero-temperature normal density in holographic superfluids, JHEP 11, 091 (2020).[arXiv:2008.02289 [hep-th]]. bmM. Baggioli, K. Y. Kim, L. Li and W. J. Li, Holographic Axion Model: a simple gravitational tool for quantum matter, Sci. China Phys. Mech. Astron. 64, no.7, 270001 (2021)[arXiv:2101.01892 [hep-th]].Horowitz:2012ky G. T. Horowitz, J. E. Santos and D. Tong, Optical Conductivity with Holographic Lattices, JHEP 07, 168 (2012)[arXiv:1204.0519 [hep-th]].Arean:2013mta D. Arean, A. Farahi, L. A. Pando Zayas, I. S. Landea and A. Scardicchio, Holographic superconductor with disorder, Phys. Rev. D 89, no.10, 106003 (2014).[arXiv:1308.1920 [hep-th]]. Arean:2015sqa D. Arean, L. A. Pando Zayas, I. S. Landea and A. Scardicchio, Holographic disorder driven superconductor-metal transition, Phys. Rev. D 94, no.10, 106003 (2016).[arXiv:1507.02280 [hep-th]]. Keranen:2009vi V. Keranen, E. Keski-Vakkuri, S. Nowling and K. P. Yogendran, Dark Solitons in Holographic Superfluids, Phys. Rev. D 80, 121901 (2009).[arXiv:0906.5217 [hep-th]]Zeng:2016gqj H. B. Zeng, Y. Tian, Z. Fan and C. M. Chen, Nonlinear Conductivity of a Holographic Superconductor Under Constant Electric Field, Phys. Rev. D 95, no.4, 046014 (2017).[arXiv:1611.06798 [hep-th]]Yang:2023dvk P. Yang, M. Baggioli, Z. Cai, Y. Tian and H. Zhang, Holographic Dissipative Spacetime Supersolids, Phys. Rev. Lett. 131, no.22, 221601 (2023).[arXiv:2304.02534 [hep-th]]Ge:2023yom X. H. Ge and Z. Xu, Thermo-electric Transport of Dyonic Gubser-Rocha Black Holes, (2023). [arXiv:2310.12067 [hep-th]]Domenech:2010nf O. Domenech, M. Montull, A. Pomarol, A. Salvio and P. J. Silva, Emergent Gauge Fields in Holographic Superconductors, JHEP 08, 033 (2010).[arXiv:1005.1776 [hep-th]] Salvio:2012at A. Salvio, Holographic Superfluids and Superconductors in Dilaton-Gravity, JHEP 09, 134 (2012).[arXiv:1207.3800 [hep-th]].Salvio:2013jia A. Salvio, Transitions in Dilaton Holography with Global or Local Symmetries, JHEP 03, 136 (2013).[arXiv:1302.4898 [hep-th]]. | http://arxiv.org/abs/2312.16029v2 | {
"authors": [
"Zhenguo Wang",
"Xian-Hui Ge",
"Shuta Ishigaki"
],
"categories": [
"hep-th",
"cond-mat.str-el",
"cond-mat.supr-con"
],
"primary_category": "hep-th",
"published": "20231226123831",
"title": "Dependence of the critical temperature and disorder in holographic superconductors on superfluid density"
} |
Electromagnetic Property Sensing: A New Paradigm of Integrated Sensing and Communication Yuhua Jiang, Feifei Gao, and Shi JinY. Jiang and F. Gao are with the Institute for Artificial Intelligence, Tsinghua University (THUAI), State Key Lab of Intelligent Technologies and Systems, Tsinghua University, Beijing National Research Center for Information Science and Technology (BNRist), Beijing, P.R. China (email: [email protected], [email protected]).S. Jin is with the National Mobile Communications Research Laboratory, Southeast University, Nanjing 210096, China (e-mail: [email protected]).==========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================Integrated sensing and communication (ISAC) has opened up numerous game-changing opportunities for realizing future wireless systems. In this paper, we develop a novel material sensing scheme that utilizes OFDM pilot signals in ISAC systems to sense the electromagnetic (EM) property and identify the material of the target. Specifically, we first establish an end-to-end EM propagation model by means of Maxwell's equations, where the electrical properties of the material are captured by a closed-form expression for the non-line-of-sight (NLOS) channel, incorporating the Lippmann-Schwinger equation and the method of moments (MOM) for discretization. We then model the relative permittivity and conductivity distribution (RPCD) within a specified detection region. Based on the sensing model, we introduce a multi-frequency material sensing method by which the RPCD can be reconstructed with compressive sensing techniques that exploit the joint sparsity structure of the contrast source vector. To improve the sensing accuracy, we design a beamforming strategy at the communications transmitter based on the Born approximation, which minimizes the mutual coherence of the sensing matrix. The optimization problem is cast in terms of the Gram matrix and is solved iteratively to obtain the optimal beamforming matrix. Simulation results demonstrate the efficacy of the proposed method in achieving high-quality RPCD reconstruction and accurate material classification. Furthermore, improvements in RPCD reconstruction quality and material classification accuracy are observed with increased signal-to-noise ratio (SNR) or reduced target-transmitter distance.Electromagnetic property sensing, material sensing, integrated sensing and communication (ISAC), compressive sensing (CS), orthogonal frequency division multiplexing (OFDM)§ INTRODUCTION The electromagnetic (EM) property is a key indicator in sensing the material of targets.
Material sensing plays a pivotal role in various industries, enabling automation processes to operate efficiently and accurately <cit.>. Besides, material sensing technology can be used in the inspection and imaging of the human body, which meets increasing demands from public security, environmental monitoring, and medical applications. In conventional material sensing techniques, the infrared emissivity, as a material-specific property, is investigated to increase the material classification reliability <cit.>. However, conventional infrared material sensing techniques typically demand specialized devices, which are costly and challenging to manufacture and maintain <cit.>. Besides, since infrared light can hardly penetrate the target, conventional infrared sensing methods may not be applicable when the target surface is covered with a camouflage coating.Recently, integrated sensing and communication (ISAC) has opened up numerous game-changing opportunities for realizing future wireless sensing systems. ISAC allows communications systems and sensing systems to share the scarce radio spectrum, which saves a large amount of resource cost <cit.>. In contrast to dedicated sensing or communication functionality, the ISAC design methodology exhibits two types of gains. Firstly, the shared use of limited resources, namely spectrum, energy, and hardware platforms, offers improved efficiency for both sensing and communications (S&C) and thus provides an integration gain. Secondly, mutual assistance between S&C may offer a coordination gain and further boost the dual performance. Hence, ISAC is expected to benefit both communication and sensing functionality in the near future <cit.>. Due to its numerous advantages, ISAC is envisioned to be a key enabler for many future applications including intelligent connected vehicles, Internet of Things (IoT), and smart homes and cities <cit.>.While ISAC has achieved notable success in localization <cit.>, tracking <cit.>, imaging <cit.>, and various other applications <cit.>, the realm of material sensing within ISAC remains unexplored in the existing literature. Since the EM property of the target material is implicitly encoded into the channel state information (CSI) from the transmitter to the receiver, there are promising prospects for incorporating material sensing capabilities into the ISAC paradigm. Sensing the target material not only extends the capabilities of ISAC but also fills a critical blank in the current research landscape. The integration of material sensing within the ISAC framework presents a novel approach that promises to enrich the spectrum of applications and functionalities offered by wireless sensing systems. As far as we know, there is no literature on material sensing schemes in the framework of wireless communications systems.In this paper, we develop a novel material sensing scheme that utilizes OFDM pilot signals in ISAC systems to sense the EM property and identify the material of the sensing target.The main contributions are summarized as follows: ∙ ISAC material sensing formulation: We establish an end-to-end EM propagation model from the perspective of Maxwell's equations. A closed-form expression for a vector related to the material EM property is derived, incorporating the Lippmann-Schwinger equation and the method of moments for discretization.
The objective is to reconstruct the relative permittivity and conductivity distribution (RPCD) within a specified detection region that contains the target.∙ Compressive sensing-based RPCD reconstruction: We introduce a multi-frequency material sensing method using compressive sensing techniques, where the joint sparsity of the contrast source vector is exploited to reconstruct the RPCD.∙ Beamforming design for enhanced sensing: To further improve the sensing accuracy, we optimize the sensing matrix by designing the transmitter beamforming matrix based on the Born approximation. The design involves minimizing the difference between the Gram matrix of the designed beamforming matrix and a target matrix. The optimization problem is cast in terms of the Gram matrix and is solved iteratively to obtain the optimal beamforming matrix.Simulation results demonstrate that the proposed method can achieve both high-quality RPCD reconstruction and accurate material classification. Moreover, the RPCD reconstruction quality and material classification accuracy can be improved by increasing the SNR at the receiver or by placing the target closer to the transmitter.The rest of this paper is organized as follows. Section 2 presents the system model and formulates the material sensing problem. Section 3 derives the end-to-end electromagnetic propagation formula. Section 4 elaborates the strategy of sensing the target material with multiple measurements. Section 5 describes the transmitter beamforming design criterion. Section 6 provides the numerical simulation results, and Section 7 draws the conclusion.Notations: Boldface denotes a vector or matrix; j denotes the imaginary unit; (·)^H, (·)^T, and (·)^* represent the Hermitian, transpose, and conjugate, respectively; ∘ denotes the Khatri-Rao product; * denotes the Hadamard product; ⊗ denotes the Kronecker product; vec(·) denotes the vectorization operation; | a | denotes the modulus of the complex number a; 𝐀[:,i] denotes the submatrix composed of all elements in column i of matrix 𝐀; 𝐈 and 1 denote the identity matrix and the all-ones vector with compatible dimensions; |𝐚| denotes the vector composed of the moduli of the elements of the complex vector 𝐚; ‖𝐚‖_2 denotes the ℓ2-norm of the vector 𝐚; ℜ(·) and ℑ(·) denote the real and imaginary parts of complex vectors or matrices, respectively; ‖𝐀‖_F denotes the Frobenius norm of the matrix 𝐀. The distribution of a circularly symmetric complex Gaussian (CSCG) random vector with zero mean and covariance matrix 𝐀 is denoted as 𝒞 𝒩 (0, 𝐀). § SYSTEM MODEL AND PROBLEM FORMULATION Consider a downlink communications scenario, in which an N_t-antenna base station (BS) communicates with an N_r-antenna mobile user with orthogonal frequency division multiplexing (OFDM) signaling. The transmitter adopts a fully digital precoding structure, where the number of RF chains N_RF is equal to the number of antennas N_t. To enable material sensing, we focus on the pilot transmission, in which a total of K subcarriers are used to transmit pilots. For each subcarrier, I pilot symbols are transmitted. Suppose a target scatters the pilot signals from the transmitter to the receiver, as shown in Fig. <ref>. The general channel from the transmitter to the receiver for the kth subcarrier consists of one line-of-sight (LOS) path channel 𝐇_LOS,k and one non-line-of-sight (NLOS) scattering path channel 𝐇_NLOS,k.
Thus, the received signals can be formulated as𝐲̅_k = (𝐇_LOS,k+𝐇_NLOS,k) 𝐰_k x_k+𝐧̃_k, where 𝐰_k ∈ℂ^N_t is the digital beamforming vector applied across all transmitting antennas, x_k is the normalized pilot symbol, and 𝐧̃_k ∼𝒞 𝒩(0, σ_k^2 𝐈_N_r) is the complex Gaussian noise at the receiver for the kth subcarrier. Denote 𝐰̃_k = 𝐰_k x_k for notational brevity. Since only the signals transmitted through the NLOS path carry the information of the target EM property, we need to extract the desired signals transmitted through the NLOS path as 𝐲̃_k = 𝐇_NLOS,k𝐰̃_k +𝐧̃_k = 𝐲̅_k - 𝐇_LOS,k𝐰̃_k. Knowing the positions of the transmitter and the receiver, we can calculate the (i,j)th element of 𝐇_LOS,k as <cit.>𝐇_LOS,k[i,j] = λ_k √(G_r G_t)/(4 π R_i,j) e ^ -j2πR_i,j/λ_k ,where G_r and G_t denote the antenna gains of the receiver and the transmitter, λ_k denotes the wavelength of the kth subcarrier, and R_i,j denotes the distance between the ith antenna of the receiver and the jth antenna of the transmitter. Assume prior knowledge has determined that the target is contained within a certain sensing domain D, which can be discretized into a total of M small square elements. The NLOS channel between the transmitter and the multi-antenna receiver can then be described as𝐇_NLOS,k = 𝐇_2,k𝐗_k 𝐇_1,k,where 𝐇_1,k∈ℂ^M × N_t and 𝐇_2,k∈ℂ^N_r × M are the channel matrices of the transmitter-to-target path and the target-to-receiver path, respectively, while 𝐗_k ∈ℂ^M × M represents the influence on the signal transmission caused by the existence of the target. The location of the target is implicitly incorporated in the formulation of 𝐇_1,k and 𝐇_2,k, which can be leveraged to estimate the location of the target from the received signals, as in the conventional ISAC literature <cit.>. The material-related EM property of the target is implicitly incorporated in the formulation of 𝐗_k ∈ℂ^M × M, which can be leveraged to estimate the material of the target. Once the positions of the receiver, the sensing domain D, and the transmitter are known, 𝐇_1,k and 𝐇_2,k can be calculated by computational EM methods such as the Method of Moments (MOM) <cit.>, the Finite Element Method (FEM) <cit.>, and the Finite-Difference Time-Domain (FDTD) method <cit.>. In this paper, the objective is to retrieve the EM property of the target from the received signals and to further identify the material of the target.
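To make the LOS-removal step concrete, the following is a minimal sketch that builds 𝐇_LOS,k from the expression above and subtracts its contribution from the received pilots. The array geometry, wavelength, and beamformer are illustrative assumptions, not values from the paper.

```python
import numpy as np

def los_channel(rx_pos, tx_pos, lam, Gr=1.0, Gt=1.0):
    # H_LOS[i, j] = lam*sqrt(Gr*Gt)/(4*pi*R_ij) * exp(-1j*2*pi*R_ij/lam)
    R = np.linalg.norm(rx_pos[:, None, :] - tx_pos[None, :, :], axis=-1)
    return lam*np.sqrt(Gr*Gt)/(4*np.pi*R)*np.exp(-2j*np.pi*R/lam)

lam = 0.01                                                    # ~30 GHz subcarrier
tx = np.stack([np.zeros(8), np.arange(8)*lam/2], axis=1)      # N_t = 8 ULA
rx = np.stack([np.full(4, 5.0), np.arange(4)*lam/2], axis=1)  # N_r = 4 ULA, 5 m away
H_los = los_channel(rx, tx, lam)

w_tilde = np.ones(8, complex)/np.sqrt(8)        # example beamformed pilot w_k x_k
y_bar = H_los @ w_tilde                         # received pilots (no target here)
y_nlos = y_bar - H_los @ w_tilde                # LOS removal step
print(np.linalg.norm(y_nlos))                   # -> 0 here; with a target present,
                                                #    the residual is the NLOS signal
```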
§ MODELING THE END-TO-END EM PROPAGATION In this section, a closed-form expression of 𝐗_k is derived to unfold the influence of the target scattering process on the received signals. The target EM property sensing is equivalent to reconstructing the difference between the complex relative permittivity of the target and that of air at each point 𝐫^' in domain D, denoted as χ_k(𝐫^'). Since the complex relative permittivity of air is approximately equal to 1, we can formulate the contrast function as χ_k(𝐫^')=ϵ_r(𝐫^')+j σ(𝐫^') / (ϵ_0 ω_k) -1,where ϵ_r(𝐫^') is the real relative permittivity at point 𝐫^', σ(𝐫^') is the conductivity at point 𝐫^', ϵ_0 is the vacuum permittivity, and ω_k=2π f_k is the angular frequency of the electromagnetic waves for the kth subcarrier. The distributions of ϵ_r(𝐫^') and σ(𝐫^') are the quantities that need to be recovered in the material sensing task.The total electric field corresponding to the kth subcarrier inside the domain D can be expressed using the Lippmann-Schwinger (LS) equation as <cit.>, <cit.>E_k^t (𝐫) = E_k^i (𝐫)+k_k^2∫ _DG_k(𝐫,𝐫') χ_k (𝐫') E_k^t (𝐫')d𝐫',𝐫∈ D ,where k_k=2π/λ_k is the wavenumber of the kth subcarrier, while E_k^t (𝐫), E_k^i (𝐫), and G_k(𝐫,𝐫') denote the total electric field, the incident electric field, and the Green's function, respectively. In 2D transverse magnetic (TM) scenarios, G_k(𝐫,𝐫') can be formulated as <cit.>, <cit.>G_k(𝐫,𝐫')=j H_0^(2)(k_k|𝐫-𝐫^'|) / 4 ,where H_0^(2) is the zero-order Hankel function of the second kind. Additionally, it is assumed that the contrast function χ_k(𝐫^') is constant within each element of D and is 0 outside D. Let the vectors 𝐄_k^i∈ℂ^M × 1, 𝐄_k^t∈ℂ^M × 1, and χ_k ∈ℂ^M × 1 collect the incident electric field, the total electric field, and χ_k(𝐫) on each grid element of D, respectively. Denote 𝐆_k as the discretized matrix of the integral kernel k_k^2 G_k(𝐫-𝐫^') in (<ref>) when both 𝐫 and 𝐫' lie on the discretized grid within the domain D. The total electric field at the scattering object is the sum of the incident field and the scattered field, which means that 𝐄_k^t satisfies𝐄_k^t=𝐄_k^i+𝐄_k^s=𝐄_k^i+ 𝐆_k diag (χ_k) 𝐄_k^t. In order to obtain a closed-form expression for 𝐄_k^t with respect to diag (χ_k), we reformulate equation (<ref>) in the non-linear form𝐄_k^t=(𝐈-𝐆_k diag (χ_k))^-1𝐄_k^i .The equivalent contrast source in D is defined as diag (χ_k)𝐄_k^t, which can be regarded as the source of the scattered EM waves <cit.>, <cit.>. Then, the kth subcarrier scattered electric field on the N_r receiving antennas is denoted by 𝐄^s_k ∈ℂ^N_r × 1 and is formulated as𝐄^s_k= 𝐇_2,kdiag (χ_k)𝐄_k^t +𝐧̃_k = 𝐇_2,kdiag (χ_k) (𝐈-𝐆_k diag (χ_k))^-1𝐄_k^i+ 𝐧̃_k .Thus, the received signal produced by the target scattering is written as𝐲̃_k = 𝐇_2,kdiag (χ_k) (𝐈-𝐆_k diag (χ_k))^-1𝐇_1,k𝐰_k+𝐧̃_k.Therefore, the closed-form expression of 𝐗_k can be written as𝐗_k = diag (χ_k) (𝐈-𝐆_k diag (χ_k))^-1 .Suppose a total of I pilot signals are transmitted, and denote 𝐖̃_k =[𝐰_k,1, 𝐰_k,2, ⋯, 𝐰_k,I] ∈ℂ^N_t × I as the digital transmitter beamformer stacked over time for the kth subcarrier. Then the I received pilot signals 𝐘_k∈ℂ^N_r ×I are 𝐘_k=[𝐲̃_k,1,𝐲̃_k,2,⋯,𝐲̃_k,I] = 𝐇_2,kdiag (χ_k) (𝐈-𝐆_k diag (χ_k))^-1𝐇_1,k𝐖̃_k +[𝐧̃_k,1, 𝐧̃_k,2, ⋯, 𝐧̃_k,I]. Denote 𝐧_k=[𝐧_k,1^⊤, 𝐧_k,2^⊤, ⋯, 𝐧_k,I^⊤]^⊤. Then, we can vectorize 𝐘_k into 𝐲_k ∈ℂ^I N_r ×1 as𝐲_k= vec(𝐘_k) (a)=[(𝐖̃_k^⊤𝐇_1,k^⊤(𝐈-𝐆_k diag (χ_k))^-⊤) ∘𝐇_2,k] χ_k +𝐧_k, where (a)= in (<ref>) follows from the identity vec(𝐀diag(𝐛) 𝐂)=(𝐂^T ∘𝐀) 𝐛. In order to derive a quasi-linear sensing model, we further define the sensing matrix 𝐃_k ∈ℂ^I N_r × M as 𝐃_kΔ=(𝐖̃_k^⊤𝐇_1,k^⊤(𝐈 -𝐆_k diag (χ_k))^-⊤) ∘𝐇_2,k .Then the sensing equation with respect to χ_k can be formulated as 𝐲_k = 𝐃_k χ_k +𝐧_k.Note that 𝐲_k is nonlinear with respect to χ_k, because 𝐃_k also depends on χ_k.
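The algebra above can be verified numerically on random matrices. The following minimal sketch uses random stand-ins for all matrices, including 𝐆_k, which would in practice come from discretizing the Hankel-function kernel; it builds the sensing matrix via a column-wise Khatri-Rao product and checks that 𝐃_kχ_k reproduces vec(𝐘_k) in the noiseless case.

```python
import numpy as np

rng = np.random.default_rng(0)
Nt, Nr, M, I = 6, 4, 8, 5
crandn = lambda *s: rng.standard_normal(s) + 1j*rng.standard_normal(s)

H1, H2, W = crandn(M, Nt), crandn(Nr, M), crandn(Nt, I)
G = 0.05*crandn(M, M)                 # stand-in for the discretized LS kernel G_k
chi = np.zeros(M, complex)
chi[[1, 4]] = [0.8 + 0.3j, 1.2 + 0.1j]          # sparse contrast vector

# C = (I - G diag(chi))^{-1} H1 W, so that Y = H2 diag(chi) C (noise omitted).
C = np.linalg.solve(np.eye(M) - G @ np.diag(chi), H1 @ W)
Y = H2 @ np.diag(chi) @ C

def khatri_rao(A, B):                 # column-wise Kronecker product, A o B
    return np.einsum('im,jm->ijm', A, B).reshape(-1, A.shape[1])

D = khatri_rao(C.T, H2)               # sensing matrix D_k as defined above
print(np.allclose(D @ chi, Y.flatten(order='F')))   # -> True, i.e., y_k = D_k chi_k
```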
The sensing equation (<ref>) for the kth subcarrier can be written as:𝐲_k=𝐃_k χ_k +𝐧_k=𝐃_k(ε-1+jσ/(ω_k ε_0)) + 𝐧_k. Denote the central angular frequency as ω_c and the corresponding contrast vector as χ_c = ε-1+jσ/(ω_c ε_0). Separating the real and imaginary parts of the equation, we can derive the non-dimensional equation as: [[ ℜ(𝐲_k); ℑ(𝐲_k) ]]=[[ ℜ(𝐃_k) -ℑ(𝐃_k); ℑ(𝐃_k) ℜ(𝐃_k) ]][[ 𝐈 0; 0 (ω_c/ω_k)𝐈 ]] [[ ε-1; σ/(ω_c ε_0) ]] + [[ ℜ(𝐧_k); ℑ(𝐧_k) ]]. For notational brevity, we reformulate (<ref>) as:𝐳_k=𝐄_k 𝐬 + [[ ℜ(𝐧_k); ℑ(𝐧_k) ]], where 𝐳_k = [ℜ(𝐲_k)^⊤, ℑ(𝐲_k)^⊤]^⊤, 𝐬=[(ε-1)^⊤, (σ/(ω_c ε_0))^⊤]^⊤, and 𝐄_k denotes the coefficient matrix in front of 𝐬. Note that 𝐬 is irrelevant to the subcarrier frequencies. Therefore, we can stack (<ref>) for different subcarriers by column as:[[ 𝐳_1; ⋮; 𝐳_K ]]=[[ 𝐄_1; ⋮; 𝐄_K ]] 𝐬+ [[ ℜ(𝐧_1); ℑ(𝐧_1); ⋮; ℜ(𝐧_K); ℑ(𝐧_K) ]]. Define the general vector of measurements and the general sensing matrix as:𝐳̃ = [𝐳_1^⊤, ⋯, 𝐳_K^⊤]^⊤, 𝐄̃ = [𝐄_1^⊤, ⋯, 𝐄_K^⊤]^⊤. Since the target occupies only a small fraction of space in the detection domain D, most of the elements of 𝐬 are 0, corresponding to the air. Moreover, (ε-1) and σ/(ω_c ε_0) share the same support set. Thus, if K̃ sampling points of D are occupied by the target, then (ε-1) and σ/(ω_c ε_0) will be K̃-sparse, and 𝐬 will be jointly K̃-sparse. According to compressive sensing techniques, 𝐬 can be reconstructed as the solution to the following mixed (ℓ_1,ℓ_2)-norm minimization problem: min_𝐬 ‖𝐬‖_1,2 s.t. ‖𝐳̃-𝐄̃𝐬‖_2 ⩽ε^', where ε^' represents the predefined noise level that has to be chosen appropriately, while ‖𝐬‖_1,2 is the mixed (ℓ_1,ℓ_2)-norm defined as:‖𝐬‖_1,2 ≜ ∑_m=1^M√(s_m^2+s_(m+M)^2). The mixed (ℓ_1,ℓ_2)-norm tends to enforce the joint sparsity of 𝐬, because ε-1 and σ have the same support set. Based on the generalized multiple measurement vector (GMMV) model, the key point in (<ref>) is to utilize the joint sparsity structure to improve the sensing ability. Problem (<ref>) is a basis pursuit denoising (BPDN) problem that, due to its convexity, can be effectively solved by CVX using the spectral projected gradient (SPG) method. The order of computational complexity is 𝒪(K̃^3 M^3) according to <cit.>.§.§ Iterative EM Property Sensing Formulation Since 𝐄̃ intrinsically depends on 𝐬, we need to update 𝐄̃ after solving (<ref>) and update 𝐬 iteratively. Let the superscript denote the iteration index. For the initial iteration step, the Born approximation (BA) 𝐄_k^t≈𝐄_k^i is applied <cit.>. BA is accurate if the scattered field is relatively small compared to the incident field on the scatterer, and thus can be used to calculate the initial guess <cit.>. Using BA, we can compute the initial sensing matrix 𝐃_k^0 = (𝐖̃_k^⊤𝐇_1,k^⊤) ∘𝐇_2,k. Then, 𝐃_k^0 is used to compute 𝐄_k^0 in (<ref>) and 𝐄̃^0 in (<ref>). By solving (<ref>) with 𝐄̃^0, we can further calculate 𝐬^0. With the initial guess 𝐬^0, the proposed iteration procedure of the inversion algorithm is as follows:Step 1: Calculate χ_k^n for all subcarriers using 𝐬^n according to (<ref>).Step 2: Calculate 𝐃_k^n using χ_k^n for all subcarriers according to (<ref>).Step 3: Calculate 𝐄_k^n using 𝐃_k^n for all subcarriers according to (<ref>) and (<ref>), and assemble the 𝐄_k^n into 𝐄̃^n according to (<ref>).Step 4: Calculate 𝐬^n+1 using 𝐄̃^n according to (<ref>).Step 5: If the predetermined convergence criterion is satisfied, then STOP.
Otherwise, go to Step 1. It is important to note that, except for Step 4, the remaining steps do not involve solving an inverse problem and only include linear-algebraic operations. Thus, the algorithm is based on recursive linear approximation to cope with the non-linearity of the EM property sensing problem. Next, we analyze the computational complexity. In each iteration, the computational complexity of Step 1 is 𝒪(K M); the computational complexity of Step 2 is 𝒪((M^3+IN_tM+IM^2+IN_rM)K); the computational complexity of Step 3 is 𝒪(N_r M K); the computational complexity of Step 4 is 𝒪(K̃^3 M^3). Combining them, we finally have the overall computational complexity 𝒪([(M^2+IN_t+IM+IN_r)MK+K̃^3 M^3] N_iter), where N_iter represents the number of iterations needed for convergence.§.§ Material Identification Methodology Suppose that the object is known to be composed of one of several possible materials, whose permittivity and conductivity have been measured precisely in advance. Note that only materials with obvious differences in permittivity or conductivity can be distinguished. The material identification methodology consists of two steps: first clustering and then classification. In order to determine the material of the target, we first need to distinguish between the part of domain D occupied by air and the part occupied by the target. To accomplish this, we utilize the K-means clustering algorithm to divide the sampling points in D into two categories <cit.>. Since the relative permittivity and conductivity have different dimensions, we adopt the dimensionless and scale-invariant Mahalanobis distance in the K-means clustering algorithm. The cluster centroid of the air, representing the average permittivity and conductivity values of the air, is expected to be close to the (1,0) point. On the other hand, the cluster centroid of the target, representing its average permittivity and conductivity, will be far from the (1,0) point. After clustering is performed, the next step is to determine the material category of the target. This is done by calculating the Mahalanobis distance between the cluster centroid of the target material and the ground-truth values of the permittivity and conductivity for each possible material. The target is then classified into the material category with the shortest Mahalanobis distance, indicating the closest match between the measured values and the known properties of the materials.§ BEAMFORMING MATRIX DESIGN METHODOLOGY In this section, the beamforming matrix is designed to optimize the initial sensing matrix 𝐄̃^0 based on BA. The beamforming matrix design is carried out before the sensing iteration, and is unrelated to the target RPCD (relative permittivity and conductivity distribution). For notational brevity, we will omit the superscript 0 in the remainder of this section. The restricted isometry property (RIP) <cit.> and the mutual coherence <cit.> of the sensing matrix 𝐄̃ are the two most common criteria to evaluate the stable recovery performance of (<ref>). Although the RIP provides a tighter bound, it is NP-hard to evaluate <cit.>.
The mutual coherence μ of 𝐄̃ is defined as the maximum of μ_ij(𝐄̃) for i ≠ j, where μ_i j (𝐄̃)=|𝐄̃[:,i]^⊤𝐄̃[:,j]| / (‖𝐄̃[:,i]‖_2‖𝐄̃[:,j]‖_2)=|G_i j|/√(|G_i i||G_j j|), and G_i j denotes the (i,j)th element of the Gram matrix 𝐄̃^⊤𝐄̃. The Gram matrix of the general sensing matrix can be decomposed into the sum of the (weighted) Gram matrices of the per-subcarrier sensing matrices of all K subcarriers according to (<ref>):𝐄̃^⊤𝐄̃= ∑_k=1^K 𝐄_k^⊤𝐄_k= ∑_k=1^K [[ ℜ(𝐃_k) -(ω_c/ω_k)ℑ(𝐃_k); ℑ(𝐃_k) (ω_c/ω_k)ℜ(𝐃_k) ]]^⊤[[ ℜ(𝐃_k) -(ω_c/ω_k)ℑ(𝐃_k); ℑ(𝐃_k) (ω_c/ω_k)ℜ(𝐃_k) ]] (a)=∑_k=1^K [[ ℜ(𝐃_k^H 𝐃_k) -(ω_c/ω_k)ℑ(𝐃_k^H 𝐃_k); (ω_c/ω_k)ℑ(𝐃_k^H 𝐃_k) (ω_c/ω_k)^2ℜ(𝐃_k^H 𝐃_k) ]], where equality (a) comes from the identity:𝐃_k^H 𝐃_k =ℜ(𝐃_k)^⊤ℜ(𝐃_k) +ℑ(𝐃_k)^⊤ℑ(𝐃_k) +j [ℜ(𝐃_k)^⊤ℑ(𝐃_k) -ℑ(𝐃_k)^⊤ℜ(𝐃_k)]. In order to minimize the mutual coherence of the general sensing matrix 𝐄̃, we need to minimize the absolute values of the off-diagonal elements in 𝐄̃^⊤𝐄̃. According to (<ref>), the objective is to minimize the absolute value of the real part of the off-diagonal elements in 𝐃_k^H 𝐃_k and the absolute value of the imaginary part of all elements in 𝐃_k^H 𝐃_k for all k. Since the diagonal elements of 𝐃_k^H 𝐃_k are all real numbers, the objective is to minimize the absolute value of both the real and imaginary parts of the off-diagonal elements, i.e., to minimize the mutual coherence of 𝐃_k for all k. Substituting the BA-based initial guess 𝐃_k^0 = (𝐖̃_k^⊤𝐇_1,k^⊤) ∘𝐇_2,k into 𝐃_k^H 𝐃_k, we can compute the (i,j)th element of 𝐃_k^H 𝐃_k as:(𝐃_k^H 𝐃_k)[i,j]=((𝐖̃_k^⊤𝐇_1,k^⊤)[:,i] ⊗𝐇_2,k[:,i])^H ((𝐖̃_k^⊤𝐇_1,k^⊤)[:,j] ⊗𝐇_2,k[:,j])= ((𝐖̃_k^⊤𝐇_1,k^⊤)[:,i]^H ⊗𝐇_2,k[:,i]^H) ((𝐖̃_k^⊤𝐇_1,k^⊤)[:,j] ⊗𝐇_2,k[:,j]) (a)=((𝐖̃_k^⊤𝐇_1,k^⊤)[:,i]^H (𝐖̃_k^⊤𝐇_1,k^⊤)[:,j]) ⊗(𝐇_2,k[:,i]^H 𝐇_2,k[:,j]) (b)=((𝐖̃_k^⊤𝐇_1,k^⊤)[:,i]^H (𝐖̃_k^⊤𝐇_1,k^⊤)[:,j])(𝐇_2,k[:,i]^H 𝐇_2,k[:,j]), where equality (a) comes from the identity (𝐀⊗𝐁)(𝐂⊗𝐃)=𝐀𝐂⊗𝐁𝐃, while equality (b) holds true because the Kronecker product is taken between two scalars and is therefore equivalent to the scalar product. Integrating (<ref>) over all elements of 𝐃_k^H 𝐃_k, we can formulate 𝐃_k^H 𝐃_k as: 𝐃_k^H 𝐃_k = ((𝐖̃_k^⊤𝐇_1,k^⊤)^H (𝐖̃_k^⊤𝐇_1,k^⊤)) * (𝐇_2,k^H 𝐇_2,k), where * denotes the element-wise (Hadamard) product. Since 𝐇_2,k^H 𝐇_2,k is determined by the positions of the target and the receiver, 𝐇_2,k^H 𝐇_2,k is not subject to design. Thus, minimizing the off-diagonal elements of 𝐃_k^H 𝐃_k is equivalent to minimizing the off-diagonal elements of (𝐖̃_k^⊤𝐇_1,k^⊤)^H (𝐖̃_k^⊤𝐇_1,k^⊤). Hence, we focus on designing (𝐖̃_k^⊤𝐇_1,k^⊤)^H (𝐖̃_k^⊤𝐇_1,k^⊤) in (<ref>) to minimize the mutual coherence of 𝐃_k. Since we consider a fully-digital beamforming transmitter, 𝐃_k can be designed in an identical way for any subcarrier of arbitrary index k. Thus, we will omit the subscript k for notational brevity in the remainder of this section. Denote 𝐃_p ≜𝐇_1^⊤ and 𝐖≜𝐖̃^⊤. Similar to mutual coherence based measurement matrix designs, the following optimization problem can be cast in terms of the Gram matrix 𝐆_p = (𝐖𝐃_p)^H(𝐖𝐃_p) ∈ℂ^M × M: 𝐖̂ = arg min_𝐖,α‖(𝐖𝐃_p)^H (𝐖𝐃_p)- α𝐓‖_F^2 s.t. tr(𝐖^H 𝐖) ≤ P/K, where 𝐓∈ℂ^M × M denotes the target matrix for 𝐆_p; P denotes the overall power budget at the transmitter and is assumed to be evenly allocated to all K subcarriers; α denotes an auxiliary scaling factor that does not change the mutual coherence of the target matrix of 𝐆_p according to (<ref>). The mutual coherence is not affected by the scaling factor, which is utilized to satisfy the power constraint.
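To make the criterion concrete, the following sketch (our own illustration; the function name is an assumption) evaluates the mutual coherence μ of the real-valued stacked sensing matrix 𝐄̃ directly from its Gram matrix:

import numpy as np

def mutual_coherence(E):
    # E: real-valued general sensing matrix E_tilde (rows: stacked real/imaginary
    # measurements over subcarriers; columns: grid unknowns).
    G = E.T @ E                          # Gram matrix
    d = np.sqrt(np.abs(np.diag(G)))      # column norms
    C = np.abs(G) / np.outer(d, d)       # |G_ij| / sqrt(|G_ii| |G_jj|)
    np.fill_diagonal(C, 0.0)             # only i != j counts
    return C.max()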
We refer to the design criterion (<ref>) as mutual-coherence-based because of the relationship between the Gram matrix and the mutual coherence given in (<ref>). In order to build a robust sensing system that is able to deal with the representation error, a promising approach is to employ the Gram of the dictionary as the target Gram <cit.>, <cit.>. In <cit.>, 𝐓 is designed using the element-wise absolute value of 𝐃_p^H 𝐃_p, where the elements of 𝐓 are formulated as:T_ij = |𝐃_p[:,i]^H 𝐃_p[:,j]|, 1≤ i,j ≤ M. Since multiplying 𝐖 by a power-related scalar does not change the mutual coherence of 𝐖𝐃_p, we can relax the power constraint and neglect the scaling factor in (<ref>) to find an unnormalized 𝐖 without the power budget constraint. Thus, (<ref>) can be transformed into an unconstrained optimization problem over the beamforming matrix 𝐖 as: 𝐖̂ = arg min_𝐖 ‖(𝐖𝐃_p)^H (𝐖𝐃_p)-𝐓‖_F^2. Let the singular value decomposition (SVD) of 𝐃_p be 𝐃_p=𝐔_𝐃_pΣ_𝐃_p𝐕_𝐃_p^H. By defining Ψ= 𝐖𝐔_𝐃_pΣ_𝐃_p, (<ref>) can be written as:Ψ̂ = arg min_Ψ‖𝐕_𝐃_pΨ^HΨ𝐕_𝐃_p^H - 𝐓‖_F^2. Since 𝐕_𝐃_p is a unitary matrix whose multiplication does not change the Frobenius norm, (<ref>) is equivalent to:Ψ̂ = arg min_Ψ‖Ψ^HΨ-𝐕_𝐃_p^H𝐓𝐕_𝐃_p‖_F^2, which is a quadratic optimization problem that can be recast in terms of the Gram matrix 𝐆_Ψ=Ψ^H Ψ and the new target matrix 𝐙=𝐕_𝐃_p^H 𝐓𝐕_𝐃_p as: 𝐆̂_Ψ = arg min_𝐆_Ψ‖𝐆_Ψ -𝐙‖_F^2 s.t. 𝐆_Ψ≽0, rank(𝐆_Ψ) ≤ I. The rank constraint in (<ref>) comes from the fact that the dimensions of Ψ are I × N_t, which determines the rank of 𝐆_Ψ∈ℂ^N_t × N_t to be no greater than I. The semi-definite low-rank approximation problem in (<ref>) is solved iteratively in <cit.> and analytically in <cit.>. Following <cit.>, the optimal solution of (<ref>) can be obtained using the Eckart-Young theorem as <cit.>: 𝐆̂_Ψ = ∑_i=1^min{I,z}λ_i𝐪_i𝐪_i^H, where the Hermitian target matrix 𝐙 has the eigen-decomposition 𝐙=𝐐Λ𝐐^H=∑_i=1^M λ_i 𝐪_i 𝐪_i^H with λ_1 ≥…≥λ_M, and z denotes the number of non-negative eigenvalues of 𝐙. Once the optimal positive semidefinite matrix 𝐆̂_Ψ is obtained by (<ref>), a Ψ̂ that satisfies Ψ̂^H Ψ̂=𝐆̂_Ψ is found by the eigen-decomposition of 𝐆̂_Ψ:Ψ̂= Λ_I^1/2𝐐_I^H if I ≤ z, and Ψ̂ = [𝐐_z Λ_z^1/2, 0]^H if I > z, where 𝐐_I (or 𝐐_z) is formed by taking the first I (or z) columns of 𝐐, and Λ_I^1/2 (or Λ_z^1/2) is the diagonal matrix of the square roots of the first I (or z) eigenvalues of Λ, so that Ψ̂^H Ψ̂=𝐆̂_Ψ indeed holds. Then, the optimal unnormalized beamforming matrix 𝐖̂ that solves (<ref>) is found as:𝐖̂ =Ψ̂Σ_𝐃_p^-1𝐔_𝐃_p^H. Considering the power budget P evenly allocated to the K subcarriers at the transmitter, the optimal beamforming matrix should be normalized as: 𝐖̂_n = √(P/K)𝐖̂/‖𝐖̂‖_F. §.§ Computational Complexity In this subsection, we analyze the computational complexity of designing the beamforming matrix for each subcarrier. The computational complexity of calculating the SVD of 𝐃_p is 𝒪(min(N_t^2 M, N_t M^2)); the computational complexity of calculating 𝐓 and 𝐙 is 𝒪(M^2 N_t + N_t^2 M); the computational complexity of calculating 𝐆̂_Ψ is 𝒪(min(I,z) M^2); the computational complexity of calculating Ψ̂ and 𝐖̂ is 𝒪(min(I,z) N_t^2); the computational complexity of calculating 𝐖̂_n is 𝒪(I N_t). Thus, the overall computational complexity of designing the beamforming matrices for all K subcarriers is given by 𝒪((M^2 N_t + N_t^2 M+min(I,z) M^2+min(I,z) N_t^2)K).
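A compact numerical sketch of this closed-form design is given below. This is our own illustrative reimplementation under stated assumptions (full-rank 𝐃_p so that Σ_𝐃_p is invertible, and the square-root eigenvalue factor so that Ψ̂^H Ψ̂ = 𝐆̂_Ψ); the function and variable names are ours, not the authors'.

import numpy as np

def design_beamformer(D_p, I, P, K):
    # D_p = H_1^T: (N_t x M) transposed transmitter-to-target channel (full rank assumed).
    T = np.abs(D_p.conj().T @ D_p)                     # target Gram, T_ij = |d_i^H d_j|
    U, S, Vh = np.linalg.svd(D_p, full_matrices=False)
    Z = Vh @ T @ Vh.conj().T                           # Z = V^H T V (Hermitian)
    lam, Q = np.linalg.eigh(Z)
    lam, Q = lam[::-1], Q[:, ::-1]                     # sort eigenvalues descending
    z = int(np.sum(lam >= 0))                          # number of non-negative eigenvalues
    r = min(I, z)
    Psi = np.diag(np.sqrt(lam[:r])) @ Q[:, :r].conj().T
    if I > r:                                          # zero-pad rows when I > z
        Psi = np.vstack([Psi, np.zeros((I - r, Psi.shape[1]), dtype=Psi.dtype)])
    W = Psi @ np.diag(1.0 / S) @ U.conj().T            # unnormalized W_hat
    return np.sqrt(P / K) * W / np.linalg.norm(W, 'fro')  # power-normalized W_hat_n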
§.§ Effect of the Transmitter-Target Distance on Sensing In this subsection, we study the effect of the transmitter-target distance based on the effective degrees of freedom (EDOF) of the transmitter-target sensing system <cit.>. Define the correlation matrix of the transmitter-target channel for an arbitrary subcarrier as 𝐑=𝐇_1 𝐇_1^H. The EDOF of the transmitter-target sensing system is a function of 𝐑 denoted by Ξ, and can be approximately calculated as <cit.>, <cit.>: Ξ = (tr(𝐑)/‖𝐑‖_F)^2=(∑_i=1^M σ_i)^2/∑_i=1^M σ_i^2, where σ_i is the ith eigenvalue of 𝐑. In far-field sensing scenarios, the leading eigenvalue of 𝐑 is significantly larger than the other eigenvalues, and the EDOF is close to 1, corresponding to the single dominant sensing mode in which a planar wave travels from the transmitter to the target <cit.>. Thus, the target is required to be located within the near field of the transmitter for an EDOF significantly larger than 1, which leads to accurate RPCD reconstruction and material identification. § SIMULATION RESULTS AND ANALYSIS Suppose the center of the target is located at the origin. We consider a uniform linear array (ULA) transmitter equipped with N_t=1024 antennas. The ULA transmitter is parallel to the y-axis, and its center is located at (x_t,y_t) = (d,0) m, where d is the distance from the transmitter to the target. Suppose a ULA receiver equipped with N_r=16 antennas is located at (x_r,y_r) = (-20,-20) m and is parallel to the ULA transmitter. The inter-element spacing for both the transmitter and the receiver is set as λ_c/2. The central frequency is f_c = 30 GHz and the frequency step of the OFDM subcarriers is Δ f = 30 kHz. The total number of subcarriers is set to K = 32. On each subcarrier, I=32 pilot OFDM symbols are transmitted. Suppose prior knowledge determines that the target is located within the region D = [-1,1] × [-1,1] m^2. We choose the number of sampling points in the domain D as M = 32 × 32 = 1024. Each sampling point in D can be regarded as a pixel in the image, i.e., the picture is composed of 1024 pixels. More pixels could hardly improve the RPCD reconstruction quality due to the resolution limit. Besides, we introduce the normalized mean square error (NMSE) of the RPCD reconstruction as the criterion to quantitatively describe the EM property sensing performance: NMSE= 10 log_10 ‖𝐬-𝐬̂‖_2^2/‖𝐬‖_2^2=10 log_10 (‖ϵ_r-ϵ̂_r‖_2^2 + ‖σ-σ̂‖_2^2/(ω_c^2 ϵ_0^2))/‖𝐬‖_2^2. According to (<ref>), the NMSE of the RPCD combines the estimation errors of both ϵ_r and σ, which comprehensively assesses the EM property sensing performance. In order to provide a visual example of the mutual coherence of the general sensing matrix 𝐄̃, we show the absolute values of the Gram matrix 𝐄̃^⊤𝐄̃ normalized by its largest element in Fig. <ref>. It is seen from Fig. <ref> that the off-diagonal components are generally much smaller than the diagonal components, indicating that the mutual coherence is relatively small after designing the beamforming matrix. However, with an increase in the distance between the transmitting antennas and the target, the mutual coherence also increases correspondingly.
This is because, as the distance between the transmitting antennas and the target increases, the channel matrix becomes more ill-conditioned, i.e., closer to being singular. The correlation between adjacent lattice points in D then increases, which is reflected in a reduction of the reconstructed RPCD resolution. Correspondingly, the change of the mutual coherence with an increasing number of subcarriers is presented in Fig. <ref>. It is seen from Fig. <ref> that μ decreases with the increase of K, which indicates that the multi-frequency scheme enhances the sensing performance. When K reaches a certain threshold, μ scarcely changes, at which point the distance dominates the sensing performance. When the distance becomes larger, the mutual coherence of 𝐄̃ becomes larger. In order to demonstrate the change of the transmitter-target channel EDOF with an increasing number of sampling points in the domain D, we show the EDOF for the central-frequency subcarrier as a function of M in Fig. <ref>. We define L as the limit value of Ξ in (<ref>) when M approaches infinity. The EDOF increases almost linearly with M until it reaches an upper bound. As shown in Fig. <ref>, when the distance between the target and the transmitter becomes larger, the upper bound of the EDOF is smaller and is reached at a smaller M, indicating that the RPCD reconstruction quality is worse.§.§ RPCD Reconstruction Performance versus SNR We investigate the NMSE of the reconstructed RPCD versus the SNR at the receiver, as shown in Fig. <ref>. We set the distance between the target and the transmitter to d = 10, 20, and 40 m. It is seen from Fig. <ref> that the NMSE decreases with the increase of SNR for all d, and the best performance is achieved when d = 10 m. When the SNR is smaller than 12 dB, the NMSE is large and decreases slowly with the increase of SNR. In this stage, the differences among the NMSE values for different d are not pronounced. The NMSE decreases rapidly when the SNR increases from 15 dB to 20 dB. When the SNR reaches certain thresholds, the NMSE decreases to an error floor and hardly changes. At this point, the restriction on the RPCD reconstruction quality is no longer the noise at the receiver, but rather the imperfect mutual coherence of the sensing matrix. As the target gets farther away from the transmitter, the mutual coherence of the sensing matrix becomes larger, leading to a higher error floor in the RPCD reconstruction. Moreover, the error floor is reached at lower SNR levels. To illustrate the RPCD reconstruction results vividly, we present the reconstructed images of the target based on the relative permittivity (or conductivity) when d = 20 m for SNR = 10 dB and 30 dB, respectively. As shown in Fig. <ref>, both the reconstructed relative permittivity and conductivity distributions reproduce the general shape of the target. The RPCD reconstructed at SNR = 30 dB demonstrates the target's shape more accurately than the RPCD reconstructed at SNR = 10 dB. Moreover, the higher SNR results in more accurate reconstructed values of the relative permittivity and conductivity, which leads to a better representation of the target's real EM property. §.§ Material Classification Accuracy versus SNR Suppose the target may be composed of one of 10 possible kinds of materials, such as wood, concrete, etc., whose relative permittivity and conductivity are precisely known. The classification between air and the target using the K-means clustering method when SNR = 20 dB and d = 20 m is shown in Fig. <ref>. In Fig.
<ref>, each point represents a sampling point in the domain D. It is seen that the cluster centroid of the air, representing the average permittivity and conductivity values of the air, is close to the (1,0) point. On the other hand, the cluster centroid of the target material, representing its average permittivity and conductivity, is far from the (1,0) point. The material classification accuracy versus SNR for different d is shown in Fig. <ref>. It is seen that the classification accuracy increases with the increase of SNR for all d, and the best performance is achieved when d = 10 m. When the SNR is smaller than 12 dB, the accuracy is low, and the differences among the classification accuracies for different d are not pronounced. The accuracy increases rapidly when the SNR increases from 10 dB to 20 dB and reaches an upper bound at a certain threshold. Generally, a lower RPCD reconstruction error results in higher classification accuracy, according to Fig. <ref> and Fig. <ref>. § CONCLUSION In this paper, we propose a groundbreaking material sensing scheme in ISAC systems by utilizing OFDM pilot signals. We develop an end-to-end EM propagation model grounded in Maxwell's equations, enabling the reconstruction of relative permittivity and conductivity distributions within a defined detection region. The proposed multi-frequency material sensing method, which employs compressive sensing techniques, together with the optimized beamforming matrix design, is capable of achieving high-quality RPCD reconstruction and accurate material classification. This paper opens new possibilities for precise EM property sensing and material characterization in ISAC systems, with implications for applications demanding advanced material sensing capabilities.
"authors": [
"Yuhua Jiang",
"Feifei Gao",
"Shi Jin"
],
"categories": [
"eess.SP"
],
"primary_category": "eess.SP",
"published": "20231227062409",
"title": "Electromagnetic Property Sensing: A New Paradigm of Integrated Sensing and Communication"
} |
Monocular 3D Hand Mesh Recovery via Dual Noise Estimation Current parametric models have made notable progress in 3D hand pose and shape estimation. However, due to the fixed hand topology and complex hand poses, it is hard for current models to generate meshes that are well aligned with the image. To tackle this issue, we introduce a dual noise estimation method in this paper. Given a single-view image as input, we first adopt a baseline parametric regressor to obtain coarse hand meshes. We assume the mesh vertices and their image-plane projections are noisy, and can be associated in a unified probabilistic model. We then learn the distributions of the noise to refine the mesh vertices and their projections. The refined vertices are further utilized to refine the camera parameters in a closed-form manner. Consequently, our method obtains well-aligned and high-quality 3D hand meshes. Extensive experiments on the large-scale Interhand2.6M dataset demonstrate that the proposed method not only improves the performance of its baseline by more than 10% but also achieves state-of-the-art performance. Project page: <https://github.com/hanhuili/DNE4Hand>. § INTRODUCTION Recent advances in parametric human models <cit.> have facilitated human-centric applications, such as artificial-intelligence-generated content, human avatars, and virtual talking heads. With parametric models like <cit.>, reconstructing 3D hand meshes from images becomes plausible and convenient. This has attracted considerable attention, and extensive research has been conducted to improve the accuracy and speed of the parametric model fitting process <cit.>. Nevertheless, reconstructing well-aligned hand meshes from single-view images is still challenging for the following two reasons: (i) Challenging factors like depth ambiguity, self-/inter-hand occlusions, and complicated hand motions hinder estimation accuracy. (ii) Even worse, the pre-defined hand topology in parametric models further restricts hand mesh deformations, and consequently the parametric models are not flexible enough to represent various hands. Non-parametric hand models <cit.> seem to be a possible solution to the above issues. With their powerful representation learning ability <cit.>, current non-parametric methods can predict mesh vertices directly. This provides great flexibility, and in practice methods of this type usually yield better performance compared with parametric models.
However, without leveraging the structural prior of hands, these methods are prone to producing severe artifacts and broken meshes. It is natural to consider combining parametric and non-parametric models to leverage the structural advantages of the former and the flexibility of the latter. Methods belonging to this paradigm <cit.> have been proposed recently. However, to the best of our knowledge, most current methods adopt a deterministic approach, such as predicting the deviations of vertices and parameters <cit.>. This makes it hard for them to explore the solution space thoroughly. Note that recovering 3D hand meshes from monocular images is an ill-posed problem, which means multiple meshes can be associated with the same 2D observation. Therefore, a deterministic model may be ineffective for this task. To tackle the above issues, we propose to address monocular hand mesh recovery in a probabilistic framework. In particular, given a monocular input image, we adopt an off-the-shelf parametric model as the baseline to obtain coarse hand meshes. We then refine the coarse meshes by jointly estimating the noise of the vertices and their image-plane projections, since the two are highly related. We design a progressive framework to do so, in which image-aligned features are leveraged to estimate the parameters governing the noise distributions. With the estimated distributions, we adopt the reparameterization trick to generate multiple samples and estimate their confidence. In this way, we can leverage the sample with the highest confidence to optimize the hand vertices and their projections. Moreover, given the refined vertices and their 2D coordinates, we also propose a closed-form solution to refine the camera parameters. Consequently, our method can generate hand meshes that are well aligned with images. Our experiments on the Interhand2.6M dataset show that the proposed method boosts the quality of the coarse meshes significantly and achieves state-of-the-art performance. Our contributions can be summarized as follows:∙ To the best of our knowledge, this paper proposes the first probabilistic 2D and 3D noise estimation framework for the monocular hand mesh recovery task.∙ An effective network architecture is introduced to realize the dual noise estimation process. This network leverages image-aligned features and multiple samples to enhance the coarse meshes generated by baseline parametric models.∙ The proposed method is validated on the large-scale Interhand2.6M dataset and outperforms conventional methods. § RELATED WORK§.§ Parametric Hand Models Parametric models for 3D hand meshes have gained considerable attention because they provide a convenient structural/geometric prior of hands. Extensive methods have been proposed for fitting parametric models, such as attention modules <cit.>, inverse kinematic solvers <cit.>, hand disentanglement <cit.>, and 2D-3D projection <cit.>. A comprehensive review of parametric models can be found in <cit.>. Parametric models can be roughly divided into regression-based methods and optimization-based methods. Regression-based methods estimate the parameters directly, while optimization-based methods usually involve an online optimization process. Parametric models are restricted by their pre-defined hand templates and are inflexible in modeling hands of various poses and geometry. §.§ Non-parametric Hand Models Early non-parametric models aim at predicting hand joints from depth maps and point clouds <cit.>.
With the recent advances in network architectures, non-parametric models that predict 3D hand vertices have become popular. For instance, transformers and graph neural networks <cit.> have been proposed for mesh reconstruction. Since non-parametric models do not rely on a fixed hand topology, they are more flexible and easier to align with images. However, without the structural prior of hands, non-parametric models also suffer from distorted and spiky reconstruction results. §.§ Hybrid Hand Models It is reasonable to construct hybrid models that leverage the advantages of both parametric and non-parametric models. Several pioneering studies have been conducted to achieve this goal. For example, IntagHand <cit.> utilizes the topology of MANO to construct a graph representation of vertices. It also defines graph attention modules to model vertex dependencies. The method of <cit.> incorporates the MANO model into a point cloud network for pose estimation. Another approach <cit.> proposes to estimate joints first via non-parametric models and then infer MANO parameters based on the joints. The work of <cit.> introduces a network to predict the relative translation between two MANO hands. The proposed method differs from traditional methods in its probabilistic unified modeling of vertices and their 2D coordinates. With the proposed method, we can achieve mutual and progressive refinement between vertices and 2D coordinates. §.§ Implicit Hand Models Besides explicit representations, recent studies also explore implicit functions (e.g., the signed distance function) to represent 3D hands. A notable advantage of implicit functions is that they are continuous and disentangled from spatial resolution. This advantage indicates that implicit functions can generalize to arbitrary hands. Several implicit hand models have been proposed, such as LISA <cit.>, AlignSDF <cit.>, Im2Hands <cit.>, HandNeRF <cit.>, and Hand Avatar <cit.>. However, compared with explicit models, the computational cost of implicit models is higher. § METHODOLOGY §.§ Overall Architecture The architecture of our method is shown in Figure <ref>. It consists of a coarse mesh fitting stage and a refinement stage. Specifically, given a single-view image as input, we adopt ResNet-50 <cit.> as the image encoder to extract feature maps 𝐅 of size H × W × C. To better leverage the geometric and semantic information in the image, we adopt a 2D convolutional block to predict five auxiliary maps, including the depth map 𝐚_1∈ℝ^H × W, the normal map 𝐚_2∈ℝ^H × W × 3, the joint heat map 𝐚_3∈ℝ^H × W × 42 (each hand has 21 joints), the DensePose map <cit.> 𝐚_4∈ℝ^H × W × 3, and the part semantic map 𝐚_5∈ℝ^H × W × 34 (16 parts for each hand, plus one background class). We merge 𝐅 and these five auxiliary maps via channel-wise concatenation followed by another 2D convolutional block. For conciseness, we still denote the merged feature maps as 𝐅. 𝐅 is then fed into a baseline fitting model to predict the parameters of MANO, including the pose coefficients θ∈ℝ^16× 6 <cit.>, the shape coefficients β∈ℝ^10, and the intrinsic camera parameters 𝐜∈ℝ^2× 2 for each hand. Inspired by <cit.>, we utilize 2D convolutions to predict parameter maps and accumulate the parameters via spatial softmax. With the fitted parameters, we generate the initial coarse hand meshes. To refine the coarse meshes and better align them with images, we introduce the dual noise estimation (DNE) module. The core of DNE is to conduct mesh refinement and alignment jointly in a denoising process.
To this end, the DNE module first refines the image-plane projections of the coarse vertices. Then the DNE module obtains image-aligned features via interpolation and uses them to estimate 3D vertex deviations. Furthermore, based on the correspondences between the 3D vertices and their 2D projections, the DNE module also adopts a closed-form solution to refine the camera parameters. The detailed architecture of the DNE module is presented in the next section. The above refinement process is conducted progressively via multiple DNE modules, and in practice we find that more DNE modules yield more significant performance gains.§.§ Dual Noise Estimation Formulation. Given an arbitrary vertex v∈ℝ^3 and its corresponding 2D coordinate u∈ℝ^2, our proposed DNE module can be formulated as follows:Π (v + ε_3d, c) = u + ε_2d, where Π denotes the 2D projection of v given the intrinsic camera parameters c. Note that u is not necessarily obtained by Π. As we demonstrate later, u can also be regressed from the image feature maps directly. ε_3d and ε_2d are the 3D and 2D noise terms that need to be estimated. To ensure our network is differentiable, we assume both ε_3d and ε_2d follow a certain distribution, on which we can apply the reparameterization trick <cit.>. In this paper, we adopt the Gaussian distribution to model ε_3d and ε_2d, namely, ε_3d∼𝒩(μ_3d, γ|μ_3d| + δ), ε_2d∼𝒩(μ_2d, γ|μ_2d| + δ), where γ, δ > 0 are hyperparameters that control the scale and margin of the noise, respectively. Based on Eq. (<ref>), we can sample multiple ε_3d and ε_2d to better explore the solution space during training, and set ε_3d = μ_3d and ε_2d = μ_2d for inference. This makes the proposed method differ from traditional methods that only estimate deterministic 2D/3D deviations. Our task now turns to estimating appropriate μ_3d and μ_2d. μ_2d Estimation. Image-aligned features obtained via feature interpolation are leveraged to estimate μ_2d. In particular, we consider two types of 2D coordinates in the interpolation process, including (i) the 2D projections of the vertices (i.e., Π (v, c)) and (ii) those regressed directly from the image feature maps. The intuition behind such a combination is that features obtained with the first type maintain the hand structure and are robust to outliers, while those of the second type are more flexible. To regress 2D coordinates from the image feature maps 𝐅, we reshape 𝐅 to (HW)× C and adopt two consecutive multilayer perceptrons (MLPs) to transform 𝐅 first to N × C and then to N × 3, where N=778 is the number of vertices of one MANO hand. Let 𝐟_p and 𝐟_r denote the C-dimensional interpolated feature vectors obtained with the projected and the regressed coordinates, respectively. We consider the following transformation ϕ: ℝ^2C→ℝ^2 to obtain the per-vertex mean of the 2D noise:μ_2d = ϕ(𝐟_p, 𝐟_r). In our network, ϕ is implemented efficiently via feature concatenation followed by an MLP. μ_3d Estimation. We also extract image-aligned features from 𝐅 to estimate μ_3d. The updated 2D coordinate u + ε_2d is used for feature interpolation. Considering that image-aligned features might be insufficient for tackling depth ambiguity and occlusions, we propose a simple yet effective method to alleviate this problem. As shown in Figure <ref>, we first create a voxel grid from the hand meshes (with normalized 3D coordinates), so that features of spatially close vertices can be aggregated into the same voxel.
We then conduct max pooling along each of the three axes of the grid to obtain the three-view (front, lateral, and top) projections of the voxel features. The above multi-view feature projections help to achieve finer feature disentanglement compared with a single-view representation. Similar to Eq. (<ref>), the per-vertex mean of the 3D noise can be estimated via a transformation φ: ℝ^3C→ℝ^3 as follows:μ_3d = φ(𝐟_front, 𝐟_lateral, 𝐟_top). We also adopt an MLP to implement φ. Note that, except for the MANO model, we do not impose any other constraint on the topology of the vertices. This allows us to seamlessly adopt architectures that are more complicated than MLPs (e.g., graph attention modules) to realize ϕ and φ. Camera Correction. Last but not least, the updated v and u are used to refine the intrinsic camera parameters. We adopt the orthographic camera model, and hence Π (v, c) can be defined as follows:Π (v, c) = sv(x,y) + t, where v(x,y) denotes the x and y coordinates of v. s=(s_x, s_y) and t = (t_x, t_y) are the scaling factors and the principal point translations of the camera from the normalized device coordinate space to the image space[<https://pytorch3d.org/docs/cameras>]. Hence c can be represented as c = [ [ s_x,t_x; s_y,t_y ]] and the projection process in Eq. (<ref>) can be written as u_x = s_xv_x + t_x, u_y = s_yv_y + t_y. Substituting Eq. (<ref>) into Eq. (<ref>), we estimate the intrinsic camera parameters by minimizing the following objective:Σ_n = 1^N||u_n - sv_n(x,y) - t||_2^2 + ξ (||s||_2^2 + ||t||_2^2), where ξ > 0 is a hyperparameter for regularization. Here we omit ε_3d and ε_2d and use v and u to denote the updated vertex and its 2D coordinate. Eq. (<ref>) can be solved via ridge regression <cit.> independently for each axis, with the closed-form solution: c'_a = (V_a^⊤V_a + ξI)^-1V_a^⊤u_a, where, for axis a ∈{x, y}, u_a ∈ℝ^N stacks the 2D coordinates of all vertices along that axis, V_a = [v_a, 1] ∈ℝ^N × 2 is the design matrix formed by the corresponding vertex coordinates and a constant column (so that both the scaling factor s_a and the translation t_a are recovered), and I is a 2 × 2 identity matrix. §.§ Optimization The proposed network is fully differentiable and is trained by minimizing the following loss function:L = L_aux + L_MANO + L_v, where L_aux denotes the loss for the five auxiliary tasks. L_aux is formulated as follows:L_aux = Σ_i = 1^4 λ_i||a_i - a_i^g||_1 + λ_5CE(a_5, a_5^g), where ||·||_1 is the ℓ_1 loss and CE is the cross-entropy loss. Variables marked with the superscript g are ground truths. L_MANO is the loss term defined on the shape and pose parameters of MANO:L_MANO = λ_β||β - β^g||_1 + λ_θ||θ - θ^g||_1. L_v targets the mesh vertices and their 2D projections and is defined as follows:L_v = λ_3DΣ_m = 1^M Σ_r = 1^R ||v_m,r - v^g||_1+λ_2DΣ_m = 1^MΣ_r = 1^R ||Π (v_m,r,c_m) - Π (v_m,c^g)||_1, where M is the number of DNE modules and R is the number of random samples in each DNE module. λ_1, ..., λ_5, λ_β, λ_θ, λ_2D, λ_3D are user-specified loss weights. § EXPERIMENT§.§ Dataset and Evaluation Metrics Dataset. Our experiments are conducted on the large-scale Interhand2.6M dataset <cit.>, which consists of about 1.3M training images and 0.8M test images. We use all single-hand (SH) and interacting-hand (IH) images in the training set for training. All images are cropped and resized to 256×256 based on the bounding boxes provided by the dataset. We adopt the widely used mean per-joint position error (MPJPE) and mean per-vertex position error (MPVPE) as the evaluation metrics.§.§ Implementation Details Our networks are trained with 4 GeForce RTX 4090 graphics cards. We adopt the Adam optimizer <cit.> with a batch size of 120 and 30 training epochs. Each epoch takes about five hours.
The initial learning rate is 10^-3 and is scaled by 0.5 after each epoch until it reaches 10^-6. Detailed settings of the hyperparameters are given in the supplemental material on our project page. We adopt several data augmentation methods to improve the generalization ability of the proposed network following <cit.>, including random image shifting in [-10, 10] pixels, random rotations in [-90, 90], random resizing with a scaling factor in [0.9, 1.1], horizontal flipping, and adding Gaussian noise ∼𝒩(0, 0.3) to images. §.§ Comparison with State-of-the-arts Table <ref> reports the MPJPE performance of the proposed method against cutting-edge methods. As we have emphasized above, our dual noise estimation process does not impose any constraint besides the MANO topology, and hence it can be combined with different methods. To validate this, we implement two network variants, i.e., one with the proposed multi-view MLPs (denoted as Ours-MLP) and the other with graph attentions (denoted as Ours-GraphAttn). From Table <ref> we can see that these two variants achieve state-of-the-art performance by reducing the MPJPE on the Interhand2.6M dataset from 9.63 to 8.78 (Ours-MLP) and 8.40 (Ours-GraphAttn). This indicates that the proposed dual noise estimation is a worthwhile strategy for current hand mesh recovery methods. Figure <ref> demonstrates visual examples of interacting hand meshes reconstructed by our method (Ours-MLP) and the state-of-the-art IntagHand method <cit.>. It is reasonable that IntagHand can generate well-aligned results because it is a non-parametric model. However, without the geometric prior of hands, it still exhibits artifacts near the fingertips or inaccurate 2D projections (e.g., the middle example in Figure <ref>). On the contrary, the proposed method adopts a coarse-to-fine paradigm, in which the coarse but relatively reliable hand meshes are exploited and refined progressively. Hence the reconstruction results of the proposed method are more stable and accurate. §.§ Ablation Study To provide a comprehensive analysis of the proposed method, we conduct several ablation experiments in this section. Considering that network training on the whole training set of Interhand2.6M is computationally expensive, we only use 10% of the training data in our ablation studies, and the whole test set is used for evaluation. Other experimental settings remain unchanged in our ablation studies. Effects of 3D noise estimation. We first evaluate the effectiveness of 3D noise estimation. This experiment considers three variants of the proposed method: the parametric fitting baseline, the baseline augmented with 3D noise estimation (denoted as + ε_3d), and the full DNE module. The experimental results are reported in Table <ref>. + ε_3d brings notable performance gains to the baseline, as it reduces the MPVPE of the baseline on the single-hand subset/interacting-hand subset/whole test set from 11.45/14.51/12.53 to 10.02/12.89/11.02. This suggests that the proposed multi-view MLP based 3D noise estimation module is effective. The full DNE module further improves the performance of the baseline and outperforms the variant with 3D noise estimation only on all test subsets. We attribute this to the proposed 2D noise estimation and camera correction methods, as they help to obtain features that are more aligned with images.
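As a side note on the camera correction step credited above, a minimal PyTorch sketch of the closed-form per-axis ridge-regression update is given below. This is our own illustration under the paper's orthographic camera model; the function name refine_camera and the default ξ value are assumptions.

import torch

def refine_camera(v_xy, u_xy, xi=1e-3):
    # v_xy: (N, 2) refined vertex x/y coordinates; u_xy: (N, 2) refined 2D targets.
    cams = []
    for a in range(2):                                   # solve x- and y-axis separately
        V = torch.stack([v_xy[:, a], torch.ones_like(v_xy[:, a])], dim=1)  # [v_a, 1]
        A = V.T @ V + xi * torch.eye(2, device=V.device)
        c_a = torch.linalg.solve(A, V.T @ u_xy[:, a])    # recovers (s_a, t_a)
        cams.append(c_a)
    return torch.stack(cams)                             # rows: (s_x, t_x), (s_y, t_y)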
Effects of 2D noise estimation. We are also interested in the effects of each proposed component on the 2D predictions, as generating well-aligned results is one of the major goals of this paper. Besides the three variants used in the previous experiment, we consider another variant that estimates 2D noise only (denoted as + ε_2d). In this ablation study, we use the 2D MPVPE as the metric, and the results of these four variants are summarized in Table <ref>. We observe that both + ε_2d and + ε_3d outperform the baseline, and the performance margin of the latter is more obvious. This is reasonable, as the receptive field of a vertex in the 3D noise estimation process (max pooling on three-view feature maps) is larger than that in the 2D case (only features interpolated at the projected and the regressed coordinates). Consequently, the 3D noise estimation module can leverage more information for refinement. Utilizing 2D and 3D noise estimation jointly achieves the best performance. The full DNE module reduces the MPVPE of the baseline by more than 30%/25%/27% on the three test sets, respectively. These results are sufficient to validate that the proposed method successfully boosts the performance of the baseline in image-plane alignment. Effects of the number of DNE modules. Finally, as we have mentioned above, the mesh refinement process can be conducted progressively via multiple DNE modules. To verify this, we compare the performance of using a single DNE module (M=1) and that of using three DNE modules (M=3). The experimental results are reported in Table <ref>. These results validate that higher performance gains can be obtained with more DNE modules. Visual comparison of mesh refinement. The visual comparison between the meshes before and after refinement is shown in Figure <ref>. We can see that the meshes refined by the proposed method are more accurate. This again validates that leveraging the advantages of parametric and non-parametric models is worthwhile. Visual comparison of camera correction. We also compare the reconstruction results of the proposed method with and without camera correction in Figure <ref>. From this figure, we can see that leveraging camera correction helps to generate better results.§ CONCLUSION In this paper, we propose a novel method leveraging dual noise estimation to recover 3D hand meshes from single-view images. Our method models the noise of mesh vertices and their projections on the image plane in a unified probabilistic model. We implement the proposed framework via an end-to-end trainable network with two effective estimation branches. Furthermore, our framework can also refine the intrinsic camera parameters efficiently via ridge regression. Consequently, our method can generate hand meshes that are well aligned with images. Our experiments and ablation studies on the Interhand2.6M dataset demonstrate the effectiveness of our method. Our current method is not designed especially for single-hand or interacting-hand images. In the future, we plan to incorporate a cross-hand noise model to further enhance the proposed method. We will also consider other association strategies for vertices and their 2D coordinates, such as differentiable neural rendering.§ ACKNOWLEDGMENTS This work was supported in part by the National Key R&D Program of China under Grant No. 2020AAA0109700, Guangdong Outstanding Youth Fund (Grant No. 2021B1515020061), National Natural Science Foundation of China (NSFC) under Grant No. 61976233, No. 92270122, No. 62372482 and No.
61936002, Mobility Grant Award under Grant No. M-0461, Shenzhen Science and Technology Program (Grant No. RCYX20200714114642083), Shenzhen Science and Technology Program (Grant No. GJHZ20220913142600001), Nansha Key R&D Program under Grant No. 2022ZD014, and Sun Yat-sen University under Grant No. 22lgqb38 and 76160-12220011.
"authors": [
"Hanhui Li",
"Xiaojian Lin",
"Xuan Huang",
"Zejun Yang",
"Zhisheng Wang",
"Xiaodan Liang"
],
"categories": [
"cs.CV"
],
"primary_category": "cs.CV",
"published": "20231226072101",
"title": "Monocular 3D Hand Mesh Recovery via Dual Noise Estimation"
} |
Some things are more Cringe than others: Preference Optimization with the Pairwise Cringe Loss Jing Xu, Andrew Lee, Sainbayar Sukhbaatar, Jason Weston (Meta) Practitioners commonly align large language models using pairwise preferences, i.e., given labels of the type response A is preferred to response B for a given input. Perhaps less commonly, methods have also been developed for binary feedback, i.e., training models given labels of the type response A is good or bad. We show how an existing performant binary feedback method, the Cringe Loss <cit.>, can be generalized to the pairwise preference setting using a simple soft margin extension. Pairwise Cringe Loss is straightforward to implement and efficient to train, and we find it outperforms state-of-the-art preference optimization algorithms such as PPO and DPO on the AlpacaFarm benchmark.§ INTRODUCTION Aligning large language models (LLMs) after pre-training can give large gains in their performance on downstream tasks for users <cit.>. Exactly how to implement this alignment depends on the labels one collects. Given positive examples of correct behavior, one can perform supervised fine-tuning (SFT) using standard likelihood-based training. Given both positive and negative examples (binary feedback), one can use methods such as unlikelihood training on the negative examples <cit.>, or the more performant Cringe Loss <cit.>. However, a more common approach than using binary feedback, popularized by work such as <cit.>, is to collect pairwise preferences of the type response A is better than response B for a given input. In this case one can use methods such as PPO <cit.>, DPO <cit.> and other variants. In this work we seek to compare SFT, binary feedback and pairwise preference algorithms, and to ask the question: can one convert existing binary feedback algorithms to use pairwise preference data? In particular, the Cringe Loss is a method for binary feedback, which we show can be generalized to the pairwise preference case. The Cringe Loss works as follows: positive examples use the standard likelihood training loss, while for a given negative example it contrasts each token in the negative sequence against other likely tokens – to encourage the negative sequence to no longer be the top-ranked sequence. After training on the initial feedback data, the method is then iterated by labeling data using the improved model, which was shown to improve results further. The Cringe Loss was shown to perform well with binary feedback data compared to competing methods, such as SFT, unlikelihood loss and best-of-N reranking <cit.>, and for improving large-scale dialogue systems <cit.>. However, collecting and using pairwise preferences for training has currently proven a more popular approach to developing aligned LLMs. We thus explore generalizing the Cringe Loss to the pairwise preference setting. We hence develop the Pairwise Cringe Loss, using a differentiable margin-based loss on the pair of responses.
In particular, we add a margin-based multiplier to the Cringe Loss to turn it on or off depending on the probability gap between the pair. When the preferred response A becomes much more likely than response B, the Cringe Loss is turned off so that the model capacity is better spent on pairs that are closer in probability. We experimentally compare competing approaches, including binary and pairwise variants of the Cringe Loss. The first task is to reduce repetitions <cit.>, which can be measured accurately and so gives us more control. In this task, we find that Pairwise Cringe outperforms Binary Cringe, and has performance similar to DPO, while the Pairwise Cringe generations have slightly better quality. Next, we employ a more realistic setup using the AlpacaFarm <cit.> benchmark, which provides pairwise preference data for general instruction following. Pairwise Cringe Loss again outperforms the Binary Cringe variant, in addition to SFT, and more importantly outperforms the state-of-the-art methods DPO and PPO. Pairwise Cringe Loss is simple to implement and efficient to train, and is therefore a strong candidate for instruction tuning and other alignment tasks.§ PREFERENCE LEARNING WITH THE CRINGE LOSS We first review the binary feedback-based (standard) Cringe Loss, and then introduce its generalization to the pairwise preference learning case. §.§ Standard Cringe Loss The Cringe (ContRastive Iterative Negative GEneration) Loss is an approach developed for the binary feedback learning case, given two sets of sequences: positive sequences y, and negative sequences y̅. It is common for them to be responses to specific input sequences: x→ y, x̅→y̅, i.e., given prompts or instructions x. Note that the positive and negative labels only apply to the response portions. The optimization objective consists of two terms: the cross-entropy loss for the positive sequences and the Cringe Loss for the negative sequences. The former is used as standard, i.e., for all tokens y_t from a positive sequence y: ℒ_CE(x,y) = -log p([x, y]) = -log p(x) - log p(y | x). This will increase the likelihood of generating the positive responses. Note that the loss includes the input tokens x, but we can choose to only train on (update) the response portion y as well. For a given negative sequence y̅, the Cringe Loss contrasts each negative token y̅_t in the sequence against a positive token. It was argued in <cit.> that methods such as Unlikelihood <cit.>, which simply push down the probability of negative tokens, may inadvertently push up the probability of low-quality or rare tokens at that sequence position, because there is no control over that effect. The Cringe Loss controls for this with its contrastive loss, which instead encourages an alternative highly likely token to replace a given penalized token. However, in the training data one is typically provided a negative sequence, but one does not know, for any given negative token in the sequence, what an alternative positive token should be. The Cringe Loss thus proposes to sample an alternative positive token from the model's current top-k predictions (omitting the negative token, if it is in the top-k, so that the same negative token is not chosen as the positive example). Let s_t[i] be the model output score (input to the final softmax) at time t corresponding to token i. First we select the top-k scores {s_t^1, ..., s_t^k} from all scores s_t[i], excluding the negative token's score s_t[y̅_t].
Next we sample according to the categorical distribution constructed through the softmax over these top-k scores: s_t^* ∼Softmax(s_t^1, ..., s_t^k). Now we can use s_t^* as an alternative positive token. The contrastive loss is then:ℒ_Cr(x̅,y̅) =- ∑_t logexp(s_t^*)/(exp(s_t^*) +exp(s_t[y̅_t])), which pushes the negative token score down below that of the selected positive token. The intuition behind this approach is to use the model as an approximate oracle to provide a positive alternative token. Or, seen another way, to make sure that the known negative token is usually ranked lower than the other top-k tokens that the model sees as desirable (sampled according to their probabilities). This process can be seen in the right portion of <ref>, where the negative token “discharge” is contrasted against a sampled positive token “absorb”. The final standard (binary feedback) Cringe Loss objective function for a single iteration is thus: ℒ_Bin(x, y, x̅, y̅) = ℒ_CE(x, y) + αℒ_Cr(x̅, y̅), where α is a tunable hyperparameter that controls the impact of the negative examples. Iterative Training The negative sequences used for training either come from (i) human annotations, or (ii) access to a classifier (e.g., trained from the human annotations), which can hence be seen as a reward model. The latter can be used to iteratively label the model's own generations and apply the Cringe Loss to those examples as well. <cit.> and <cit.> showed these iterations improve the models further.§.§ Pairwise Cringe Loss When given pairwise preference data, we are provided with samples of the form (x → y^w, y^l), where the “winning” sequence y^w has been labeled as preferred compared to the “losing” sequence y^l for the same input x. For example, in instruction tuning tasks, such data is typically presented as two responses to the same instruction x, where one is preferred to the other as more helpful and/or harmless. Let us define a margin between the two responses as M(x, y^w, y^l) = log p(y^w|x) - log p(y^l|x). A negative margin means that the model is more likely to generate the losing sequence than the winning sequence, which is undesirable. In that case, we can employ the binary Cringe Loss from <ref> to push down the losing sequence's probability while pushing up the winning sequence's. In contrast, when the margin is sufficiently large, the losing sequence is much less likely, so it becomes less important to push the pair further apart. Therefore, we construct a loss that applies the binary Cringe Loss only when the margin is small, using a sigmoid gate:g(x, y^w, y^l) = σ([b - M(x, y^w, y^l)] / τ), ℒ_Pair(x,y^w, y^l) = g(x, y^w, y^l)ℒ_Bin(x,y^w,x,y^l). Here, the gating function g uses the sigmoid σ to smoothly switch off the binary Cringe Loss for larger margins. Its temperature τ controls the smoothness of this transition, while the bias b determines how large a margin needs to be for the binary Cringe Loss to switch off.
For example, a small b value means the gating will turn off even for small margins, and the loss will thus be less aggressive in pushing the pairs apart. In our experiments, we will also compare it against a hard step function (a so-called “hard margin”, rather than a “soft margin”).

Note that the gradient from the loss in <ref> has two pathways. The first goes through the sigmoid multiplier and acts to increase the margin, which depends only on the sequence-level probabilities. The second gradient pathway is through the binary Cringe Loss and operates on token-level probabilities. This loss can therefore be viewed as combining elements of methods like DPO and PPO, which operate only on sequence-level probabilities, with methods like Cringe and Unlikelihood, which manipulate token-level probabilities – while extending the latter methods to the pairwise preference case.

We note that we did not add a KL regularization term to our training objective, as is used in several other methods <cit.>, as we found experimentally that our method already performs well and did not display degradation without this term. However, it is possible that in certain settings adding such a term could improve performance, and hence it could be considered. We give an overall summary of the loss in <ref>. Code for implementing the Pairwise Cringe Loss is given in <ref>.

Iterative Training  Like DPO, Pairwise Cringe Loss can be trained without a reward model given pairwise preference data using the recipe described above, which constitutes the first iteration of Cringe training. However, like the binary Cringe Loss, we can employ Pairwise Cringe Loss to perform iterative training. Our overall training approach, which we call Pairwise Cringe Optimization (PCO), is summarized in <ref>. Given a reward model that predicts preferences, the method is applied in an iterative manner (a sketch of the pair-construction step appears after this list):
* Train with the Pairwise Cringe Loss on the original preference data.
* Generate new responses with the newly trained model (multiple responses per prompt x).
* Label those responses with the reward model, and choose new preference pairs.
* Train with the Pairwise Cringe Loss on a combination of the original preference data and the newly labeled data.
Steps 2-4 can be repeated multiple times; however, in our experiments in this paper we only perform these 4 steps (which we call 2 iterations). To construct pairs we generate N=4 responses per input, and then choose the best- and worst-scoring responses as a single pair using a scalar reward model (that assigns scores individually per response), discarding the other generated responses. However, other methods for assigning pairs are certainly possible that we have not explored.
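The following is a minimal sketch of the pair-construction step (steps 2-3 above); the `generate` and `reward_model` callables are hypothetical stand-ins for the actual model sampling and the scalar reward model, not names from our codebase.

import random

def build_preference_pairs(prompts, generate, reward_model, n_samples=4):
    pairs = []
    for x in prompts:
        # Step 2: sample N candidate responses for the prompt.
        candidates = [generate(x) for _ in range(n_samples)]
        # Step 3: score each response individually with the scalar reward
        # model, then keep only the best- and worst-scoring as one pair.
        scored = sorted(candidates, key=lambda y: reward_model(x, y))
        y_l, y_w = scored[0], scored[-1]   # loser, winner
        pairs.append((x, y_w, y_l))        # remaining candidates discarded
    return pairs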
§ EXPERIMENTS

We first conduct experiments in <ref> on a repetition mitigation task from <cit.> in order to compare Pairwise Cringe Loss to the original Cringe Loss, as well as to DPO and other methods. We then compare against preference optimization methods for general instruction tuning, including PPO and DPO, on the AlpacaFarm benchmark in <ref>.

§.§ Reducing Repetitions

Training Datasets and Process  Model-generated completions exhibit sequence-level repetitions, especially with deterministic decoding <cit.>. Pairwise Cringe is trained by first supervised fine-tuning GPT2-Medium <cit.> on the BASE data, a large web-based corpus <cit.>, to predict the next sentence. To construct preference data for reducing repetitions, one then labels the generations automatically according to whether they contain repeating n-grams or not. We generate pairs of outputs from the supervised fine-tuned GPT2 SFT model using beam blocking decoding (to ensure there are no repetitions) and greedy decoding (which may contain repetitions), and only keep the pair if the greedy-decoded generation contains at least one repeating n-gram (either within the generated sequence itself or as a repeat of the context). The pairwise preferences then use the beam-blocked generation as the “winning” preferred output, and the greedy decoding with n-gram repeats as the “losing” less preferred output. In our experiments we fix n=3. We collect in total 49,285 pairs for this task. We also train DPO using the same procedure, as well as Binary Cringe, which treats the pairwise preferences directly as good and bad examples (rather than as a pair, see <ref>). After training, we then generate from a given model using greedy decoding on the BASE test set, and measure the number of repeating n-grams in the generation (either within the generated sequence itself or as a repeat of the context), as well as F1 against the human responses, in order to measure quality; a sketch of the repetition check is given below.
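A minimal sketch of the repetition check (our assumed implementation of it, not the exact code used for the benchmark) that counts repeating n-grams in a generation, optionally also counting repeats of n-grams already present in the context:

from collections import Counter

def count_repeating_ngrams(tokens, context_tokens=(), n=3):
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    context = {tuple(context_tokens[i:i + n])
               for i in range(len(context_tokens) - n + 1)}
    counts = Counter(ngrams)
    # An n-gram "repeats" if it occurs more than once within the generation,
    # or if it also appears in the context.
    repeats = sum(c - 1 for g, c in counts.items() if c > 1)
    repeats += sum(c for g, c in counts.items() if g in context)
    return repeats

# Keep a (beam-blocked, greedy) pair only if the greedy output repeats:
# keep = count_repeating_ngrams(greedy_tokens, context_tokens, n=3) >= 1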
Results  Results are given in <ref>, where the human baseline Repeat@3-gram (measured on the responses in the dataset) is 0.892, whilst the GPT2 SFT model has serious repetition issues for the same contexts, obtaining a Repeat@3-gram of 15.21 (meaning on average there are 15 n-gram repeats per response) and an F1 of 0.1165. Binary Cringe, DPO and Pairwise Cringe all significantly improve over the SFT baseline model in terms of repetitions, with DPO and Pairwise Cringe providing Repeat@3-gram values close to the human baseline, and Binary Cringe slightly trailing. In terms of F1, Pairwise Cringe outperforms Binary Cringe significantly, and is slightly higher than DPO as well. DPO and Pairwise Cringe provide F1 higher than the SFT baseline, whereas Binary Cringe does not. Both Binary and Pairwise Cringe are run with two iterations, following <ref>. We can also evaluate the performance of the iteration-1 models. Iteration 1 of Binary Cringe yields a Repeat@3-gram value of 1.18 and an F1 of 0.1125. Iteration 1 of Pairwise Cringe yields a Repeat@3-gram value of 1.39 and an F1 of 0.1236. Hence, for both models iteration 1 has worse F1 than the final models.

§.§ AlpacaFarm Evaluation

AlpacaFarm <cit.> is a framework for benchmarking alignment algorithms that learn to follow user instructions. It provides training data in the form of pairwise preferences over responses given to general instruction-following tasks. Additionally, it comes with an automatic evaluation procedure using LLMs that was shown to have high agreement with human annotators. This framework was provided in order to evaluate state-of-the-art methods (PPO, DPO, best-of-n, expert iteration, and more) – and to compare them to new methods in a controlled environment. In the existing results reported, several of those state-of-the-art methods that learn from preferences are shown to substantially outperform supervised fine-tuning.

Training Datasets and Process  In line with the training procedure of the benchmark PPO method with human data previously trained in AlpacaFarm <cit.>, we leverage the pairwise human preference annotations provided by AlpacaFarm, as well as the identical train sets used in its different RLHF stages:
* SFT data: 10k instruction-following demonstrations (x, y) intended for supervised fine-tuning the base LLM to be used in subsequent steps.
* Pairwise-Preference (PREF): 10k instructions with pairwise human feedback data (x, y^w, y^l) collected as part of AlpacaFarm. We note that, to compare to the standard (binary) Cringe Loss, we also convert pairwise preferences to binary feedback by assigning a positive label to preferred outputs and a negative label to less preferred ones.
* Unlabeled: 20k unlabeled instructions x without any responses. We use these for the training iterations of Pairwise Cringe, see <ref> (bottom).
As with the AlpacaFarm baselines we compare against, we start with a Llama-7b model supervised fine-tuned on the SFT set of instruction-following demonstrations. We then take pairs of human preferences from the PREF set and further fine-tune the SFT 10k model with different losses for the models we compare, yielding DPO, Binary Cringe and Pairwise Cringe. For the Cringe models, iterative training is performed using the simple strategy described in <ref>. We start by using the model trained on the PREF set (which we call iteration 1) to generate k responses for each prompt from the Unlabeled set. These are scored using the provided AlpacaFarm reward model “reward-model-human” used in AlpacaFarm RLHF training. We then train the second iteration using both the PREF dataset and the newly derived preferences from the Unlabeled set. For both iterations, we start training from the model fine-tuned on the SFT data. Here we fix k=4.

Evaluation Dataset  During evaluation, we follow the AlpacaFarm evaluation setup, which employs LLM-based evaluation that selects the superior of two model outputs over 805 prompts, and reports the overall win rates of candidate models against the Davinci-003 model outputs. The 805 instructions in the AlpacaFarm evaluation set are sourced from Open Assistant, Anthropic, Vicuna, Koala and self-instruct evaluations to test models' abilities to follow general user instructions. These simulated win rates have been shown to have high agreement with human annotations, validated by 20k annotations <cit.>. In our experiments, we report results averaged over 3 seeds.

Main Results  Our main results are given in <ref>. As reported in <cit.>, SFT training alone obtains a win rate of 36.7 (SFT 10k); even training with 52k examples only improves this to a win rate of 39.2 (SFT 52k). These results are outperformed by all the pairwise preference optimization approaches using human preference data. We report the result for the existing AlpacaFarm PPO model trained on human preferences, which yields a win rate of 48.5. This outperforms Binary Cringe, which obtains 47.7. DPO outperforms both of those methods, achieving 50.2. However, Pairwise Cringe obtains the best performance, with a win rate of 54.7.

§.§.§ Ablations and further results

In <ref> we provide additional results.
Pairwise Cringe outperforms Binary Cringe  First, we find that Pairwise Cringe comfortably outperforms Binary Cringe, which uses the same pairwise preferences converted to binary feedback, whether training for 1 or 2 iterations. For example, in iteration 1 of training, Binary Cringe obtains a win rate of 45.9, while Pairwise Cringe obtains 52.0.

Soft Margin outperforms Hard Margin Cringe  Second, for Pairwise Cringe training, we find that a soft margin using a sigmoid gate outperforms a hard margin (win rate 52.0 vs. 47.8) in the first iteration of training, and is similarly better in the second iteration as well (win rate 54.7 vs. 49.9). We speculate this is due to the gradient the soft margin provides during training.

Iterations of Cringe training improve performance  Third, we find that iterations of Cringe improve its win rate. The first iteration of Pairwise Cringe has a win rate of 52.0, while the second iteration has a win rate of 54.7. Hard Margin Cringe and Binary Cringe both also benefit from iteration, e.g., an improvement in win rate from 45.9 to 47.7 for Binary Cringe, but both still lag behind Pairwise Cringe.

Iterative DPO improves over DPO  DPO can also benefit from iteration, which is not the prescribed approach in the original paper. We find that performing a second iteration of DPO in the same manner as for our Cringe results (see <ref>), which we call Iterative DPO, improves the win rate from 50.2 to 53.6. However, this is still lower than the performance of iteration 2 of Pairwise Cringe, with 54.7.

Pairwise Cringe performs well on both Human and Simulated Preferences  While we used the human preferences supplied by AlpacaFarm for the experiments reported so far, the original paper also used simulated preferences constructed by LLMs, and reports results for various models with those as well. Results are shown in <ref>, bottom 5 rows, reporting the numbers from <cit.> for PPO, DPO and Best-of-N. We first trained a single iteration of Binary Cringe and Pairwise Cringe in this setting. Binary Cringe obtains a win rate of 45.6, lagging just behind DPO and PPO (both with 46.8) and slightly ahead of Best-of-N (45.0). Pairwise Cringe (first iteration) provides strong performance, with a win rate of 50.6 – superior to all other methods tested. Training Pairwise Cringe for a second iteration then improves this further, to a win rate of 54.5.

Impact of Hyperparameters  The hyperparameters used in the experiments are given in <ref>. Common to both standard Cringe and Pairwise Cringe, we find the parameter α is best kept relatively small, in the 0.005-0.01 range. Like the β parameter in DPO, the parameter τ that scales the loss is important to be at the right magnitude, 1-10 in our experiments. The parameter b, on the other hand, tends to be somewhat less important, but can still give gains from being nonzero. For example, using b=0 in iteration one of the Pairwise Cringe Loss gives a win rate of 50.9, compared to 52.0 for b=-10.

§ RELATED WORK

Classical large language model learning involves only positive training examples, i.e., modeling the language provided <cit.>. However, if the sequence data distribution is closer to the intended usage, then results improve. This motivates pre-training followed by fine-tuning settings where the fine-tuning data also consists of positive examples, but closer to the downstream domain of interest, e.g.,
dialogue or other tasks <cit.>. Positive example sequences alone, however, do not take into account information about what a model should not do, which can be captured, amongst other methods, from human feedback. Human feedback is collected via a user interface (UI), where the type of UI dictates the format of the feedback. For example, clicking a thumbs-up or thumbs-down button given a model response <cit.> provides binary feedback data, i.e., good or bad. More commonly collected for instruction tuning tasks, however, are pairwise preferences indicating that response A is preferred to response B <cit.>. The type of data collected dictates the kind of training algorithm that is then used.

Learning from ranked preferences can be traced back throughout machine learning history. For example, in the Support Vector Machine (SVM) era, <cit.> developed techniques for learning from ordered preference data using a pairwise margin-based approach. A number of works developed related ranking techniques for user preference data, e.g., from web clicks for information retrieval <cit.>. While many SVM approaches use a hard margin (hinge loss), others explored the use of a soft margin, e.g., a sigmoid-type loss, as well <cit.>. More recent work, in the deep learning and large language modeling eras, has focused on viewing preference learning in a reinforcement learning setting. Typically, a reward model is trained from preference data, and then methods such as Proximal Policy Optimization (PPO) <cit.> are applied to fine-tune the language model. Several released models have followed this recipe <cit.>. Since then, several other competing approaches have been proposed, in particular Direct Preference Optimization (DPO) <cit.>, which does not require a separate reward model in the loop. Recent models have also been built using DPO <cit.>. Other models using hard-margin-based pairwise approaches have been proposed, e.g., SLiC <cit.>, CLICK <cit.> and RRHF <cit.> – although there is some evidence that the hard margin approach is inferior to DPO <cit.>. A number of papers have also studied the best way to construct preference pairs, where better methods can result in much improved win rates <cit.>.

Separately, for binary feedback rather than pairwise preferences, several methods have been proposed that aim to train language models using only positive and negative (good and bad) examples. The unlikelihood training method <cit.>, which lowers the probability of generating negative examples, was shown to decrease repetition, copying, and other generation flaws. The Cringe Loss <cit.>, itself a generalization of <cit.>, was shown to outperform unlikelihood and several other approaches. For that reason, we chose to study generalizing the Cringe Loss to the pairwise preference case.

§ CONCLUSION

We introduced the Pairwise Cringe Loss for training language models with pairwise preferences. We showed that this approach outperforms its binary feedback counterpart, the Cringe Loss, and importantly also outperforms competing state-of-the-art preference optimization algorithms, such as PPO and DPO, on the AlpacaFarm benchmark. Our method is efficient and simple to implement, and we expect it can be applied to a wide range of problems. We note that our approach can also be used naturally with a mixture of both binary feedback and pairwise preferences, if available, by simply using both versions of the loss (binary Cringe and Pairwise Cringe) at the same time for the two types of data, making it a versatile choice for end-user applications.
§ APPENDIX

§.§ Algorithm Details

[language=Python, caption=Python code for the Cringe Loss., label=python_code]
import torch
import torch.nn.functional as F
from torch.distributions import Categorical
from torch.nn import CrossEntropyLoss

class CringeLoss(CrossEntropyLoss):
    # NOTE: construct with reduction="none" so the losses stay per-token.
    def __init__(self, alpha=1.0, k=1, **kwargs):
        super().__init__(**kwargs)
        self.alpha = alpha
        self.k = k

    def __call__(self, x, y, classifier_labels, **kwargs):
        # Compute the CrossEntropy loss for the positive labels and mask
        # with classifier labels so as not to train on negative feedback (0).
        ce_loss = super().__call__(x.flatten(0, 1), y.flatten(0, 1), **kwargs)
        cr_loss = self._compute_cringe_loss(x, y, self.k)
        notnull = y.ne(self.ignore_index)  # Remove loss from ignore index.
        ce_loss *= (classifier_labels * notnull).flatten(0, 1)
        cr_loss *= (torch.abs(classifier_labels - 1) * notnull).flatten(0, 1)
        # Compute final loss.
        loss = ce_loss + self.alpha * cr_loss
        return loss, ce_loss, cr_loss

    @staticmethod
    def _compute_cringe_loss(x, y, k):
        # Compute the contrastive loss part for the negative labels.
        # First, get the positives as the top predictions != target.
        preds = torch.topk(x, k=k + 1, dim=-1)
        topk_has_tgt = torch.eq(
            preds.indices,
            y.unsqueeze(-1).repeat(1, 1, k + 1),
        )
        topk_logits = preds.values - topk_has_tgt.long() * 1e10
        # If the positive is not in the first k predictions, mask out
        # the final (k+1)'s logit.
        topk_logits[:, :, -1] -= (
            torch.logical_not(torch.any(topk_has_tgt, dim=-1)).long()
        ) * 1e10
        # Sample from the categorical distribution of the top-k predictions
        # (with the label masked out).
        preds_dist = Categorical(logits=topk_logits)
        idx_sample = preds_dist.sample()
        sample_preds_values = preds.values.gather(
            dim=-1, index=idx_sample.unsqueeze(-1)
        ).squeeze(-1)
        # Concatenate the logits of the preds with the negative label's logits.
        x_negative_target = x.gather(dim=-1, index=y.unsqueeze(-1)).squeeze(-1)
        x_cr = torch.concat(
            [sample_preds_values.unsqueeze(-1), x_negative_target.unsqueeze(-1)], -1
        )
        # Create the y's for x_cr (the correct label is always index 0).
        y_cr = torch.zeros_like(y).to(x_cr.device)
        # Compute the Cringe Loss as the cross-entropy loss between x_cr and y_cr.
        return F.cross_entropy(
            x_cr.flatten(0, 1), y_cr.flatten(0, 1), reduction="none"
        )

[language=Python, caption=Python code for the Pairwise Cringe Loss., label=python_code]
class PairwiseCringeLoss(CrossEntropyLoss):
    # NOTE: construct with reduction="none" so the losses stay per-token.
    def __init__(self, alpha=1.0, k=1, b=0, tau=1.0, **kwargs):
        super().__init__(**kwargs)
        self.alpha = alpha
        self.k = k
        self.b = b
        self.tau = tau
        assert tau > 0

    def _get_logprob(self, x, y, mask):
        # x: bsz * seqlen * vocab
        # y: bsz * seqlen
        # mask: bsz * seqlen
        token_logps = torch.gather(
            x.log_softmax(-1),
            dim=2,
            index=y.unsqueeze(2),
        ).squeeze(2)
        return (token_logps * mask).sum(dim=-1) / (mask.long().sum(dim=-1) + 1e-10)

    def __call__(self, x, y_w, y_l, **kwargs):
        # Compute the CrossEntropy loss for the positive (winning) labels,
        # reshaped back to (bsz, seqlen) so the per-sequence gate below
        # broadcasts correctly over tokens.
        ce_loss = super().__call__(
            x.flatten(0, 1), y_w.flatten(0, 1), **kwargs
        ).view(y_w.shape)
        cr_loss = CringeLoss._compute_cringe_loss(x, y_l, self.k).view(y_l.shape)
        mask_l = y_l.ne(self.ignore_index)
        mask_w = y_w.ne(self.ignore_index)
        cr_loss *= mask_l
        ce_loss *= mask_w
        # Compute the pairwise margin and the sigmoid gate multiplier.
        margin = self._get_logprob(x, y_w, mask_w) - self._get_logprob(x, y_l, mask_l)
        sigmoid_gate_multiplier = torch.sigmoid((-margin + self.b) / self.tau)
        sigmoid_gate_multiplier = sigmoid_gate_multiplier.unsqueeze(1)
        # Compute final loss.
        loss = sigmoid_gate_multiplier * (ce_loss + self.alpha * cr_loss)
        return loss, ce_loss, cr_loss

[language=Python, caption=Python code for the Hard Margin Cringe Loss., label=python_code]
class HardMarginCringeLoss(PairwiseCringeLoss):
    def __call__(self, x, y_w, y_l, **kwargs):
        # Compute the CrossEntropy loss for the positive (winning) labels.
        # Call CrossEntropyLoss directly, skipping PairwiseCringeLoss.__call__.
        ce_loss = super(PairwiseCringeLoss, self).__call__(
            x.flatten(0, 1), y_w.flatten(0, 1), **kwargs
        ).view(y_w.shape)
        cr_loss = CringeLoss._compute_cringe_loss(x, y_l, self.k).view(y_l.shape)
        mask_l = y_l.ne(self.ignore_index)
        mask_w = y_w.ne(self.ignore_index)
        cr_loss *= mask_l
        ce_loss *= mask_w
        # Compute the pairwise margin and a hard (step-function) gate.
        margin = self._get_logprob(x, y_w, mask_w) - self._get_logprob(x, y_l, mask_l)
        margin_based_multiplier = (margin <= self.b).long()
        margin_based_multiplier = margin_based_multiplier.unsqueeze(1)
        # Compute final loss.
        loss = margin_based_multiplier * (ce_loss + self.alpha * cr_loss)
        return loss, ce_loss, cr_loss

§ MODEL HYPERPARAMETERS

All fine-tuned models are trained with a maximum of sixteen 80GB GPUs (NVIDIA A100), optimized with AdamW using β_1 = 0.9, β_2 = 0.95, ϵ = 1e-08. Models are trained for up to 1000 updates with batch size up to 512. The typical fine-tuning time for a standard decoder-only transformer is less than 3 hrs. For all Cringe experiments, we fix topk=5. For Binary Cringe on human preferences, the hyperparameters are α=0.005 for iteration 1, and 0.01 for iteration 2. For Hard Margin Cringe on human preferences, the hyperparameters are α=0.005, b=-10 for iteration 1 and α=0.01, b=10 for iteration 2. For Pairwise Cringe on human preferences, the Cringe hyperparameters are α=0.01, b=-10, τ=10 for iteration 1 and α=0.005, b=-10, τ=1 for iteration 2. For Pairwise Cringe on simulated preferences, the Cringe hyperparameters are α=0.005, b=0, τ=10 for iteration 1 and α=0.01, b=-10, τ=1 for iteration 2. At inference time, we use the same decoding parameters as in AlpacaFarm, sampling with temp=0.7 and setting the maximum number of tokens to 300.
"authors": [
"Jing Xu",
"Andrew Lee",
"Sainbayar Sukhbaatar",
"Jason Weston"
],
"categories": [
"cs.CL",
"cs.AI"
],
"primary_category": "cs.CL",
"published": "20231227185309",
"title": "Some things are more CRINGE than others: Preference Optimization with the Pairwise Cringe Loss"
} |
Reshaping the ISAC Tradeoff Under OFDM Signaling: A Probabilistic Constellation Shaping Approach

Zhen Du, Member, IEEE, Fan Liu, Senior Member, IEEE, Yifeng Xiong, Member, IEEE, Tony Xiao Han, Senior Member, IEEE, Yonina C. Eldar, Fellow, IEEE, and Shi Jin, Fellow, IEEE (Corresponding author: Fan Liu)

An earlier version was partly presented at the IEEE Global Communications Conference (GLOBECOM), Kuala Lumpur, Malaysia, Dec. 2023 <cit.>. Z. Du is with the School of Electronic and Information Engineering, Nanjing University of Information Science and Technology, Nanjing, China. F. Liu is with the School of System Design and Intelligent Manufacturing, Southern University of Science and Technology, Shenzhen, China. Y. Xiong is with the School of Information and Electronic Engineering, Beijing University of Posts and Telecommunications, Beijing, China. T. X. Han is with Huawei Technologies Co., Ltd, Shenzhen, China. Y. C. Eldar is with the Faculty of Mathematics and Computer Science, Weizmann Institute of Science, Rehovot, Israel. S. Jin is with the National Mobile Communications Research Laboratory, Southeast University, Nanjing, 210096, China.

Integrated sensing and communications is regarded as a key enabling technology in sixth generation networks, where a unified waveform, such as an orthogonal frequency division multiplexing (OFDM) signal, is adopted to facilitate both sensing and communications (S&C). However, the random communication data embedded in the OFDM signal results in severe variability in the sidelobes of its ambiguity function (AF), which leads to missed detection of weak targets and false detection of ghost targets, thereby impairing the sensing performance. Therefore, balancing between preserving communication capability (i.e., the randomness) and improving sensing performance remains a challenging task. To cope with this issue, we characterize the random AF of OFDM communication signals, and demonstrate that the AF variance is determined by the fourth moment of the constellation amplitudes. Subsequently, we propose an optimal probabilistic constellation shaping (PCS) approach that maximizes the achievable information rate (AIR) under fourth-moment, power and probability constraints, where the optimal input distribution may be numerically specified through a modified Blahut-Arimoto algorithm.
To reduce the computational overhead, we further propose a heuristic PCS approach that actively controls the value of the fourth moment, without involving the communication metric in the optimization model, even though the AIR is still passively scaled by the variation of the input distribution. Numerical results show that both approaches strike a scalable performance tradeoff between S&C, and verify the superiority of the PCS-enabled constellations over conventional uniform constellations. Notably, the heuristic approach achieves performance very close to the optimal counterpart, at a much lower computational complexity.

Integrated sensing and communications, OFDM, ambiguity function, probabilistic constellation shaping.

§ INTRODUCTION

Wireless sensing is envisioned as a native capability of sixth generation (6G) networks, facilitating many emerging applications that require reliable and high-precision location-aware functionality <cit.>. This has triggered the recent development of integrated sensing and communications (ISAC) technology, which has received official recognition from the International Telecommunication Union (ITU) <cit.> as one of the vertices supporting the 6G hexagon of usage scenarios. Indeed, ISAC enables a synergistic design of previously isolated sensing and communication (S&C) functionalities, which not only improves the utilization efficiency of both hardware and wireless resources, but may also lead to mutual performance gains between S&C <cit.>.

To fully realize the promise of ISAC technologies, a unified signal capable of simultaneously accomplishing both S&C tasks is indispensable. In general, ISAC signaling strategies may be classified into three design philosophies: sensing-centric <cit.> and communication-centric <cit.> designs, which build upon legacy S&C waveforms, respectively, and joint designs <cit.> that aim to conceive ISAC waveforms from the ground up. While the joint design strategy may potentially achieve the Pareto performance boundary, it suffers from high computational complexity and a lack of compatibility with existing S&C infrastructures. To that end, communication-centric designs may be a more viable and low-cost solution for implementing ISAC in 6G wireless networks, where standardized orthogonal frequency division multiplexing (OFDM) waveforms can be straightforwardly adopted for target sensing.

The feasibility of employing OFDM signals for target detection <cit.>, estimation <cit.>, and tracking <cit.> has been verified for radar systems over the last decade. Along this line of research, OFDM radar waveform optimization has been well investigated for enhancing the sensing performance <cit.>. However, most existing works treat OFDM radar signals as deterministic, where random communication symbols are replaced by deterministic weights representing power allocation across subcarriers. Consequently, the sensing performance is optimized at the price of eliminating the signaling randomness, i.e., the loss of data transmission capability. Despite the well-developed OFDM radar theory, applying the OFDM framework to ISAC systems still faces several challenges. To convey useful information, OFDM-based ISAC signals have to accommodate the randomness of the communication data. In particular, each subcarrier is endowed with a communication symbol randomly drawn from a certain codebook, e.g., a phase shift keying (PSK) or quadrature amplitude modulation (QAM) constellation.
Highly random signals ensure high-throughput transmission, but may jeopardize target sensing. As evidenced in <cit.>, random Gaussian signals, known to be capacity-achieving for point-to-point Gaussian channels, lead to a sensing performance loss due to an increased Cramér-Rao bound (CRB) for parameter estimation, implying a deterministic-random tradeoff between S&C in terms of their different preferences on the input distribution.

In order to cope with the above issue, exploiting OFDM communication signals for sensing while preserving their randomness has received recent attention <cit.>. The basic rationale is to extract delay and Doppler parameters with the standard matched filtering approach, which may be split into four steps: 1) compensation of Doppler phase shifts in the time domain; 2) fast Fourier transform (FFT) across each subcarrier; 3) compensation of the known data symbols; 4) inverse FFT across various delays. In this context, delay and Doppler estimation is decoupled, where a target is declared present if a peak is detected in the corresponding delay-Doppler resolution cell when compared against a threshold. A desirable matched filter requires adherence to the constant modulus constraint on the transmit waveform, which ensures a flat spectrum and facilitates narrow mainlobes and low sidelobes. PSK-modulated OFDM signals, with randomly varying phases and a fixed amplitude, typically fulfill this requirement, while QAM fails to meet this criterion due to its randomly varying amplitudes. Consequently, adopting QAM-modulated OFDM communication signals directly for target detection leads to compromised sensing performance.

More relevant to this work, a data division approach has been proposed in <cit.>, since the data symbols are perfectly known at the sensing receiver in a monostatic ISAC system. By sampling at each OFDM symbol and performing block-wise FFT, the communication symbols are mitigated by element-wise division (a sketch of this pipeline is given below). However, this operation may change the statistical characteristics of the noise across subcarriers and OFDM symbols, leading to performance degradation in thresholding and peak detection in the subsequent 2D-FFT processing. As a result, PSK modulation is preferable for sensing compared with QAM, since it keeps the noise distribution unaltered, at the price of a reduced communication rate.
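To make the processing chain concrete, the following is a minimal sketch of the data-division pipeline for a monostatic OFDM sensing receiver; the variable names and the rectangular N×L frame layout are our own assumptions, not the notation of <cit.>.

import numpy as np

def delay_doppler_map(Y, X):
    """Data-division OFDM sensing: Y is the received frequency-domain frame
    (N symbols x L subcarriers), X holds the known transmit symbols."""
    G = Y / X                      # 1) remove the data by element-wise division
    rng = np.fft.ifft(G, axis=1)   # 2) IFFT across subcarriers -> delay bins
    dd = np.fft.fft(rng, axis=0)   # 3) FFT across OFDM symbols -> Doppler bins
    return np.abs(dd) ** 2

# Toy usage: one noiseless target at delay bin 5 and Doppler bin 12, 16-QAM data.
N, L = 64, 64
qam = (np.random.choice([-3, -1, 1, 3], (N, L)) +
       1j * np.random.choice([-3, -1, 1, 3], (N, L))) / np.sqrt(10)
n, l = np.meshgrid(np.arange(N), np.arange(L), indexing="ij")
Y = qam * np.exp(-2j * np.pi * l * 5 / L) * np.exp(2j * np.pi * n * 12 / N)
peak = np.unravel_index(np.argmax(delay_doppler_map(Y, qam)), (N, L))
print(peak)  # -> (12, 5): Doppler bin 12, delay bin 5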
Indeed, the inherent data randomness results in fluctuating sidelobes of the ambiguity function (AF) of OFDM communication signals, for both the matched filtering and data division approaches, leading to missed detection of weak targets and false detection of ghost targets. Thus, state-of-the-art OFDM signaling schemes are generally unable to balance between S&C performance, due to the absence of sufficient degrees of freedom (DoFs) for controlling the signaling randomness. Inspired by the deterministic-random tradeoff in ISAC systems <cit.>, we aim to explore a novel DoF for bridging between S&C performance, namely, the input distribution of the constellation symbols, which differs from classic PSK and QAM with uniformly distributed symbols. In practice, optimizing the input distribution has so far been leveraged merely for communication purposes, i.e., minimizing the gap between the achievable information rate (AIR) and the channel capacity, which is known as probabilistic constellation shaping (PCS) <cit.>. Nevertheless, the benefits of exploiting PCS in ISAC systems have not yet been clarified and validated.

To be more specific, in this paper our aim is to preserve the communication capability (i.e., the randomness) of OFDM signaling modulated by non-uniformly distributed QAM symbols, while improving its sensing ability relative to a classic QAM counterpart. To this end, we propose a tailored PCS approach to generate random symbols with optimized input probabilities in accordance with S&C requirements. The core idea is to establish the exact relationship between the input distribution of the random symbols and the S&C metrics, thereby scaling the S&C behaviors with the PCS approach. For clarity, our contributions are summarized as follows:
* We commence by studying the expectation and variance of the AF of OFDM communication signals, where i.i.d. input symbols are drawn from a finite complex-valued alphabet known as a constellation, with an arbitrary input distribution. Previous works, such as <cit.>, investigated these properties only for PSK-modulated OFDM signals. We further extend the statistical analysis to OFDM sequences. Our analysis reveals that the variance of the AF is closely related to the fourth moment of the random constellation amplitudes.
* We propose to maximize the AIR constrained by the fourth moment of the constellation symbols by leveraging the PCS method. The resulting model is highly nonlinear. To tackle this challenge, we introduce a modified Blahut-Arimoto (MBA) algorithm that is proved to attain the globally optimal distribution. Simulations depict its ability to strike a controllable tradeoff between S&C performance.
* To reduce the computational overhead, we further present a heuristic PCS approach that actively controls the value of the fourth moment, without explicitly involving the AIR constraint. While the model omits the AIR, we show that it achieves a near-optimal S&C performance tradeoff at a significantly reduced computational complexity. This is attributed to the adjustable signaling randomness arising from optimizing the fourth moment, thereby balancing between S&C behaviors.
Numerical results verify that both PCS approaches attain a flexible tradeoff between S&C performance in ISAC systems, in contrast to conventional uniformly distributed PSK/QAM constellations, which only achieve the sensing-optimal or communication-optimal performance, i.e., two corner points on the S&C tradeoff curve. The tailored algorithms can be implemented offline, indicating their potential for efficient deployment in practical scenarios.

The remainder of this article is organized as follows. Section <ref> formulates the AF of OFDM signaling and its statistical characteristics. Section <ref> analyzes the communication metric, i.e., the AIR. Two PCS approaches are proposed in Section <ref>. Simulations in Section <ref> demonstrate the advantage of the approaches in striking a flexible tradeoff between S&C performance. Finally, the article is concluded in Section <ref>.

Notation: Throughout the paper, 𝐀, 𝐚, and a denote a matrix, a vector, and a scalar, respectively. We use ℜ(·), 𝔼(·), |·|, ‖·‖, (·)^T, (·)^*, (·)^H, (·)^-1, 1_N, and 𝐈_N to denote the real part of a complex number, expectation, the modulus of a complex number, the Frobenius norm, transpose, conjugate, Hermitian, inverse, the all-ones vector of size N × 1, and the identity matrix of size N × N, respectively.

§ SENSING PERFORMANCE EVALUATION

In this section, we commence with the ISAC signal model for a single OFDM symbol. Subsequently, the statistical characteristics of its AF, i.e., the expectation and variance, are derived.
Then the more general case of OFDM sequences is analyzed. This lays the foundation for the two PCS approaches proposed in Section <ref>.

§.§ Characterization of AF

We consider a monostatic ISAC system employing OFDM signals for S&C tasks simultaneously. An OFDM symbol consisting of L subcarriers, occupying a bandwidth of B Hz and a symbol duration of T_p seconds, is given by

s(t) = ∑^L-1_l=0 A_l exp(jψ_l) exp(j2π lΔ f t) rect(t/T_p),

where exp(j2π lΔ f t) ≜ ϕ_l(t) denotes the lth subcarrier, Δ f = B/L = 1/T_p represents the subcarrier interval in the frequency domain, A_l and ψ_l denote the amplitude and phase of the lth i.i.d. input symbol drawn from a finite complex-valued alphabet, such as the QAM-based constellation in Fig. <ref>, and rect(t) represents the rectangular window, equal to 1 for 0 ≤ t ≤ 1 and zero otherwise. The randomness of OFDM communication signals lies in the discrete random variables A_l exp(jψ_l) mapped from the bit streams to the constellation symbols with a certain input distribution. To illustrate this in Fig. <ref>, we denote the complex random variable in an arbitrary constellation by x = A_x exp(jψ_x), and assume that there are Q discrete constellation points in the given input alphabet x ∈ 𝒳 = {x_1, x_2, ⋯, x_Q}, where the qth point x_q is transmitted with prior probability p_q, satisfying ∑_x p(x) = ∑^Q_q=1 p_q = 1. As a special case, conventional PSK and QAM constellations are uniformly distributed, i.e., p(x) = 1/Q, ∀ x. The expectation 𝔼_Y{y} = ∑_x f(x)p(x) refers to the summation of Q values weighted by their discrete probabilities in the constellation, where y = f(x) represents a function of the random constellation points x. For convenience, we omit the subscript of 𝔼_Y{y} in the following. With this definition, the normalized transmit power can be expressed as 𝔼{A^2_l} = 1.

Suppose that a target to be detected has delay τ_0 and Doppler shift ν_0. Then the received signal is expressed as

y_r(t) = α s(t-τ_0) exp(j2πν_0 t) + n_r(t),

where α and n_r(t) represent the radar cross section (RCS) and the input jamming, respectively. The matched filtering at the sensing receiver is thus given as

Δ(τ,ν) = ∫^∞_-∞ y_r(t) s^*(t-τ) exp(-j2πν t) dt = α Λ(τ-τ_0, ν-ν_0) + n'_r(t),

where Λ(τ,ν) follows Woodward's definition of the AF <cit.>, interpreted as the matched filter response, i.e., a two-dimensional correlation between the transmit signal s(t) and its time-delayed/frequency-shifted counterpart:

Λ(τ,ν) = ∫^∞_-∞ s(t) s^*(t-τ) exp(-j2πν t) dt.

The AF of a single OFDM symbol is

Λ(τ,ν) = Λ_S(τ,ν) + Λ_C(τ,ν),

where

Λ_S(τ,ν) = T_diff sinc(-ν T_diff) exp{-j2πν T_avg} ∑^L-1_l=0 A^2_l exp{j2π lΔ f τ},

Λ_C(τ,ν) = T_diff ∑^L-1_l_1=0 ∑^L-1_l_2=0, l_2≠ l_1 A_l_1 A_l_2 exp(j(ψ_l_1-ψ_l_2)) exp{j2π[((l_1-l_2)Δ f - ν) T_avg + l_2Δ f τ]} sinc{[(l_1-l_2)Δ f - ν] T_diff}.

Here T_avg and T_diff are defined as T_avg = (T_max+T_min)/2 and T_diff = T_max - T_min, where T_min = max(0,τ) and T_max = min(T_p, T_p+τ). In addition, sinc(x) = sin(π x)/(π x).

Proof: See Appendix <ref>. ▪

Evidently, Λ_S(τ,ν) represents the superposition of L self-AF components, while Λ_C(τ,ν) contains the L(L-1) cross-AF components. A numerical sketch for evaluating this AF is given below.
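As an illustration, the following minimal sketch numerically evaluates |Λ(τ,ν)|^2 on a delay-Doppler grid for one random QAM-modulated OFDM symbol, by directly discretizing the correlation integral; the oversampling factor, the grid, and the Doppler normalization (ν expressed in units of 1/T_p) are our own choices for illustration.

import numpy as np

def ofdm_af(symbols, os=8, n_dopp=65):
    """|AF|^2 of one OFDM symbol via the discretized correlation integral."""
    L = len(symbols)
    n = np.arange(L * os)
    # Baseband OFDM symbol, oversampled `os` times (T_p normalized to 1).
    s = sum(symbols[l] * np.exp(2j * np.pi * l * n / (L * os)) for l in range(L))
    nus = np.linspace(-2.0, 2.0, n_dopp)   # Doppler grid, in units of 1/T_p
    af = np.zeros((n_dopp, 2 * L * os - 1))
    for i, nu in enumerate(nus):
        sd = s * np.exp(-2j * np.pi * nu * n / (L * os))  # Doppler-shifted copy
        af[i] = np.abs(np.correlate(sd, s, mode="full")) ** 2
    return af / af.max()   # normalized, as done for the plots in Section V

# One random (uniform) 16-QAM draw on L = 16 subcarriers.
pts = (np.random.choice([-3, -1, 1, 3], 16) +
       1j * np.random.choice([-3, -1, 1, 3], 16)) / np.sqrt(10)
af = ofdm_af(pts)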
Next, we aim to derive the expectations and variances of Λ_S(τ,ν) and Λ_C(τ,ν). To proceed, we first present the following assumption.

In this article, we consider an “arbitrary” constellation shape following the rule that all constellation points are symmetrically distributed with respect to the center, and points on each concentric circle have the same probability. For example, a graphical illustration of the 16-QAM-based constellation format is displayed in Fig. <ref>, where p_1=⋯=p_4, p_5=⋯=p_12, p_13=⋯=p_16, and ∑^Q=16_q=1 p_q = 1. This rule evidently guarantees that 𝔼{A_l exp(jψ_l)} = 0. For notational convenience, we use Λ, Λ_S, and Λ_C in the following. The expectation and variance of Λ may then be represented as 𝔼[Λ] = 𝔼[Λ_S] + 𝔼[Λ_C], and

σ^2_Λ = 𝔼[|Λ|^2] - |𝔼[Λ]|^2 = (𝔼[|Λ_S|^2] - |𝔼[Λ_S]|^2) + (𝔼{|Λ_C|^2} - |𝔼{Λ_C}|^2) + 2ℜ{𝔼(Λ_SΛ^*_C) - 𝔼(Λ_S)𝔼^*(Λ_C)},

where the first and second bracketed terms are denoted by σ^2_S and σ^2_C, respectively.

We have 𝔼{Λ_C} = 0 and 𝔼{Λ_SΛ^*_C} = 0, thereby leading to 𝔼[Λ] = 𝔼[Λ_S] and σ^2_Λ = σ^2_S + σ^2_C.

Proof: See Appendix <ref>. ▪

§.§ Statistical Characteristics of Λ_S(τ,ν)

Since 𝔼{A^2_l} = 1 holds for arbitrary constellations satisfying the power constraint, we have

𝔼{Λ_S} = T_diff sinc(-ν T_diff) exp{-j2πν T_avg} ∑^L-1_l=0 exp{j2π lΔ f τ}.

Evidently, this expectation holds for an arbitrary input distribution of the constellation symbols, which implies that it plays no role in the subsequent PCS design. Accordingly, we concentrate on the variance of Λ_S, which is

σ^2_S = 𝔼[|Λ_S|^2] - |𝔼[Λ_S]|^2 = T^2_diff sinc^2(-ν T_diff) ∑^L-1_l_1=0 ∑^L-1_l_2=0 (𝔼{A^2_l_1 A^2_l_2} - 1) exp{j2π (l_1-l_2)Δ f τ}.

As a special case, for PSK with constant modulus, it is evident that σ^2_S = 0 since 𝔼{A^2_l_1 A^2_l_2} = 1. In contrast, the variance for uniformly/non-uniformly distributed QAM is not zero, since

𝔼{A^2_l_1 A^2_l_2} = 𝔼{A^4_l_1} for l_1 = l_2, and 𝔼{A^2_l_1}·𝔼{A^2_l_2} = 1 for l_1 ≠ l_2.

Thanks to (<ref>), we may further simplify σ^2_S as

σ^2_S = T^2_diff sinc^2(-ν T_diff) ∑^L-1_l=0 (𝔼{A^4_l} - 1).

This result implies that the variances of the Doppler sidelobes (ν ≠ 0) are very small. Therefore, the major impact of the randomness is closely related to the zero-Doppler slice, known as the autocorrelation function, leading to

σ^2_S = L T^2_diff (𝔼{A^4_x} - 1), when ν = 0.

Both (<ref>) and (<ref>) indicate that the variance of the AF is mainly determined by the fourth moment of the constellation amplitudes. More precisely, the variance may be reduced by minimizing the fourth moment 𝔼{A^4_x}. Note that the variance is always non-negative by definition, which demonstrates that the fourth moment of the constellation amplitudes is always greater than or equal to the square of the transmit power. Hence, we have the following corollary.

When ν = 0, σ^2_S = 0 holds only when all constellation symbols have constant modulus.

Proof: Owing to ∑^Q_q=1 p_q A^2_q = 1 and ∑^Q_q=1 p_q = 1, we have

𝔼{A^4_x} = 𝔼{A^4_x} ∑^Q_q=1 p_q = (∑^Q_q=1 p_q A^4_q)(∑^Q_q=1 p_q) ≥ (∑^Q_q=1 √(p_q) A^2_q · √(p_q))^2 = (∑^Q_q=1 p_q A^2_q)^2 = 1,

where the inequality follows from the Cauchy-Schwarz inequality. Equality holds when √(p_q) A^2_q / √(p_q) = 1, i.e., A^2_q = 1 for all q, leading to unit modulus of all constellation points, i.e., a “pseudo” PSK modulation, since a real PSK constellation has unit modulus and equally spaced phases simultaneously. ▪

A quick numerical check of this fourth-moment quantity is given below.
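As a sanity check on the fourth moment that drives σ^2_S, the short sketch below computes 𝔼{A^4_x} for a unit-power uniform 16-QAM constellation (yielding 1.32) and notes that any PSK constellation yields exactly 1; the script is our own illustration.

import numpy as np

# Unit-power uniform 16-QAM: levels {-3,-1,1,3} on each axis.
pts = np.array([a + 1j * b for a in (-3, -1, 1, 3) for b in (-3, -1, 1, 3)])
pts = pts / np.sqrt(np.mean(np.abs(pts) ** 2))   # normalize E{A^2} to 1

amp2 = np.abs(pts) ** 2
print(np.mean(amp2))        # 1.0  (power constraint C_2)
print(np.mean(amp2 ** 2))   # 1.32 (fourth moment E{A^4} > 1)

# Any PSK constellation has |x| = 1 for all points, hence E{A^4} = 1 exactly,
# and the zero-Doppler AF variance sigma_S^2 = L * T_diff^2 * (E{A^4} - 1) vanishes.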
§.§ Statistical Characteristics of Λ_C

To derive the variance of Λ_C, we only need to compute 𝔼{|Λ_C|^2}, which is formulated in (<ref>) at the top of the next page. For further simplifications, one may rely on

𝔼{A_l_1 A_l_2 A_l'_1 A_l'_2 exp(j(ψ_l_1-ψ_l_2-ψ_l'_1+ψ_l'_2))} = 𝔼{A^2_l_1 A^2_l'_1} for l_1=l_2, l'_1=l'_2; 𝔼{A^2_l_1 A^2_l_2} for l_1=l'_1, l_2=l'_2; and 0 otherwise.

Note however that Λ_C is defined for l_2 ≠ l_1 and l'_2 ≠ l'_1 in (<ref>). As a consequence, recalling (<ref>) suggests that all the non-zero components of 𝔼{|Λ_C|^2} are contributed by the constraints l_1=l'_1 and l_2=l'_2, yielding

σ^2_C = 𝔼{|Λ_C|^2} - |𝔼[Λ_C]|^2 = T^2_diff ∑^L-1_l_1=0 ∑^L-1_l_2=0, l_2≠ l_1 𝔼{A^2_l_1 A^2_l_2} sinc^2{[(l_1-l_2)Δ f - ν] T_diff}.

In addition, the condition l_2 ≠ l_1 results in independent random variables A^2_l_1 and A^2_l_2. Then exploiting (<ref>) yields

σ^2_C = T^2_diff ∑^L-1_l_1=0 ∑^L-1_l_2=0, l_2≠ l_1 sinc^2{[(l_1-l_2)Δ f - ν] T_diff}.

Evidently, when ν = 0, σ^2_C can be approximately omitted owing to sinc((l_1-l_2)Δ f T_diff) ≈ 0 for l_1 ≠ l_2. In contrast, σ^2_C may be relatively large when ν ≠ 0. Nevertheless, regardless of its value, big or small, σ^2_C is fixed for an arbitrary input distribution of the constellation symbols, having no effect on the subsequent PCS design. According to (<ref>) and (<ref>), we denote the variances of PSK and QAM as σ^2_S,PSK, σ^2_S,QAM, σ^2_C,PSK and σ^2_C,QAM. Then it is evident that

σ^2_S,QAM > σ^2_S,PSK = 0, σ^2_C,PSK = σ^2_C,QAM.

This again reflects the superiority of PSK modulation over its QAM counterpart in terms of radar sensing, due to the fact that a smaller AF variance effectively leads to improved sensing performance, as the fluctuations in the sidelobes incurred by random data are suppressed. Nevertheless, since QAM generally achieves better communication performance than PSK, one may trade off between the AIR and the sensing performance of QAM by shaping σ^2_S with respect to the input distribution. This paves the way for a PCS approach to balance between S&C.

§.§ AF of OFDM Sequences

Next, we extend our previous analysis to OFDM sequences with multiple symbols. In contrast to the AF of a conventional radar pulse train, which is relatively trivial since the same waveform is applied for all pulses, the analogous extension for OFDM communication signals becomes more challenging owing to the sequence randomness across multiple symbols. With the accumulation of N symbols in the time domain, a train of OFDM sequences is expressed as

s̃(t) = ∑^N-1_n=0 ∑^L-1_l=0 A_n,l exp(jψ_n,l) exp(j2π lΔ f(t-nT_p)) rect((t-nT_p)/T_p),

where exp(j2π lΔ f(t-nT_p)) ≜ ϕ_l(t-nT_p). According to the definition in <cit.>, the AF of s̃(t) is formulated as

Λ̃(τ,ν) = ∫^∞_-∞ s̃(t) s̃^*(t-τ) exp(-j2πν t) dt = ∑^N-1_n_1=0 ∑^N-1_n_2=0 ∑^L-1_l_1=0 ∑^L-1_l_2=0 A_n_1,l_1 A_n_2,l_2 exp(j(ψ_n_1,l_1-ψ_n_2,l_2)) ∫^∞_-∞ ϕ_l_1(t') ϕ^*_l_2(t'+n_1T_p-n_2T_p-τ) exp(-j2πν(t'+n_1T_p)) dt',

where t' = t - n_1T_p. For notational convenience, we denote

T̃_min = max(0, τ+(n_2-n_1)T_p), T̃_max = min(T_p, T_p+τ+(n_2-n_1)T_p), T̃_avg = (T̃_max+T̃_min)/2, T̃_diff = T̃_max - T̃_min.

Then the integral in (<ref>) can be computed as in (<ref>) at the top of this page. In the same manner, we consider the self and cross components of the AF, expressed as Λ̃ = Λ̃_S + Λ̃_C, where Λ̃_S is contributed by the n_1=n_2 and l_1=l_2 terms, i.e.,

Λ̃_S = T_diff sinc(-ν T_diff) exp{-j2πν T_avg} ∑^N-1_n=0 ∑^L-1_l=0 A^2_n,l exp{j2π(lΔ f τ - nν T_p)},

and Λ̃_C is contributed by all remaining components, which is formulated in (<ref>). Note that in (<ref>) T_diff and T_avg are adopted rather than T̃_diff and T̃_avg, due to n_1=n_2. Similar to the case of Λ_S and Λ_C, the statistical characteristics of Λ̃_S and Λ̃_C are summarized below.

If the same constellation modulation is adopted for all OFDM symbols, the variance of Λ̃_S, denoted as σ̃^2_S, is the N-fold accumulation of that of a single symbol, i.e., σ̃^2_S = N σ^2_S.

Proof: See Appendix <ref>. ▪

The variance of Λ̃_C, denoted as σ̃^2_C, does not affect the constellation shaping, since it is fixed for an arbitrary input distribution of the constellation symbols, i.e., σ̃^2_C = constant, ∀ p_q, q ∈ {1,2,⋯,Q}.

Proof: See Appendix <ref>.
▪

Since the value of σ̃^2_C (or equivalently, σ^2_C) is constant for an arbitrary input distribution of the constellation symbols, we are only able to control σ̃^2_S (or equivalently, σ^2_S) by optimizing the input distribution through PCS. In this manner, the randomness of OFDM signaling can be scaled, thereby affecting the S&C performance simultaneously.

§ COMMUNICATION PERFORMANCE EVALUATION

We adopt the AIR in an additive white Gaussian noise (AWGN) channel as the communication performance indicator. The received communication signal is expressed in a simple linear form as y_c(t) = s(t) + n_c(t). After discretization, the signal in the time domain is recast in matrix-vector form as 𝐲_c = 𝐃^H𝐱 + 𝐧_c, where 𝐲_c = [y_c(0),⋯,y_c(L-1)]^T, 𝐱 = [A_0 e^jψ_0,⋯,A_L-1 e^jψ_L-1]^T, 𝐧_c = [n_c(0),⋯,n_c(L-1)]^T, and 𝐃 is the discrete Fourier transform (DFT) matrix. By transforming 𝐲_c into the frequency domain, we have 𝐲 = 𝐱 + 𝐧, where 𝐲 = 𝐃𝐲_c and 𝐧 = 𝐃𝐧_c. Here, 𝐧 ∼ 𝒞𝒩(0, σ^2𝐈_L).

In point-to-point channels, the AIR of L parallel subchannels is characterized by the input-output mutual information (MI), which is <cit.>

I(𝐱;𝐲) = ∑^L-1_l=0 I(x_l;y_l),

where x_l and y_l represent the lth subchannel elements of 𝐱 and 𝐲, respectively, and I(x_l;y_l) denotes the MI in the lth subchannel. It is not surprising that the I(x_l;y_l) are equal for all subchannels, since the same constellation codebook is adopted on all subcarriers. This suggests that we only need to analyze the MI of a single subchannel, as the overall AIR is the sum of the rates over the L subchannels, namely, I(𝐱;𝐲) = L I(x;y), where the subscript l is omitted for notational convenience. The MI I(x;y) is

I(x;y) = 𝔼_X,Y{log p(y|x)/p(y)} = H_Y(y) - H_Y|X(y|x),

where H_Y|X(y|x) = log(π eσ^2), and

H_Y(y) = -∑_y [∑_x p(y|x)p(x)] log ∑_x' p(y|x')p(x')

denote the conditional entropy of y given x, and the entropy of y, respectively. Although the constellation points are discrete, the received signal y is a continuous random variable. For technical convenience, y is discretized for the follow-up processing; as a result, we use a discrete sum rather than a continuous integral in calculating H_Y(y). Since p(y) = ∑_x p(y|x)p(x) is a sum of Gaussian probability density functions (PDFs) p(y|x) weighted by the input distribution p(x), a closed form of H_Y(y) is generally unobtainable because y is Gaussian mixture (GM) distributed <cit.>. To observe the AIR performance, we therefore approximate H_Y(y) using Monte Carlo numerical integration:

H_Y(y) = -𝔼_Y[log ∑_x p(y|x)p(x)] ≈ -1/N_MC ∑^N_MC_m=1 log ∑_x p(y_m|x)p(x),

where N_MC represents the number of Monte Carlo trials, y_m denotes the mth observation in the mth trial, and its conditional PDF p(y_m|x) has the standard Gaussian form for each x in the given constellation. In this way, the entropy H_Y(y) can be accurately approximated when N_MC is sufficiently large, thereby obtaining an accurate value of the AIR. A minimal sketch of this Monte Carlo AIR evaluation is given below.
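For illustration, the following sketch estimates I(x;y) for a discrete constellation over an AWGN channel via the Monte Carlo approach above; the 16-QAM alphabet, the trial count, and the use of bits (log base 2) are our own example choices.

import numpy as np

def mutual_information(points, probs, sigma2, n_mc=100000, rng=np.random):
    """Monte Carlo estimate of I(x;y) in bits for y = x + n, n ~ CN(0, sigma2)."""
    # Draw constellation symbols according to the input distribution p(x).
    idx = rng.choice(len(points), size=n_mc, p=probs)
    noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(n_mc) +
                                   1j * rng.standard_normal(n_mc))
    y = points[idx] + noise
    # Gaussian likelihoods p(y_m | x) for every constellation point x.
    lik = np.exp(-np.abs(y[:, None] - points[None, :]) ** 2 / sigma2) / (np.pi * sigma2)
    h_y = -np.mean(np.log2(lik @ probs))          # H(y), Monte Carlo estimate
    h_y_given_x = np.log2(np.pi * np.e * sigma2)  # Gaussian conditional entropy
    return h_y - h_y_given_x

# Uniform unit-power 16-QAM at 10 dB SNR (sigma2 = 0.1).
pts = np.array([a + 1j * b for a in (-3, -1, 1, 3) for b in (-3, -1, 1, 3)])
pts = pts / np.sqrt(np.mean(np.abs(pts) ** 2))
print(mutual_information(pts, np.full(16, 1 / 16), sigma2=0.1))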
§ PCS APPROACHES FOR ISAC

§.§ Optimal PCS Optimization Modeling

On the basis of the S&C metrics, i.e., the fourth moment of the constellation and the AIR, both of which are determined by the input distribution, we present an optimal PCS model for ISAC that maximizes the MI under the sensing, power and probability constraints:

(𝒫1) max_𝐩 I(𝐩) ≜ I(x;y)
s.t. C_1: ∑_x p(x)A^4_x = c_0,
     C_2: ∑_x p(x)A^2_x = 1,
     C_3: ∑_x p(x) = 1,
     C_4: p(x) ≥ 0, ∀ x ∈ 𝒳,

where 𝐩 = [p_1, p_2, ..., p_Q]^T represents the discrete input probability distribution vector of a given constellation. Here, c_0 is a desired value of the fourth moment of the constellation amplitudes, which can be scaled by optimizing 𝐩 in order to control the variation of the AF sidelobes for sensing. By recalling (<ref>), the MI in 𝒫1 is recast as

𝔼_X,Y{log p(y|x)/p(y)} = ∑_x ∑_y p(x)p(y|x) log p(y|x)/p(y) = max_𝐪 ∑_x ∑_y p(x)p(y|x) log q(x|y)/p(x) ≜ max_𝐪 ℱ(𝐩,𝐪),

where 𝐪 denotes a transition matrix from 𝒴 to 𝒳 <cit.>, with 𝒳 and 𝒴 representing the input and output alphabets of x and y, respectively. We further reformulate 𝒫1 as

(𝒫2) max_𝐩 max_𝐪 ℱ(𝐩,𝐪), s.t. C_1 ∼ C_4.

The model 𝒫2 is a convex optimization problem, since it maximizes an objective function jointly concave in 𝐩 and 𝐪 under the linear constraints C_1-C_4. It is worth highlighting that the classic MI maximization problem can be readily solved by the standard Blahut-Arimoto algorithm <cit.> if the sensing constraint is not imposed. Nevertheless, by incorporating C_1 and C_2, 𝒫2 becomes more challenging, and the conventional methods are not straightforwardly applicable. To that end, we elaborate a modified Blahut-Arimoto (MBA) algorithm tailored for 𝒫2, relying on alternating optimization between 𝐩 and 𝐪. As evidenced in <cit.>, given the input distribution 𝐩, the optimal 𝐪 that maximizes the MI is

q(x|y) = p(x)p(y|x) / ∑_x' p(x')p(y|x'),

which can be proved by applying the divergence inequality. In order to solve for the 𝐩 that maximizes the objective under a given 𝐪 (i.e., maximizing the MI under constraints C_1 ∼ C_4 to obtain the AIR, namely, the constrained capacity), we resort to the method of Lagrange multipliers, temporarily ignoring the constraint C_4, and obtain

ℒ = ∑_x ∑_y p(x)p(y|x) log q(x|y)/p(x) - λ_1 (∑_x p(x)A^4_x - c_0) - λ_2 (∑_x p(x)A^2_x - 1) - λ_3 (∑_x p(x) - 1).

For convenience, the natural logarithm is assumed. Differentiating with respect to p(x) yields

∂ℒ/∂ p(x) = ∑_y p(y|x) log q(x|y) - log p(x) - λ_1 A^4_x - λ_2 A^2_x - λ_3 - 1.

Setting ∂ℒ/∂ p(x) = 0 gives

p(x) = exp{∑_y p(y|x) log q(x|y) - λ_1 A^4_x - λ_2 A^2_x} exp{-λ_3 - 1}.

Recalling C_3, we further simplify p(x) as

p(x) = exp{∑_y p(y|x) log q(x|y) - λ_1 A^4_x - λ_2 A^2_x} / ∑_x exp{∑_y p(y|x) log q(x|y) - λ_1 A^4_x - λ_2 A^2_x}.

It is immediately observed that p(x) satisfies C_4, since exponentials are positive. Given λ_1 and λ_2, we may now define the kth iterative probabilities q^(k)(x|y) and p^(k+1)(x) for k ≥ 0 as

q^(k)(x|y) = p^(k)(x)p(y|x) / ∑_x' p^(k)(x')p(y|x'),
p^(k+1)(x) = e^{∑_y p(y|x) log q^(k)(x|y) - λ_1 A^4_x - λ_2 A^2_x} / ∑_x e^{∑_y p(y|x) log q^(k)(x|y) - λ_1 A^4_x - λ_2 A^2_x}.

Then we have

ℱ(𝐩^(k+1),𝐪^(k+1)) = max_𝐩 ℱ(𝐩,𝐪^(k+1)) ≥ ℱ(𝐩^(k),𝐪^(k+1)) = max_𝐪 ℱ(𝐩^(k),𝐪) ≥ ℱ(𝐩^(k),𝐪^(k)),

which implies that the alternating optimization converges and admits a globally optimal solution, since the concave objective function of 𝒫2 is bounded from above by the entropy of the constellation. We refer readers to <cit.> for a detailed convergence proof of the BA algorithm. A sketch of these two alternating updates is given below.
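The following is a minimal sketch of the two alternating MBA updates for fixed multipliers (λ_1, λ_2), on a quantized grid of output points y; the grid construction and the variable names are our own assumptions.

import numpy as np

def mba_updates(p_x, p_y_given_x, A2, lam1, lam2, n_iter=200):
    # p_x: (Q,) current input distribution over the constellation points.
    # p_y_given_x: (M, Q); column x is a normalized discrete distribution
    # over a quantized grid of M output points y (our own discretization).
    # A2: (Q,) squared amplitudes A_x^2 of the constellation points.
    for _ in range(n_iter):
        # q-update: q(x|y) = p(x) p(y|x) / sum_x' p(x') p(y|x').
        joint = p_y_given_x * p_x[None, :]                    # (M, Q)
        q = joint / (joint.sum(axis=1, keepdims=True) + 1e-300)
        # p-update: exponentiate the expected log-posterior, tilted by the
        # Lagrange terms lam1 * A^4 + lam2 * A^2, then normalize (C_3).
        expo = (p_y_given_x * np.log(q + 1e-300)).sum(axis=0)
        expo = expo - lam1 * A2 ** 2 - lam2 * A2
        p_x = np.exp(expo - expo.max())
        p_x = p_x / p_x.sum()
    return p_x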
Substituting (<ref>) into C_1 and C_2 yields

f_1(λ_1,λ_2) = ∑_x (A^2_x - 1) g(λ_1,λ_2) = 0,
f_2(λ_1,λ_2) = ∑_x (A^4_x - c_0) g(λ_1,λ_2) = 0,

where g(λ_1,λ_2) = exp{∑_y p(y|x) log q^(k)(x|y) - λ_1 A^4_x - λ_2 A^2_x}. Note, however, that the discrete sum in (<ref>) is defined with respect to the GM random variable Y. Therefore, it should also be computed with Monte Carlo integration, expressed as

∑_y p(y|x) log q^(k)(x|y) = ∑_y [p(y|x)/p(y)] log q^(k)(x|y) p(y) = ∑_y [p(y|x)/∑_x p(y|x)p(x)] log q^(k)(x|y) p(y) ≈ 1/N_MC ∑^N_MC_m=1 [p(y_m|x)/∑_x p(y_m|x)p^(k)(x)] log q^(k)(x|y_m),

where p(y_m|x) = exp(-|y_m-x|^2/σ^2)/(πσ^2) for x ∈ 𝒳. To proceed, the next step involves determining the Lagrange multipliers λ_1 and λ_2 in (<ref>). While a brute-force 2D grid search may be effective in achieving this goal, it leads to an unbearable computational burden. Toward that aim, we develop a tailored Newton's method for this problem. It is important to highlight the sensitivity of Newton's method to the choice of initial values. In order to acquire a reliable initial value that balances accuracy and computational complexity, we employ a relatively coarse 2D grid search for the initialization of Newton's method, i.e.,

<λ^(0)_1, λ^(0)_2> = argmin_<λ_1,λ_2> ‖[f_1(λ_1,λ_2); f_2(λ_1,λ_2)]‖.

The search is completed when the objective converges to an extremely small positive value, thus attaining a relatively accurate initialization. The Lagrange multiplier vector λ = [λ_1,λ_2]^T is then updated with Newton's method, where the ℓth iteration of λ is updated as <cit.>

λ^(ℓ+1) = λ^(ℓ) - [𝐅'(λ^(ℓ))]^-1 𝐅(λ^(ℓ)).

In (<ref>), 𝐅(λ^(ℓ)) = [f_1(λ^(ℓ)_1,λ^(ℓ)_2), f_2(λ^(ℓ)_1,λ^(ℓ)_2)]^T, and 𝐅'(λ^(ℓ)) is the corresponding Jacobian matrix. The two iterative processes above are terminated when the difference between two adjacent iterations falls below the convergence tolerance ε, e.g., ε = 10^-5. For clarity, the main steps of the MBA algorithm for solving the optimal PCS problem are summarized in Algorithm 1.

§.§ Heuristic PCS: A Low Complexity Approach

One may observe that solving 𝒫2 requires quantizing the continuous random variable y, in an effort to numerically evaluate (<ref>) in every iteration by Monte Carlo integration, which incurs significant computational overhead. To reduce the complexity of the PCS design, we propose a heuristic approach without explicitly involving y, which is

(𝒫3) min_𝐩 |∑_x p(x)A^4_x - c_0|^2, s.t. C_2, C_3, C_4.

Again, c_0 is a preset value aiming to control the fourth moment of the constellation, thereby adjusting the variance of the AF. We use the term “heuristic” on account of the absence of the AIR in 𝒫3, even though the communication performance is still passively scaled, since controlling the fourth moment varies the prior probabilities of the constellation. Intuitively, since the minimal fourth moment 𝔼{A^4_x} = 1 is attainable for PSK, by varying c_0 from 1 to the value of a uniform QAM constellation, one may also expect the communication performance to be scaled from the PSK AIR to that of QAM. Indeed, such a tradeoff will be observed in our simulation results in Sec. <ref>. Note that 𝒫3 is a convex quadratic optimization problem with linear constraints, which can be readily solved by various well-established algorithms. A stricter formulation would directly enforce the equality constraint C_1 instead of adopting the least-squares objective in 𝒫3, in which case the constraints C_1, C_2 and C_3 constitute a linear equation system with respect to the non-negative vector 𝐩. As demonstrated earlier, the probabilities of constellation points within each concentric circle are equal. Therefore, the computational dimensionality of this linear equation system may be significantly reduced.
In this spirit, C_1, C_2 and C_3 can be reconfigured into a compact form, i.e., 𝐀_W 𝐩̅ = [c_0, 1, 1]^T, ∀p̅_w ≥ 0, where 𝐀_W = [A^4_1, A^4_2, ⋯, A^4_W; A^2_1, A^2_2, ⋯, A^2_W; 1, 1, ⋯, 1] and 𝐩̅ = [p̅_1, p̅_2, ⋯, p̅_W]^T. Here, W denotes the number of concentric circles, and p̅_w represents the identical probability in the wth circle. Given a 16-QAM constellation, the presence of three concentric circles (W=3) simplifies the probability calculation, as it only necessitates three probability values. Furthermore, given that the matrix 𝐀_W is now full-rank, (<ref>) admits a unique solution while satisfying constraint C_4. Notably, this solution is independent of the AIR, implying that solving both 𝒫2 and 𝒫3 yields identical results for the 16-QAM constellation. In contrast, for a 64-QAM (W=9) constellation, (<ref>) becomes under-determined. In this case, there may exist multiple feasible solutions, indicating that the solution of 𝒫3 is merely one among those of 𝒫2. Importantly, this solution does not necessarily represent the optimal one that maximizes the AIR, underscoring that 𝒫2 is indeed an optimal PCS model compared to 𝒫3. This also highlights the need for specialized algorithms for higher-order QAM modulation schemes, i.e., seeking a solution of the linear equation system (<ref>) via the convex program (<ref>). Finally, we highlight that both 𝒫2 and 𝒫3 may be implemented in an offline manner, where the one-to-one mapping between each 𝐩 and each value of c_0 may be computed and stored in a look-up table for practical use. § SIMULATIONS We consider OFDM signals with normalized bandwidth and L=64 subcarriers. Conventional radar AF performance is assessed by evaluating 10log_10|Λ(τ,ν)|^2, since Λ(τ,ν) is deterministic, whereas for random OFDM signaling, all AFs are evaluated in accordance with their average performance, i.e., Λ̄(τ,ν) = 10log_10 (1/N_MC)∑^N_MC_m=1|Λ_m(τ,ν)|^2, where N_MC=5000 trials are used, and Λ_m(τ,ν) represents the mth realization of the random AF. The average AFs are normalized as well. Recalling (<ref>), (<ref>) and (<ref>), we observe that Λ̄(τ,ν) ≈10log_10𝔼{|Λ|^2} =10log_10(σ^2_Λ + |𝔼[Λ]|^2) = 10log_10(σ^2_S + σ^2_C + |𝔼[Λ_S]|^2). Then it is evident that the sidelobes of the AF are determined by the input distribution vector 𝐩 only through σ^2_S, since both σ^2_C and 𝔼[Λ_S] are independent of 𝐩. §.§ AF Performance Before presenting our PCS approaches with non-uniformly distributed constellations, we first evaluate the AF of OFDM communication signals for uniformly distributed 64-PSK and 64-QAM modulations. Notably, Fig. <ref> unveils the presence of three peaks in the average AF of both 64-PSK and 64-QAM modulated signals, a phenomenon arising from the utilization of normalized axes with respect to T_p and B, respectively. Therefore, two additional peaks appear on account of the Doppler ambiguity. As exhibited in Fig. <ref>, it is evident that 64-PSK possesses significantly better average AF performance compared to 64-QAM in terms of the zero-Doppler slice (i.e., the autocorrelation function). The maximum gap of nearly 14 dB may be attributed to the lower sidelobes of 64-PSK in the zero-Doppler slice. Additionally, Fig. <ref> and Fig. <ref> depict the zero-Doppler slices for a single realization of 64-PSK and 64-QAM modulated OFDM signals, respectively. The results demonstrate that both modulations result in random fluctuations in the AF, while 64-PSK is again superior to 64-QAM in terms of the lower random sidelobes. Next, in Fig.
<ref>, we present analytical results of Λ̄_S and Λ̄_C, alongside the simulated autocorrelation functions of 64-QAM and 64-PSK, respectively. The sensing performance gap between 64-QAM and 64-PSK stems solely from Λ̄_S. Consequently, the statistical characteristics of Λ̄_C bear no influence on the PCS method. §.§ ISAC Performance with Heuristic PCS Approach We next evaluate the effectiveness of the heuristic PCS approach in the context of QAM constellations (referred to as “16/64-QAM-PCS”). As shown in Fig. <ref>(b) and (e), when c_0=1, the optimization model seeks the solution with optimal sensing performance, in terms of the smallest fourth moment of constellation amplitudes, i.e., 𝔼{A^4_x} - 1 =0. For 16-QAM, PCS yields an output characterized by unit-modulus points, constituting a pseudo 8-PSK. It is worth emphasizing that this constellation is not a real 8-PSK, since the phase differences between adjacent points are not equal. On the other hand, in the case of 64-QAM, the PCS algorithm cannot find a constant-modulus circle with unit power, and thus outputs two constant-modulus circles proximate to the unit-modulus circle, which corresponds to the fourth-moment value of 1.0363. In addition, when c_0 increases, the PCS outputs a constellation with non-uniform input distribution, which balances between the best sensing (c_0=1) and the best communication (i.e., uniformly distributed QAM) performance. To assess the system-level sensing performance, and to illustrate the benefits of reducing sidelobes, we further consider a use case of detecting weak targets in the presence of strong self-interference (SI), which is applicable to practical ISAC scenarios operating in the in-band full duplex (IBFD) mode for short-range target detection <cit.>. To recover the weak target, the smallest-of constant false alarm rate (SO-CFAR) detector <cit.> is exploited, since the power of the SI and noise elsewhere is not uniform. Thanks to such adaptive signal processing, the SI can be effectively excluded from the computation of the detection threshold. Throughout 5000 Monte Carlo trials, the probability of false alarm (P_fa) is fixed at 10^-4, and the weak target is placed in the 8th range cell, close to the SI at the 0th range cell. The sensing signal-to-noise ratio (SNR) is defined as the power ratio between the weak target and the noise, while the power ratio between the SI and the noise is fixed at 10 dB. Fig. <ref> illustrates the performance of SO-CFAR for random realizations of 64-QAM and 64-PSK, demonstrating that 64-PSK is more beneficial for weak target detection due to the lower random sidelobes of the SI, while 64-QAM results in a higher threshold on top of the weak target. Moreover, the detection performance of 64-QAM-PCS falls between those of 64-QAM and 64-PSK. Nevertheless, the results of single realizations are not sufficiently convincing. To evaluate the average performance with N_MC Monte Carlo trials, the probability of detection (P_d) versus the sensing SNR is portrayed in Fig. <ref> and Fig. <ref>, which shows again that the proposed PCS approach achieves a performance tradeoff between 64-QAM and 64-PSK in an average sense. This indicates that the practical sensing performance (i.e., the P_d) may be flexibly adjusted by controlling the value of c_0. To evaluate the communication performance, we compute the AIR in an AWGN channel with Monte Carlo integrals in (<ref>). In Fig. <ref> and Fig. <ref>, the noise power σ^2 controls the receive communication SNR.
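To make the procedure concrete, the following is a minimal Python sketch (our illustration, not code from the paper) of such a Monte Carlo AIR estimate for an arbitrary input distribution over an AWGN channel; the constellation, probabilities, noise power, and trial count are placeholder choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def air_awgn(points, probs, sigma2, n_mc=5000):
    """Monte Carlo estimate of I(X;Y) in bits for a discrete input
    over a complex AWGN channel y = x + n, n ~ CN(0, sigma2)."""
    points = np.asarray(points, dtype=complex)
    probs = np.asarray(probs, dtype=float)
    # draw symbols according to the input distribution, then add noise
    idx = rng.choice(len(points), size=n_mc, p=probs)
    noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(n_mc)
                                   + 1j * rng.standard_normal(n_mc))
    y = points[idx] + noise
    # conditional densities p(y|x) for every constellation point
    d2 = np.abs(y[:, None] - points[None, :]) ** 2        # (n_mc, Q)
    p_y_given_x = np.exp(-d2 / sigma2) / (np.pi * sigma2)
    p_y = p_y_given_x @ probs                              # mixture density p(y)
    # I(X;Y) = E[log2( p(y|x) / p(y) )] over the joint distribution
    return np.mean(np.log2(p_y_given_x[np.arange(n_mc), idx] / p_y))

# example: uniform 16-QAM normalized to unit average power
pts = np.array([a + 1j * b for a in (-3, -1, 1, 3) for b in (-3, -1, 1, 3)])
pts = pts / np.sqrt(np.mean(np.abs(pts) ** 2))
print(air_awgn(pts, np.ones(16) / 16, sigma2=0.01))
```

At σ^2=0.01 the estimate for uniform 16-QAM approaches 4 bps/Hz, consistent with the high-SNR behavior discussed next.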
For a high SNR case (σ^2=0.01), it is observed that the AIR reaches its maximum value of 4 bps/Hz at c_0=1.32 for 16-QAM-PCS and 6 bps/Hz at c_0=1.38 for 64-QAM-PCS, corresponding to the entropies of the uniformly distributed 16-QAM and 64-QAM constellations; these c_0 values are precisely the fourth moments of the uniform constellations, namely, ∑^Q_q=1 A^4_q ·(1/Q) = 1.32 for 16-QAM (Q=16) and 1.3805 for 64-QAM (Q=64). When c_0=1, the high-SNR AIR is 3 bps/Hz, achieved by the pseudo 8-PSK constellation as shown in Fig. <ref>(b), indicating that the optimal sensing performance is attained at the price of a 1 bps/Hz rate loss. Moreover, there is an obvious tradeoff between S&C in the region of c_0 ∈ [1,1.32], with known probability distributions on this curve, which is consistent with our analysis in Sec. IV-B. Notably, when c_0>1.32, the PCS output is no longer uniform, resulting in a declining AIR. In contrast, for c_0>1.62, the fourth moment of the amplitudes saturates at its largest value and thus the AIR remains constant as well. Fig. <ref> and Fig. <ref> further reveal the advantages of the proposed PCS method across various SNR values. As anticipated in Fig. <ref>, both 16-QAM and 16-PSK reach their capacity limit of 4 bps/Hz when the communication SNR is sufficiently high. However, in the relatively low SNR region, namely, for SNR = 10 dB in Fig. <ref>, a noticeable gap becomes apparent between 16-QAM and 16-PSK. Thanks to the heuristic PCS customization, one may achieve a transmission rate gain at the expense of a sensing performance loss compared with the PSK counterparts, striking a scalable tradeoff between S&C. Similar results for 64-QAM-PCS are also portrayed in Fig. <ref>. §.§ ISAC Performance with Optimal PCS Approach We now evaluate the behavior of the proposed MBA algorithm for the optimal PCS optimization model, and compare its performance with the heuristic PCS approach. As demonstrated previously, the tradeoff regime between S&C lies in c_0∈[1, 1.32] for 16-QAM and c_0∈[1.0363, 1.3805] for 64-QAM. Therefore, we concentrate only on these segments in the following. Notably, the optimal PCS and heuristic PCS approaches have the same and unique solution for the 16-QAM-based constellation, which has been clarified in Sec. <ref>. That is why we consider only the 64-QAM case below. In Fig. <ref>, the two proposed PCS approaches are compared in terms of their S&C performance with varying c_0. On the one hand, the optimal PCS approach slightly outperforms its heuristic PCS counterpart in achieving a higher AIR. This confirms that the heuristic PCS, without involving any communication metric, already approaches the optimal performance bound (i.e., the optimal PCS results) at a significantly reduced complexity. On the other hand, these two approaches have almost identical P_d due to the same fourth moment of constellation amplitudes. We may conclude that optimal PCS achieves the Pareto frontier between S&C performance, while the low-complexity heuristic PCS yields near-optimal solutions. Finally, the tradeoff regions for the optimal PCS and heuristic PCS approaches are explicitly characterized in Fig. <ref>, demonstrating that optimal PCS in general achieves better performance than heuristic PCS, despite the relatively limited performance gain. The performance of conventional PSK and QAM with uniform constellation points is also depicted for comparison. Notably, the P_d of PSK is larger than that of the PCS results, since the PCS approaches for the 64-QAM constellation cannot achieve constant-modulus shaping.
However, the AIR of PSK is inferior to that of the PCS-enabled 64-QAM constellations, which coincides with the observation in Fig. <ref>. Overall, the resulting S&C performance boundaries confirm that both PCS approaches outperform the time-sharing strategy <cit.>, corresponding to a line segment connecting the sensing-optimal point to the communication-optimal point. Moreover, the optimal constellation probabilities on the performance boundaries are readily obtainable as well through the proposed PCS optimization approach. Different from the uniform distribution, both PCS approaches may scale the constellation probability distribution by controlling c_0, thereby affecting both S&C performance simultaneously. For higher-order QAM modulations or different noise levels, a look-up table of constellation probabilities may be conceived in an offline manner, indicating that the proposed ISAC signaling strategy may be configured in advance as per the practical service requirements. § CONCLUSION This article explored the tradeoff between S&C using OFDM communication signals in monostatic ISAC systems. We first investigated the statistical characteristics of the AF for OFDM signaling modulated by random symbols with arbitrary input distribution. Then, an optimal PCS approach was devised by maximizing the AIR, subject to the fourth-moment, power, and probability constraints of the constellation symbols, which was numerically solved by a tailored MBA algorithm. Furthermore, a heuristic PCS approach was proposed by omitting the AIR metric, in an effort to actively control the fourth moment of the constellation, and passively scale the communication performance by adjusting the input distribution. The main characteristics of our approaches are as follows: 1) In contrast to the conventional uniform QAM modulation, the enhanced sensing performance, owing to the lower sidelobes of matched filtering, is attained at the expense of a certain AIR loss. 2) In contrast to the conventional uniform PSK modulation, the AIR in the low communication SNR region can be improved at the expense of a certain sensing performance loss. Therefore, our approaches may strike a scalable performance tradeoff between S&C. In addition, the heuristic PCS approach achieves performance very close to the S&C tradeoff boundary attained by the optimal PCS approach, while at a much lower computational complexity. Finally, both PCS approaches may be implemented offline in practical 6G ISAC applications. § PROOF OF PROPOSITION 1 Substituting s(t) into (<ref>) yields Λ(τ,ν) =∑^L-1_l_1=0∑^L-1_l_2=0 A_l_1A_l_2exp(j(ψ_l_1-ψ_l_2)) ×∫^∞_-∞ϕ_l_1(t)ϕ^*_l_2(t-τ) exp(-j2πν t) dt. The integral in (<ref>) may be further recast as ∫^∞_-∞ ϕ_l_1(t) ϕ^*_l_2(t-τ) exp(-j2πν t) dt = exp(j2π l_2Δ f τ) ×∫^T_max_T_minexp{j2π[(l_1-l_2)Δ f -ν] t} dt. To proceed, we rely on the following equation: ∫^T_max_T_minexp(j2π ft)dt = T_diffsinc(f T_diff) exp(j2π f T_avg). Then it is straightforward to reformulate (<ref>) as ∫^∞_-∞ϕ_l_1(t)ϕ^*_l_2(t-τ) exp(-j2πν t)dt= T_diffsinc{[(l_1-l_2)Δ f -ν] T_diff}×exp{j2π[((l_1-l_2)Δ f -ν) T_avg + l_2Δ f τ]}. Inserting (<ref>) into (<ref>) proves Proposition 1. § PROOF OF PROPOSITION 2 The derivation of 𝔼(Λ_SΛ^*_C) is as follows: 𝔼{Λ_SΛ^*_C} =K_0∑^L-1_l=0∑^L-1_l_1=0∑^L-1_l_2=0, l_2≠ l_1𝔼{ A^2_l A_l_1A_l_2×exp(j(ψ_l_1-ψ_l_2)) }K_l,l_1,l_2, where K_0 and K_l,l_1,l_2 are constants with respect to the input distribution of the constellation.
Then we have 𝔼(Λ_SΛ^*_C) = 0, due to the fact that 𝔼{ A^2_lA_l_1A_l_2exp(-j(ψ_l_1-ψ_l_2))} ={[ 𝔼{A^3_l_1e^-jψ_l_1}𝔼{A_l_2e^jψ_l_2}=0, l=l_1≠ l_2; 𝔼{A^3_l_2e^jψ_l_2}𝔼{A_l_1e^-jψ_l_1}=0, l=l_2≠ l_1; 𝔼{A^2_l}𝔼{A_l_1e^jψ_l_1}𝔼{A_l_2e^-jψ_l_2}=0,l≠ l_1≠ l_2 ]. where the derivation exploits the rule that 𝔼{ABC}=𝔼{A}𝔼{B}𝔼{C}, if A, B and C are independent. For example, when l_1≠ l_2 we have 𝔼{A_l_1A_l_2e^-j(ψ_l_1-ψ_l_2)} = 𝔼{A_l_1e^-jψ_l_1}𝔼{A_l_2e^jψ_l_2} since A_l_1e^-jψ_l_1 and A_l_2e^jψ_l_2 are generated from i.i.d. input symbols in the constellation. Moreover, all three cases equal to zero due to that 𝔼{A_lexp(jψ_l) } = 0, which has been shown in Assumption 1. Benefiting from this, it is straightforward to see that 𝔼{Λ_C(τ,ν)}=0. Proposition 2 is thus proved. § PROOF OF PROPOSITION 3 Following a similar derivation procedure of σ^2_S, we formulate the variance of Λ̃_S as σ̃^2_S = 𝔼[|Λ̃_S|^2] - |𝔼[Λ̃_S]|^2 = T^2_diffsinc^2( - ν T_diff) ∑^N-1_n=0∑^L-1_l=0∑^N-1_n'=0∑^L-1_l'=0{𝔼{A^2_n,lA^2_n',l'} - 1}×exp{j2π[(l-l')Δ f τ - (n-n')ν T_p ]} In accordance with 𝔼{A^2_n,l A^2_n',l'}= {[ 𝔼{A^4_n,l}, n=n',l=l'; 𝔼{A^2_n,l}𝔼{A^2_n',l'}=1, Otherwise ]. we may further simplify σ̃^2_S(τ,ν) as σ̃^2_S= T^2_diffsinc^2( - ν T_diff) ∑^N-1_n=0∑^L-1_l=0(𝔼{A^4_n,l} - 1). Compared with (<ref>), it is evident that σ̃^2_S(τ,ν) is the variance accumulation of N symbols. Therefore, Proposition 3 is proved. Just for discussion, if the constellation varies with the symbol and the subcarrier, namely, the subscripts n and l, then more DoFs can be exploited to scale the signaling randomness. This can be left as our future work. § PROOF OF PROPOSITION 4 Likewise, we derive the variance of Λ̃_C as σ̃^2_C =𝔼{|Λ̃_C|^2} - |𝔼{Λ̃_C}|^2. Firstly,𝔼{Λ̃_C} = 0 can be readily verified. We therefore only need to derive𝔼{|Λ̃_C|^2}. For brevity, we denote the two terms in (<ref>) by Z_1 and Z_2, and subsequently reformulate (<ref>) as σ̃^2_C = 𝔼{|Z_1|^2}+𝔼{|Z_2|^2} -2{𝔼{Z_1Z^*_2}}, where 𝔼{|Z_1|^2} =∑^N-1_n=0∑^L-1_l_1=0∑^L-1_l_2=0l_2≠ l_1∑^N-1_n'=0∑^L-1_l'_1=0∑^L-1_l'_2=0l'_2≠ l'_1𝔼{ A_n,l_1 A_n,l_2×A_n',l'_1A_n',l'_2 e^j(ψ_n,l_1-ψ_n,l_2-ψ_n',l'_1 +ψ_n',l'_2)}× R_1(n,l_1,l_2)R^*_1(n',l'_1,l'_2), 𝔼{|Z_2|^2} =∑^N-1_n_1=0∑^N-1_n_2=0n_2 ≠ n_1∑^L-1_l_1=0∑^L-1_l_2=0∑^N-1_n'_1=0∑^N-1_n'_2=0n'_2 ≠ n'_1∑^L-1_l'_1=0∑^L-1_l'_2=0𝔼{A_n_1,l_1 A_n_2,l_2A_n'_1,l'_1 A_n'_2,l'_2e^j(ψ_n_1,l_1-ψ_n_2,l_2 -ψ_n'_1,l'_1 +ψ_n'_2,l'_2)}× R_2(n_1,n_2,l_1,l_2) R^*_2(n'_1,n'_2,l'_1,l'_2), 𝔼{Z_1Z^*_2} =∑^N-1_n=0∑^L-1_l_1=0∑^L-1_l_2=0l_2≠ l_1∑^N-1_n'_1=0∑^N-1_n'_2=0n'_2 ≠ n'_1∑^L-1_l'_1=0∑^L-1_l'_2=0𝔼{ A_n,l_1 A_n,l_2×A_n'_1,l'_1 A_n'_2,l'_2 e^j(ψ_n,l_1-ψ_n,l_2 - ψ_n'_1,l'_1 + ψ_n'_2,l'_2)}× R_1(n,l_1,l_2)R^*_2(n'_1,n'_2,l'_1,l'_2). For further simplifications, we note the following facts that 𝔼{A_n,l_1A_n,l_2A_n',l'_1A_n',l'_2 ×exp(j(ψ_n,l_1-ψ_n,l_2-ψ_n',l'_1+ψ_n',l'_2)) }= { [ 𝔼{ A^2_n,l_1A^2_n',l'_1 },l_1=l_2,l'_1=l'_2; 𝔼{ A^2_n,l_1A^2_n,l_2 }, l_1=l'_1,l_2=l'_2,n=n'; 0,otherwise ] . 𝔼{A_n_1,l_1A_n_2,l_2A_n'_1,l'_1A_n'_2,l'_2 ×exp(j(ψ_n_1,l_1-ψ_n_2,l_2-ψ_n'_1,l'_1+ψ_n'_2,l'_2)) }={ [𝔼{ A^2_n_1,l_1A^2_n'_1,l'_1 }, l_1=l_2,l'_1=l'_2,n_1=n_2,n'_1=n'_2;𝔼{ A^2_n_1,l_1A^2_n_2,l_2 }, l_1=l'_1,l_2=l'_2,n_1=n'_1,n_2=n'_2;0, otherwise ] . 𝔼{A_n,l_1A_n,l_2A_n'_1,l'_1A_n'_2,l'_2 ×exp(j(ψ_n,l_1-ψ_n,l_2-ψ_n'_1,l'_1+ψ_n'_2,l'_2)) }= { [𝔼{ A^2_n,l_1A^2_n',l'_1 }, l_1=l_2,l'_1=l'_2;𝔼{ A^2_n,l_1A^2_n,l_2 }, l_1=l'_1,l_2=l'_2,n=n'_1=n'_2;0, otherwise ] . 
Therefore, the expectations can be significantly simplified as 𝔼{|Z_1|^2}= ∑^N-1_n=0∑^L-1_l_1=0∑^L-1_l_2=0, l_2≠ l_1 |R_1(n,l_1,l_2)|^2, 𝔼{|Z_2|^2}= ∑^N-1_n_1=0∑^N-1_n_2=0, n_2 ≠ n_1∑^L-1_l_1=0∑^L-1_l_2=0|R_2(n_1,n_2,l_1,l_2) |^2, 𝔼{Z_1Z^*_2}= 0. Finally, one can obtain σ̃^2_C= ∑^N-1_n=0∑^L-1_l_1=0∑^L-1_l_2=0, l_2≠ l_1 |R_1(n,l_1,l_2)|^2 + ∑^N-1_n_1=0∑^N-1_n_2=0, n_2 ≠ n_1∑^L-1_l_1=0∑^L-1_l_2=0|R_2(n_1,n_2,l_1,l_2) |^2. Similarly, this result suggests that σ̃^2_C is a fixed constant for different constellations. Therefore, Proposition 4 is proved. | http://arxiv.org/abs/2312.15941v1 | {
"authors": [
"Zhen Du",
"Fan Liu",
"Yifeng Xiong",
"Tony Xiao Han",
"Yonina C. Eldar",
"Shi Jin"
],
"categories": [
"eess.SP"
],
"primary_category": "eess.SP",
"published": "20231226080945",
"title": "Reshaping the ISAC Tradeoff Under OFDM Signaling: A Probabilistic Constellation Shaping Approach"
} |
Cone Holography with Neumann Boundary Conditions and Brane-localized Gauge Fields

Zheng-Quan Cui, Yu Guo, and Rong-Xin Miao

School of Physics and Astronomy, Sun Yat-Sen University, Zhuhai 519082, China

January 14, 2024

Abstract Cone holography is a codimension-n doubly holographic model, which can be interpreted as the holographic dual of edge modes on defects. The initial model of cone holography is based on mixed boundary conditions. This paper formulates cone holography with Neumann boundary conditions, where the brane-localized gauge fields play an essential role. Firstly, we illustrate the main ideas in an AdS_4/CFT_1 toy model. We show that the U(1) gauge field on the end-of-the-world brane can make the typical solution consistent with Neumann boundary conditions. Then, we generalize the discussions to general codimension-n cone holography by employing brane-localized p-form gauge fields. We also investigate perturbative solutions and prove the mass spectrum of Kaluza-Klein gravitons is non-negative. Furthermore, we prove that cone holography obeys the holographic c-theorem. Finally, inspired by the recently proposed chiral model in AdS/BCFT, we construct another type of cone holography with Neumann boundary conditions by applying massive vector (Proca) fields on the end-of-the-world brane.

§ INTRODUCTION The AdS/CFT correspondence <cit.> provides a fruitful framework for understanding the nature of gravity and strongly coupled gauge theories, which has been used in many branches of physics, e.g., condensed matter physics, quantum chromodynamics, hydrodynamics, cosmology, the black hole information paradox, etc. Since a real physical system generally has boundaries, a natural generalization of the AdS/CFT correspondence is considering the holographic dual of boundary conformal field theory (BCFT), namely the AdS/BCFT correspondence <cit.> [See also <cit.> for AdS/BCFT with various new boundary conditions.]. AdS/BCFT proposes that a BCFT on the AdS boundary is dual to gravity coupled with an end-of-the-world (EOW) brane in bulk. It is a powerful tool to explore boundary effects such as the Casimir effect <cit.> and anomalous transports <cit.>. It is closely related to brane world holography <cit.> and the so-called double holography, which plays a vital role in the recent breakthrough on the black hole information paradox by the island prescription <cit.>. The island surface terminates on an EOW brane and enables a novel phase transition in recovering the Page curve of Hawking radiation. In the past few years, the island prescription has been explored from many aspects <cit.>. As a generalization of AdS/CFT and double holography, a novel codimension-2 holography called wedge holography was proposed in Ref. <cit.>. It is suggested that the classical gravity in a wedge bulk is dual to the quantum gravity on the EOW brane and is dual to a conformal field theory on the corner of the wedge. See Fig. <ref> (left) for the geometry. Wedge holography can be obtained as a special limit of AdS/BCFT; see Fig. <ref> (right). Wedge holography has passed non-trivial tests from entanglement/Rényi entropy, two-point functions of the energy-momentum tensor, and the holographic g-theorem <cit.>. Furthermore, it is proved to be equivalent to AdS/CFT for one novel class of solutions <cit.>. Unlike the usual double holography, wedge holography includes a massless graviton on the EOW branes <cit.>.
Remarkably, the effective action on the branes is ghost-free higher derivative gravity plus a CFT [Since the effective theory on the brane is obtained from Einstein gravity in bulk, it is naturally ghost-free. Higher derivative gravity generally suffers the ghost problem. The CFTs play an important role in removing the ghost of higher derivative gravity.], or equivalently, ghost-free dRGT-type multi-gravity <cit.>. See <cit.> for recent developments in wedge holography. Inspired by wedge holography, a codimension-n holography named cone holography <cit.> was proposed. See Fig. <ref> (left) for the geometry of cone holography. It is conjectured that the classical gravity in a (d+1)-dimensional conical bulk is dual to the CFT on a p-dimensional defect. Similar to wedge holography, cone holography can be derived as a particular limit of AdS/dCFT <cit.>. See Fig. <ref> (right) for the schematic. In the zero-volume limit M̂→ 0, the bulk modes of the dCFT disappear, and only the edge modes on the defect D survive. In this sense, cone holography can be regarded as a holographic dual of edge modes on the defect. Cone holography yields the expected Weyl anomaly, entanglement/Rényi entropy, two-point functions, and so on <cit.>. It also contains a massless graviton on the brane <cit.>. Due to technical difficulties, the initial model of cone holography <cit.> is mainly based on mixed boundary condition (MBC), where one imposes Neumann boundary condition (NBC) on the AdS_p+1 sector while choosing Dirichlet boundary condition (DBC) on the S^q sector for the EOW brane Q. The present paper aims to explore cone holography with full NBC on the EOW brane. The NBC is closely related to the junction condition of branes and, thus, is a natural boundary condition. In general, there are two possible ways to achieve this goal. One is choosing a suitable embedding of the EOW brane, which is the way adopted in Ref. <cit.>. However, the embedding function depends on the angle, and it is generally challenging to get analytical solutions. The other method is taking into account brane-localized gauge fields. This is the method applied in this paper, which has been used in Ref. <cit.>. We observe that the obstruction to NBC is due to the asymmetry of the S^q and AdS_p+1 sectors in the EOW brane Q=S^q×AdS_p+1. Considering the brane-localized p-form fields, one can compensate for the asymmetry to satisfy NBC. Brane-localized gravity, such as Dvali-Gabadadze-Porrati (DGP) gravity <cit.>, can do the same job. However, it requires a negative DGP gravity and thus suffers the ghost problem <cit.>. On the other hand, the brane-localized p-form field takes the standard form and is ghost-free. Besides, we analyze the perturbative solution and verify that the mass spectrum of Kaluza-Klein (KK) gravitons is non-negative, which strongly supports our model. Furthermore, we prove that our model obeys the holographic c-theorem, which is another support for our results. Finally, inspired by the chiral theory in AdS/BCFT <cit.>, we realize NBC for cone holography with a codimension-2 tensive brane E by applying brane-localized massive vector fields on the EOW brane Q. This paper is organized as follows. In section <ref>, we briefly review cone holography and the problem with NBC. We find negative DGP gravity enables NBC but suffers the ghost problem. In section <ref>, we study a toy model of AdS_4/CFT_1. We find the brane-localized U(1) gauge field can accomplish NBC for cone holography.
Section <ref> generalizes the toy model to arbitrary dimensions and achieves NBC by brane-localized p-form fields. Section <ref> is devoted to analyzing the tensor perturbation and mass spectrum. Section <ref> discusses the c-theorem of cone holography. In section <ref>, we construct one class of cone holography with NBC by brane-localized massive vector fields. Finally, we summarize the results and discuss some open problems in section <ref>. § REVIEW OF CONE HOLOGRAPHY This section briefly reviews cone holography and the problem of NBC. Ultimately, we show negative DGP gravity can achieve NBC but has the ghost problem. We leave the resolution of this problem to the following sections. §.§ Cone holography Let us recall the main ideas of cone holography. We start with AdS/dCFT shown in Fig. <ref> (right). Here, dCFT denotes a CFT coupled with a codimension-1 defect P (boundary) and a codimension-(q+1) defect D on the AdS boundary M̂. The boundary P and defect D are dual to an EOW brane Q and a codimension-(q+1) brane E in bulk C, respectively. AdS/dCFT claims that a dCFT coupled with two defects D and P on the AdS boundary M̂ is dual to gravity coupled with two branes E and Q in bulk C. By taking the zero-volume limit M̂→ 0, we obtain cone holography as shown in Fig. <ref> (left). Since bulk modes in M̂ disappear and only the edge modes on defects P→ D survive in such a limit, cone holography can be regarded as a holographic dual of edge modes on defects. In cone holography, as shown in Fig. <ref> (left), it is proposed that the classical gravity in conical bulk C is dual to “quantum gravity” on an EOW brane Q together with a codimension-(q+1) brane E, and is dual to the CFT on the defect D=∂ Q=∂ E. The notations of cone holography are summarized in Tab. <ref>. Note that the geometry of cone holography is generally non-factorizable. There is coordinate mixing between the different sectors ℝ^1, S^q, AdS_p+1. See (<ref>) for an example. However, we still label the geometry by the direct product ℝ^1× S^q×AdS_p+1 to emphasize the topology. There are three perspectives describing cone holography <cit.>. * Perspective 1: A classical gravity in the conical bulk C. Here C is a (d+1)-dimensional asymptotically AdS spacetime with the typical metric ds_C^2=g_ABdx^Adx^B=dr^2+sinh^2(r)dΩ_q^2+cosh^2(r)ds_AdS_p+1^2 , where the metrics of the unit q-sphere and the (p+1)-dimensional AdS spacetime respectively are dΩ_q^2=dθ^2+sin^2(θ)dΩ_q-1^2 , ds_AdS_p+1^2 =(dz^2+σ_îĵdy^îdy^ĵ)/z^2= (1/z^2)(dz^2-dt^2+∑_î=1^p-1dy_î^2) . * Perspective 2: “Quantum gravity” on an EOW brane Q and a codimension-(q+1) brane E=AdS_p+1. The typical EOW brane is located at r=ρ, which yields the induced metric ds_Q^2=h_αβdx^αdx^β=sinh^2(ρ) dΩ_q^2+cosh^2(ρ)ds_AdS_p+1^2 . It should be mentioned that the EOW brane Q is not an AdS brane. The codimension-(q+1) brane E is located at r=0 and the induced metric on E is ds_E^2=γ_ijdx^idx^j=ds_AdS_p+1^2 . The conformal invariance of the defect D requires that the brane E is an AdS brane. For convenience, we also employ the following notation ds_Q^2=sinh^2(ρ)ϑ_abdx^adx^b+cosh^2(ρ)γ_ijdx^idx^j . * Perspective 3: A CFT on the p-dimensional defect D=∂ E=∂ Q. The defect is at z=0 and the typical metric is ds_D^2=σ_îĵdy^îdy^ĵ=-dt^2+∑_î=1^p-1dy_î^2.
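As a quick sanity check (our own sketch, not part of the paper), one can verify symbolically that the typical bulk metric of Perspective 1 is Einstein, R_AB=-d g_AB; the snippet below does so for the AdS_4 case d=3 (q=p=1), which is also the toy model studied later.

```python
import sympy as sp

# Symbolic check (ours, not the authors' code) that the d = 3 (q = p = 1)
# typical metric dr^2 + sinh^2(r) dθ^2 + cosh^2(r)(dz^2 - dt^2)/z^2
# is Einstein: R_AB = -3 g_AB, i.e. it solves R_AB = -d g_AB.
r, th, z, t = sp.symbols('r theta z t', positive=True)
x = [r, th, z, t]
g = sp.diag(1, sp.sinh(r)**2, sp.cosh(r)**2/z**2, -sp.cosh(r)**2/z**2)
gi, n = g.inv(), 4

def Gamma(a, b, c):   # Christoffel symbols Γ^a_{bc}
    return sp.Rational(1, 2)*sum(gi[a, e]*(sp.diff(g[e, b], x[c])
        + sp.diff(g[e, c], x[b]) - sp.diff(g[b, c], x[e])) for e in range(n))

G = [[[sp.simplify(Gamma(a, b, c)) for c in range(n)]
      for b in range(n)] for a in range(n)]

def Ric(b, c):        # R_{bc} = ∂_a Γ^a_{bc} - ∂_c Γ^a_{ba} + ΓΓ - ΓΓ
    return sp.simplify(sum(sp.diff(G[a][b][c], x[a]) - sp.diff(G[a][b][a], x[c])
        + sum(G[a][a][e]*G[e][b][c] - G[a][c][e]*G[e][b][a] for e in range(n))
        for a in range(n)))

assert all(sp.simplify(Ric(i, j) + 3*g[i, j]) == 0
           for i in range(n) for j in range(n))
print("R_AB = -3 g_AB verified")
```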
The action of cone holography in bulk takes the form I= (1/16π G_N)∫_C d^d+1x√(-g)(R-2Λ)+(1/8π G_N)∫_Qd^dx√(-h)(K-T_Q+L_Q) -(1/8π G_N)∫_Ed^p+1x√(-γ) T_E, where G_N denotes the gravitational constant, R is the Ricci scalar, -2Λ=d(d-1) is the cosmological constant (we have set the AdS radius L=1), d=q+p+1, K labels the extrinsic curvature, L_Q denotes possible brane-localized gravity and matter fields, and T_Q and T_E are the tensions of the branes Q and E, respectively. For simplicity, we set 16π G_N=1 in the following discussions. According to <cit.>, there is no well-defined thin brane limit in Einstein gravity for codimensions higher than two. As a result, we have T_E=0 for the codimension-(q+1) brane E with q≥ 2. As for the codimension-2 brane E with q=1, the tension T_E is non-trivial and characterizes the conical singularity T_E=2π(1-1/n), where 2π n is the period of the angle, i.e., θ∼θ+2π n. §.§ NBC problem In the following, we discuss the NBC problem of cone holography. From the action (<ref>), we derive NBC on the EOW brane Q NBC: K_αβ-(K-T_Q) h_αβ=T_αβ, where T_αβ=-(2/√(-h))δ (√(-h) L_Q)/δ h^αβ. We first discuss the minimal model with T_αβ=L_Q=0 for simplicity. We show below that the typical metric (<ref>) does not satisfy NBC on the EOW brane located at r=ρ. From NBC (<ref>) with T_αβ=0, we derive K_αβ=(d/(d-1))T_Q h_αβ. For the typical metric (<ref>) in Gauss normal coordinates, the extrinsic curvatures take the simple expression K_αβ=(1/2)∂ h_αβ/∂ r|_r=ρ, which yields S^q sector: K_ab= coth(ρ) h_ab, AdS_p+1 sector: K_ij=tanh(ρ) h_ij. Clearly, (<ref>) and (<ref>) cannot satisfy NBC (<ref>) at the same time unless the EOW brane is located at infinity, ρ→∞. This is the NBC problem for cone holography, which comes from the asymmetry of the S^q sector and the AdS_p+1 sector on the EOW brane. There are several possible ways to solve the above problem. First, one chooses a mixed boundary condition instead of NBC MBC: δ h_ab=0, K_ij-(K-T_Q) h_ij=0. That is, one imposes DBC on the S^q sector while imposing NBC on the AdS_p+1 sector of the EOW brane <cit.>. Second, one considers more general embedding functions r=r(θ). However, it is difficult to find analytical solutions for the case with a tensive brane E <cit.>. Besides, the configuration becomes asymmetrical, and the EOW brane Q approaches the AdS boundary M at a specific angle θ, which is not the geometry of Fig. <ref> (left) <cit.>. Third, one considers the more general NBC (<ref>) by adding suitable intrinsic gravity or matter fields on the EOW brane. Let us take DGP gravity with L_Q=λ R_Q as an example. The NBC (<ref>) becomes K_αβ-(K-T_Q) h_αβ=λ( R_Qh_αβ-2R_Qαβ). From the induced metric (<ref>), we derive the intrinsic curvatures on the EOW brane located at r=ρ, R_Q ab=(q-1) csch^2(ρ) h_ab, R_Q ij=-p sech^2(ρ) h_ij. Interestingly, the above curvature asymmetry can compensate that of the extrinsic curvatures (<ref>) and (<ref>), which enables the NBC (<ref>) for suitable parameters T_Q and λ. Substituting the intrinsic curvatures (<ref>) together with the extrinsic curvatures (<ref>) and (<ref>) into NBC (<ref>), we derive the DGP parameter λ =1/(-2 p tanh(ρ)-2(q-1)coth(ρ)) <0, which is negative. We do not show the tension T_Q since it is irrelevant to our discussions. According to <cit.>, negative DGP gravity has the ghost problem. Thus, although negative DGP gravity can solve the NBC problem, it is not well-defined. The following sections will resolve the NBC problem by applying brane-localized gauge fields. In the above discussions, we mainly focus on the AdS space (<ref>) in bulk.
It is no longer a solution for a tensive brane E due to the back-reaction. As discussed above, no thin brane can consistently couple with Einstein gravity for codimensions higher than two. Thus, the only non-trivial brane E is codimension-2. For a tensive codimension-2 brane E, the bulk metric is given by ds^2=dr̅^2/f(r̅)+ f(r̅) dθ^2+r̅^2 (dz^2-dt^2+∑_î=1^p-1dy_î^2)/z^2, with f(r̅)=r̅^2-1-(r̅_h/r̅)^d-2(r̅_h^2-1) , r̅_h=(1+√(d^2 n^2-2 d n^2+1))/(d n) , where the codimension-2 brane E is located at r̅=r̅_h. Note that 2π n is the period of the angle θ, and is related to the brane tension (<ref>). The tensionless case corresponds to n=r̅_h=1. Performing the radial coordinate transformation dr=dr̅/√(f(r̅)), one can rewrite the metric (<ref>) into the form of (<ref>). In summary, this section reviews cone holography and the NBC problem. We show that negative DGP gravity can resolve the NBC problem but suffers the ghost problem. § A TOY MODEL WITH BRANE-LOCALIZED MAXWELL FIELDS In this section, we study a toy model of AdS_4/CFT_1, the simplest cone holography. We show that Maxwell fields on the EOW brane can make NBC consistent. In AdS_4/CFT_1, the line element (<ref>) becomes ds^2=dr^2+sinh^2(r)dθ^2+cosh^2(r)(dz^2-dt^2)/z^2, where the codimension-2 brane E and the EOW brane Q are at r=0 and r=ρ, respectively. Consider an intrinsic U(1) gauge field A_α on the EOW brane Q, with the Lagrangian L_Q=-(1/4)F^αβF_αβ, where F=dA denotes the field strength. The Lagrangian (<ref>) yields the Maxwell equation ∇_α F^αβ=0 , and the energy-momentum tensor on the EOW brane T_αβ=F_ασ F_β^ σ- (1/4) h_αβ F_λσ F^λσ. Then NBC (<ref>) becomes K_αβ-(K-T_Q) h_αβ=F_ασ F_β^ σ- (1/4) h_αβ F_λσ F^λσ. We make the following ansatz for the brane-localized Maxwell field A_α=(0,0,A_t(z) ). Solving the Maxwell equation (<ref>), we get A_t(z)=c_0+c_1/z, where c_0, c_1 are integration constants. We remark that c_1 can be interpreted as the charge in the usual AdS_2/CFT_1. Substituting (<ref>) and (<ref>) into (<ref>), we obtain T_θθ=(1/2) c_1^2 sech^4(ρ) h_θθ, T_ij=-(1/2) c_1^2 sech^4(ρ) h_ij, where (i,j) denote the (z,t) components and h_θθ=sinh^2(ρ), h_zz=-h_tt=cosh^2(ρ)/z^2. Similar to the DGP gravity of section <ref>, the asymmetry of the energy-momentum tensor (<ref>) can compensate for the asymmetry of the extrinsic curvatures (<ref>) and (<ref>) to satisfy the NBC (<ref>). One can easily check that the NBC (<ref>) can be satisfied provided the following parameters are chosen c_1^2=cosh^2(ρ)coth(ρ), T_Q=tanh(ρ)+coth(ρ)-csch(2ρ). Some comments are in order. * First, similar to DGP gravity, the U(1) gauge field can also accomplish NBC on the EOW brane Q. * Second, better than DGP gravity, the brane-localized U(1) gauge field (<ref>) has the correct sign and thus is ghost-free. * Third, one does not need to worry about the divergence of the gauge field on the defect z=0 in (<ref>). Recall that the asymptotic solution of Maxwell fields takes the following form near the AdS_p+1 boundary A(y) ∼ c_0(y) z^0+ c_1(y) z^p-2+⋯, for z∼ 0, where c_0(y) is the background field and c_1(y) denotes the charge/current density on the AdS boundary. Thus, (<ref>) takes the standard expression in AdS_2/CFT_1 and nothing goes wrong (recall E is an AdS_2 brane with p=1). Note that c_1 defined in the vector (<ref>) is the electric charge. It means we need charged branes to satisfy NBC. * Fourth, the key point to achieve NBC is to compensate for the asymmetry of the S^q and AdS_p+1 sectors on the EOW brane Q (see the numerical spot-check of the toy-model solution below). We find that p-form gauge fields can do the job for cone holography in general dimensions.
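Here is the numerical spot-check promised above (ours, not the authors' code): it confirms componentwise that the parameters (<ref>) satisfy the brane equation; the values of ρ and z are arbitrary placeholders.

```python
import numpy as np

# Numerical spot-check of the AdS_4/CFT_1 toy model: with
# c_1^2 = cosh^2(rho) coth(rho) and T_Q = tanh(rho) + coth(rho) - csch(2 rho),
# the brane equation K_ab - (K - T_Q) h_ab = T_ab holds componentwise.
rho, z = 0.7, 1.3                       # arbitrary placeholder values

ch, sh = np.cosh(rho), np.sinh(rho)
c1sq = ch**2 * ch / sh                  # cosh^2(rho) * coth(rho)
T_Q = sh / ch + ch / sh - 1.0 / np.sinh(2 * rho)

# induced metric and extrinsic curvature in (theta, z, t) order
h = np.diag([sh**2, ch**2 / z**2, -ch**2 / z**2])
K = np.diag([(ch / sh) * h[0, 0], (sh / ch) * h[1, 1], (sh / ch) * h[2, 2]])
trK = np.trace(np.linalg.inv(h) @ K)    # = coth(rho) + 2 tanh(rho)

# Maxwell stress of A_t = c_0 + c_1/z on the brane:
# T_{theta theta} = +(c_1^2/2) sech^4(rho) h_{theta theta}, T_ij = -(...) h_ij
s = 0.5 * c1sq / ch**4
T = np.diag([s, -s, -s]) @ h

print(np.allclose(K - (trK - T_Q) * h, T))   # -> True
```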
§ CONE HOLOGRAPHY WITH NBC I This section extends the results of section <ref> to cone holography with general dimensions and codimensions. As discussed above, the critical point is to compensate for the asymmetry of the two sectors of the EOW brane Q by employing suitable brane-localized fields. Besides, we require the theory to be well-defined and ghost-free. We find p-form fields on the EOW brane can achieve this goal. In this section, we treat the cases of tensionless and tensive E branes in a unified way. The line element (<ref>) of cone holography can be generalized to be <cit.> ds^2=dr^2+b(r)^2ϑ_ab(x^c)dx^a dx^b+a(r)^2γ_ij(x^k)dx^i dx^j, where a(r) and b(r) are warp factors determined by Einstein equations, and ϑ_ab and γ_ij are the metrics of the unit q-sphere and AdS_p+1, respectively. For a tensionless brane E, we have b(r)=sinh(r) and a(r)=cosh(r), while for a tensive brane E, there are generally no analytical expressions for b(r) and a(r) [In principle, one can perform the coordinate transformation (<ref>) to derive b(r), a(r) from the metric (<ref>) for a tensive codimension-2 brane E. However, in general, the integral in the coordinate transformation (<ref>) cannot be worked out exactly.]. Fortunately, we do not need exact b(r) and a(r) for our purpose. From (<ref>), we derive the extrinsic curvatures on the EOW brane (r=ρ) K_ab = (1/2)∂ h_ab/∂ r|_r=ρ=b(ρ) b'(ρ)ϑ_ab=(b'(ρ)/b(ρ))h_ab , K_ij = (1/2)∂ h_ij/∂ r|_r=ρ=a(ρ) a'(ρ)γ_ij =(a'(ρ)/a(ρ))h_ij , where the prime denotes the derivative with respect to ρ. The trace of the extrinsic curvature K_αβ is given by K =h^abK_ab+h^ijK_ij = q b'(ρ)/b(ρ)+(p+1)a'(ρ)/a(ρ) . As mentioned in section <ref>, the asymmetry of the extrinsic curvatures (<ref>) and (<ref>) is the central obstruction to NBC. Consider p-form gauge fields on the EOW brane with the Lagrangian density L_Q= -(1/(2(p+1)!))F^μ_1μ_2⋯μ_p+1F_μ_1μ_2⋯μ_p+1, where the p-form field strength F_μ_1μ_2⋯μ_p+1 reads F_μ_1μ_2⋯μ_p+1=(p+1)∂_[μ_1A_μ_2⋯μ_p+1] . Similar to the U(1) gauge field, the p-form field strength (<ref>) is invariant under the gauge transformation A_μ_1μ_2⋯μ_p→ A_μ_1μ_2⋯μ_p+p∂_[μ_1Θ_μ_2⋯μ_p], where Θ_μ_2⋯μ_p is a (p-1)-form gauge function. Similarly, the p-form field satisfies the Bianchi identity ∇_[μ_1F_μ_2⋯μ_p+2]=0, and the equation of motion ∇_μ_1F^μ_1μ_2⋯μ_p+1=0 , which governs the electrodynamics of the p-form field. From the Lagrangian (<ref>), we obtain the symmetric and conserved energy-momentum tensor of the p-form as T_αβ= -2∂ L_Q/∂ h^αβ+h_αβL_Q= (1/p!)F_α μ_1μ_2⋯μ_p F_β^ μ_1μ_2⋯μ_p- (1/(2(p+1)!)) h_αβ F_μ_1μ_2⋯μ_p+1 F^μ_1μ_2⋯μ_p+1 . For the sake of simplicity, we choose the p-form field with only one non-zero component A_ty_1y_2⋯ y_p-1(z). After some calculations, we derive the energy-momentum tensor T_ab= (1/2)b(ρ)^2(z^2/a(ρ)^2)^p+1[∂_z A_ty_1y_2⋯ y_p-1(z)]^2 ϑ_ab= (1/2)(z^2/a(ρ)^2)^p+1[∂_z A_ty_1y_2⋯ y_p-1(z)]^2 h_ab, T_ij= -(1/2) a(ρ)^2 (z^2/a(ρ)^2)^p+1[∂_z A_ty_1y_2⋯ y_p-1(z)]^2 γ_ij =-(1/2)(z^2/a(ρ)^2)^p+1[∂_z A_ty_1y_2⋯ y_p-1(z)]^2 h_ij, which take exactly the right forms to compensate the asymmetry of the extrinsic curvatures (<ref>) and (<ref>) in the NBC (<ref>), provided A_ty_1y_2⋯ y_p-1(z)∼ 1/z^p. Remarkably, the equation of motion (<ref>) indeed gives A_ty_1y_2⋯ y_p-1(z)∼ 1/z^p. That is one of the reasons we choose p-form fields instead of other form fields. Only the p-form field can make everything consistent.
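The statement that the equation of motion forces A_ty_1⋯y_p-1∼ 1/z^p can be seen in one line: for the single-component ansatz, √(-h) F^zty_1⋯y_p-1 ∝ z^p+1∂_z A, so ∇_μ_1F^μ_1⋯μ_p+1=0 reduces to ∂_z(z^p+1∂_z A)=0. A small symbolic check (our sketch, not from the paper):

```python
import sympy as sp

# Symbolic check that the single-component ansatz solves the p-form
# equation of motion: for A ~ 1/z^p the conserved flux z^{p+1} A'(z)
# is constant, i.e. d/dz ( z^{p+1} A'(z) ) = 0.
z, p, C = sp.symbols('z p C', positive=True)
A = C / z**p                         # A_{t y_1 ... y_{p-1}}(z)
flux = z**(p + 1) * sp.diff(A, z)    # conserved flux, here = -p*C
assert sp.simplify(sp.diff(flux, z)) == 0
print("flux z^{p+1} A'(z) =", sp.simplify(flux))
```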
Solving the equation of motion (<ref>) and NBC (<ref>), we finally obtain A_ty_1y_2⋯ y_p-1(z)= ±(a(ρ)^p+1/(p z^p))√(b'(ρ)/b(ρ)-a'(ρ)/a(ρ)) , T_Q= (q-1/2) b'(ρ)/b(ρ)+(p+1/2)a'(ρ)/a(ρ) . It is worth mentioning that the condition b'(ρ)/b(ρ)-a'(ρ)/a(ρ)>0 should be satisfied for a real p-form field. For a tensionless brane E, a(r)=cosh(r) and b(r)=sinh(r) indeed meet the constraint (<ref>). After replacing the warp factors a(ρ)=cosh(ρ) and b(ρ)=sinh(ρ), the solution becomes A_ty_1y_2⋯ y_p-1(z)= ±(cosh^p(ρ)/(p z^p))√(coth(ρ)) , T_Q=q coth(ρ)+p tanh(ρ)-csch(2ρ) , which agrees with the toy model with p=q=1. Recall that only the codimension-2 brane E can have non-trivial tension. Comparing (<ref>) with (<ref>) for this case, we get b(r)=√(f(r̅)), a(r)=r̅ and dr=dr̅/√(f(r̅)). Then we have b'(ρ)/b(ρ)-a'(ρ)/a(ρ)= d√(f(r̅))/dr̅-√(f(r̅))/r̅= (d r̅_h^d-2(r̅_h^2-1) r̅^2-d+2)/(2 r̅√(f(r̅)))>(d (r̅_h^2-1)+2)/(2 r̅√(f(r̅)))=r̅_h/(n r̅√(f(r̅)))>0, where we have used r̅> r̅_h , f(r̅)>0 , and r̅_h=(1+√(d^2 n^2-2 d n^2+1))/(d n)≤ 1 above. Note that the above r̅ takes its value on the EOW brane Q, i.e., r̅=ρ̅> r̅_h. Now we have verified the condition (<ref>) for the tensive brane E too. In summary, we have successfully constructed cone holography with NBC by using p-form fields on the EOW brane. The p-form field is ghost-free and takes the standard form (<ref>). Thus, this model is well-defined.
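The inequality chain above can also be checked numerically (our sketch, with placeholder values of d and n):

```python
import numpy as np

# Numerical illustration of the reality condition b'/b - a'/a > 0 for the
# tensive codimension-2 brane: with b = sqrt(f(rbar)), a = rbar, the
# condition is equivalent to rbar f'(rbar) - 2 f(rbar) > 0 for rbar > rbar_h.
d, n = 5, 2.0                                    # placeholder dimension and n

rh = (1 + np.sqrt(d**2 * n**2 - 2 * d * n**2 + 1)) / (d * n)
K_ = rh**(d - 2) * (rh**2 - 1)                   # note rh <= 1, so K_ <= 0
f = lambda rb: rb**2 - 1 - K_ / rb**(d - 2)
fp = lambda rb: 2 * rb + (d - 2) * K_ / rb**(d - 1)

rb = np.linspace(1.0001 * rh, 5.0, 400)
cond = rb * fp(rb) - 2 * f(rb)                   # = d*K_*rb^(2-d) + 2
print(f"rbar_h = {rh:.4f}, min(cond) = {cond.min():.4f}, all > 0: "
      f"{bool((cond > 0).all())}")
```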
From the above equations and Einstein equations in bulk, we obtain the equation of motion of the perturbation H_ij (see Appendix <ref>) as1/a^2(□^(γ)+2)H_ij+ 1/b^2Δ^(ϑ)H_ij+∂_r∂_r H_ij+ [(p+1)a'/a+qb'/b]∂_r H_ij =0 ,where □^(γ)=γ^ij∇^(γ)_i∇^(γ)_j and Δ^(ϑ)=ϑ^ab∇^(ϑ)_a∇^(ϑ)_b are respectively d'Alembert operators in the AdS_p+1 and the unit sphere S^q, and the ' denotes the derivative with respective to the coordinate r.Next, we perform the standard separation of variablesH_ij(r,x^a,x^i)=ψ(r)ξ(x^a)χ_ij(x^i) ,where ξ(x^a) are spherical harmonic functions obeying [Δ^(ϑ)+l(l+q-1)]ξ(x^a)=0 ,and χ_ij(x^i) satisfy the equation of motion of massive gravitons on AdS_p+1(□^(γ)+2-m^2)χ_ij(x^i)=0 .Here m denotes the graviton mass, and the l≥ 0 are non-negative integers for q>1. For q=1, l can be non-negative real numbers, and there is a conical singularity for non-integer l. Substituting Eqs. (<ref>), (<ref>), and (<ref>) into the main perturbation equation (<ref>), we get the equation of motion of ψ(r)[ ∂_r∂_r +( (p+1)a'/a+qb'/b)∂_r+m^2/a^2- l(l+q-1)/b^2]ψ(r) =0 . Above, we have discussed the constraints on metric perturbations from Einstein equations in bulk. Let us continue to study the effects of NBC (<ref>) on the EOW brane. At the linear orders, the (a,b) and (i,j) components of NBC (<ref>) read δ K_ab-(K-T_Q)δ h_ab-δ K h_ab= δ T_ab , δ K_ij-(K-T_Q)δ h_ij-δ K h_ij= δ T_ij , where T_αβ are the stress tensors (<ref>) of p-form fields. From the metric ansatz (<ref>), the TT gauge (<ref>) and the formula of extrinsic curvatures K_αβ=∂_r h_αβ/2, we deriveδ K_ab=δ K=δ h_ab=0,which together with (<ref>) yields δ T_ab=0. Recall that only F_i_1i_2⋯ i_p+1 is non-zero for the background solution, which givesδ F^2 = 2 F^i_1i_2⋯ i_p+1δ F_i_1i_2⋯ i_p+1 + (p+1) δ h^i_1j_1 F_i_1i_2⋯ i_p+1 F_j_1^ i_2⋯ i_p+1= 2 F^i_1i_2⋯ i_p+1δ F_i_1i_2⋯ i_p+1 + δ h^i_1j_1 h_i_1j_1 F^2= 2 F^i_1i_2⋯ i_p+1δ F_i_1i_2⋯ i_p+1.From (<ref>) and δ T_ab=0, we deriveδ T_ab = -1/2(p+1)! h_abδ F^2=0=-1/(p+1)! h_ab F^i_1i_2⋯ i_p+1δ F_i_1i_2⋯ i_p+1=0,which yields for p-form fieldsδ F_i_1i_2⋯ i_p+1=0.Above we have used the fact that, for p-form fields in (p+1) dimensions, the field strength F_i_1i_2⋯ i_p+1 has only one independent component. Let us continue to consider the (i,j) components of NBC (<ref>). After some calculations, we obtainδ K_ij= 1/2a^2H'_ij+aa'H_ij=1/2a^2H'_ij+a'/aδ h_ij , δ T_ij= 1/2(p+1)!F^2a^2H_ij=1/2(p+1)!F^2δ h_ij ,Taking the trace of background NBC (<ref>) for (i,j) components, we deriveh^ij( K_ij -(K-T_Q) h_ij -T_ij)= (p+1) a'/a-(p+1) (K-T_Q)-1/2 p! F^2=0.From Eqs. (<ref>), (<ref>), (<ref>), and (<ref>), we obtain H'_ij|_Q=0 .Following <cit.>, we impose the natural boundary condition on the brane EH_ij|_Eis finite.Equivalently, from Eq. (<ref>), we haveψ'|_Q= ψ'(ρ)=0, ψ|_E= ψ(0)is finite. Interestingly, the p-form field does not affect the boundary condition of the metric perturbation H_ij on the AdS_p+1 sector. As a result, the equation of motion (<ref>) and boundary conditions (<ref>) and (<ref>) of H_ij are the same for NBC and MBC <cit.>, which produces the same mass spectrum of KK gravitons on the AdS_p+1 sector. Note that MBC fixes the metric while NBC allows the perturbations in the S^q sector. Thus, they are different boundary conditions generally. We leave the study of metric perturbations in the S^q sector to future works.§.§ Mass spectrum of KK gravitons This subsection investigates the mass spectrum of KK gravitons on the brane E (AdS_p+1 sector of EOW brane). 
As mentioned above, the results apply to both NBC and MBC. Recall that <cit.> has studied the mass spectrum on a tensionless brane E for the s-wave with l=0. There, it was found that the mass spectrum of KK gravitons is non-negative and includes a massless mode. Here, we discuss the most general case. For our purpose, we aim to prove that the mass spectrum of KK gravitons is real and non-negative generally, i.e., m^2≥ 0, so that cone holography with NBC/MBC is well-defined. From the equation of motion (<ref>) and boundary conditions (<ref>) and (<ref>), we can determine the mass spectrum. For more details, see <cit.> and <cit.> for the tensionless and the tensive brane E, respectively. In this paper, we are not interested in the exact values of the masses. Instead, we aim to prove m^2≥ 0, so we do not need to solve the equation of motion (<ref>) and boundary conditions (<ref>) and (<ref>). First, let us prove m^2 is real. Note that m^2 is just a parameter separating variables (<ref>). In principle, it can be complex. For the boundary conditions (<ref>) and (<ref>), the orthogonal relationship for KK gravitons reads [The orthogonal relationship for AdS/BCFT and wedge holography with q=1, p=d-1 is given by <cit.>, which can be easily generalized to the current case of cone holography.] ∫_0^ρ b(r)^q a(r)^p-1ψ_m,l(r) ψ_m',l(r) dr= c_lδ_m,m', where l is the real non-negative angle quantum number, and c_l is a constant depending on the normalization of ψ_m,l. Suppose that there were complex m^2 in the mass spectrum of KK gravitons. Since the equation of motion (<ref>) and boundary conditions (<ref>) and (<ref>) are both real, complex m^2 must appear in complex conjugate pairs. By applying the orthogonal condition (<ref>) to a complex conjugate pair, we get ∫_0^ρ b(r)^q a(r)^p-1ψ_m,l(r) ψ_m^*,l(r) dr= 0. On the other hand, we have ∫_0^ρ b(r)^q a(r)^p-1ψ_m,l(r) ψ_m^*,l(r) dr= ∫_0^ρ b(r)^q a(r)^p-1 |ψ_m,l(r)|^2 dr>0, where a(r) and b(r) are real positive functions. The contradiction between the expressions (<ref>) and (<ref>) suggests that there is no complex mass. Now, we have finished the proof that the mass spectrum of KK gravitons is real. Second, let us prove that the mass spectrum of KK gravitons is non-negative, i.e., m^2≥ 0. To do so, we construct a non-negative integral ∫_0^ρb(r)^q a(r)^p+1ψ'_m,l(r) ψ'_m,l(r) dr≥ 0. Integrating by parts and applying the equation of motion (<ref>) and boundary conditions (<ref>) and (<ref>) with b(0)=0, we derive ∫_0^ρb(r)^q a(r)^p+1ψ'_m,l(r) ψ'_m,l(r) dr= ∫_0^ρb(r)^q-2 a(r)^p+1ψ^2_m,l(r) ( m^2b(r)^2/a(r)^2-l(l+q-1)) dr≥ 0. We have carefully chosen the indices of a(r) and b(r) in (<ref>) so that there is no ψ'_m,l(r) in Eq. (<ref>). For the s-wave with l=0, one can easily derive m^2≥ 0 from Eq. (<ref>). The case with non-zero l is somewhat complicated. We notice that we always have b(r) < a(r). For the tensionless brane E, we have b(r)=sinh(r), a(r)=cosh(r), which obeys b(r) < a(r). As for the tensive case, only the brane E with codimension two is consistent with Einstein gravity <cit.>. For the codim-2 brane E, note that b(r)^2=f(r̅)=r̅^2-1-(r̅_h/r̅)^d-2(r̅_h^2-1) , a(r)^2=r̅^2 , which also satisfies b(r) < a(r). By applying b(r) < a(r), we can prove m^2≥ l(l+q-1) ≥ 0. Supposing m^2<l(l+q-1), we would derive from (<ref>) a contradiction: 0 ≤∫_0^ρ b(r)^q a(r)^p+1ψ'_m,l(r) ψ'_m,l(r) dr < l(l+q-1) ∫_0^ρ b(r)^q-2 a(r)^p+1ψ^2_m,l(r) (b(r)^2/a(r)^2-1) dr< 0. As a result, the inequality (<ref>) must hold.
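Although the proof does not require solving for the spectrum, it can be cross-checked numerically by a shooting method (our own sketch, not code from the paper): integrate the radial equation from the regular behavior ψ∼ r^l at r=0 and scan m^2 for zeros of ψ'(ρ). Below we assume the tensionless case a=cosh(r), b=sinh(r); the values of p, q, l, ρ are placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

p, q, l, rho = 2, 2, 0, 1.0   # placeholder dimensions, angular number, brane location
eps = 1e-6                    # start slightly off the singular point r = 0

def dpsi_at_brane(m2):
    """Integrate the radial equation and return psi'(rho)."""
    def rhs(r, y):
        psi, dpsi = y
        coef = (p + 1) * np.tanh(r) + q / np.tanh(r)
        return [dpsi, -(coef * dpsi + (m2 / np.cosh(r)**2
                        - l * (l + q - 1) / np.sinh(r)**2) * psi)]
    y0 = [eps**l, l * eps**(l - 1)] if l > 0 else [1.0, 0.0]  # psi ~ r^l
    sol = solve_ivp(rhs, (eps, rho), y0, rtol=1e-10, atol=1e-12)
    return sol.y[1, -1]

# the massless graviton: psi = const solves the l = 0 equation exactly
print("psi'(rho) at m^2 = 0 (l = 0):", dpsi_at_brane(0.0))

# excited states: bracket sign changes of psi'(rho) on a grid of m^2
grid = np.linspace(0.5, 60.0, 300)
vals = [dpsi_at_brane(m2) for m2 in grid]
roots = [brentq(dpsi_at_brane, grid[i], grid[i + 1])
         for i in range(len(grid) - 1) if vals[i] * vals[i + 1] < 0]
print("massive KK modes m^2:", np.round(roots, 3))
```

The l=0 run reproduces the massless graviton, and for l>0 all roots obey m^2≥ l(l+q-1), in agreement with the bound above.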
Now, we have proved that the mass spectrum of KK gravitons is non-negative for the general cases. Some comments are summarized as follows. * The massless KK mode appears only in the s-wave with l=0. From (<ref>), it is clear that m^2≥ l(l+q-1)>0 for l>0. * The orthogonal relationship (<ref>) of KK gravitons is not a priori. It holds only for suitable boundary conditions. Fortunately, the p-form fields do not affect the boundary conditions (<ref>) and thus the orthogonal relationship (<ref>). On the other hand, DGP gravity and brane-localized higher derivative gravity modify the boundary conditions and the corresponding orthogonal relationship <cit.>. As a result, m^2 can be negative and even complex for negative DGP gravity and some kinds of brane-localized higher derivative gravity <cit.>. * The real and non-negative mass spectrum of KK gravitons suggests that cone holography with NBC/MBC is stable under linear tensor perturbations. In general, m^2 can be negative in AdS_p+1 as long as it obeys the Breitenlohner-Freedman (BF) bound m^2≥ -(p/2)^2 <cit.>. Since the mass spectrum of cone holography is non-negative, it behaves better than the BF bound. * The massless KK mode is normalizable because the coordinate r ranges from 0 to the position of the EOW brane, and it is localized on the brane E. There is also a series of massive KK graviton modes, which contribute corrections to the gravity on the brane E. § HOLOGRAPHIC C-THEOREM In this section, we investigate the holographic c-theorem in cone holography. Due to the doubly holographic nature of cone holography, we have two holographic approaches; one takes the null energy condition (NEC) on the EOW brane Q <cit.>, and the other applies the NEC in bulk C <cit.>. Note that the holographic c-theorem in cone holography corresponds to the holographic g-theorem in AdS/BCFT <cit.>. To start, we first demonstrate qualitatively why the c-theorem applies in cone holography. The A-type central charge of CFT_p is inversely proportional to the effective gravitational constant on the AdS_p+1 brane <cit.> c ∼1/G_eff=(1/G_N)∫_0^ρ b(r)^q a(r)^p-1dr , which increases with ρ. Recall that r=ρ denotes the location of the EOW brane Q; large ρ corresponds to the UV (AdS boundary r=∞); small ρ corresponds to the IR (deep bulk). Thus, we have the c-theorem c_UV≥ c_IR. §.§ Method I: NEC on EOW brane This subsection proves the holographic c-theorem from the NEC on the EOW brane. For simplicity, we focus on the AdS space (<ref>) in bulk so that we can follow the approaches of <cit.>. The brane E is tensionless in the AdS space (<ref>). Performing the coordinate transformations ẑ=z/cosh(r), x = z tanh(r) , we rewrite the metric (<ref>) into the Poincaré form ds^2 = (dx^2 +x^2 dΩ_q^2 +dẑ^2 -dt^2+∑_î=1^p-1dy_î^2)/ẑ^2, 0≤ x ≤ζ(ẑ), where the branes E and Q are at x=0 and x=ζ(ẑ), respectively. For the minimal case without matter fields on the brane, we have ζ(ẑ)=sinh(ρ) ẑ. In general, ζ(ẑ) is determined by the NBC K_AB-(K-T_Q)h_AB=T^matter_AB , where T^matter_AB denotes the energy-momentum tensors of p-form fields and other brane-localized matter fields. Note that K_AB, h_AB and T^matter_AB are all defined on the EOW brane Q. We use the bulk coordinates A and B to follow the conventions of <cit.>. We impose the NEC on the EOW brane, T^matter_AB(N_Q)^A(N_Q)^B≥ 0 , where N_Q is the null vector on the brane Q with the non-vanishing components (N_Q)^x=ζ'(ẑ)/√(1+ζ'(ẑ)^2) , (N_Q)^ẑ=1/√(1+ζ'(ẑ)^2) , (N_Q)^t=1 . One can check that the p-form fields indeed obey the NEC (<ref>) on the EOW brane: the stress tensor (<ref>) takes the form T_ab=X h_ab and T_ij=-X h_ij with X≥ 0, so for any null vector N on Q we have h_ijN^iN^j=-h_abN^aN^b and hence T^matter_ABN^AN^B=2X h_abN^aN^b≥ 0, as the S^q sector of the induced metric is positive definite.
We do not impose the NEC on the brane E, since the energy density on the brane E must be a constant (T^E_ij∼γ_ij) <cit.>, yielding the trivial NEC T^E_ij (N_E)^i (N_E)^j=0 on the brane E. From Eqs. (<ref>), (<ref>) and K_xx=ζ'(ẑ)^2 (-ẑζ''(ẑ)+ζ'(ẑ)^3+ζ'(ẑ))/(ẑ^2 (ζ'(ẑ)^2+1)^5/2), K_ẑẑ=(-ẑζ''(ẑ)+ζ'(ẑ)^3+ζ'(ẑ))/(ẑ^2 (ζ'(ẑ)^2+1)^5/2) , K_ẑx= ζ'(ẑ) (-ẑζ''(ẑ)+ζ'(ẑ)^3+ζ'(ẑ))/(ẑ^2 (ζ'(ẑ)^2+1)^5/2), K_tt= -ζ'(ẑ)/(ẑ^2 √(ζ'(ẑ)^2+1)), we derive ζ''(ẑ)/(ẑ(ζ'(ẑ)^2+1)^3/2)≤ 0, which gives ζ''(ẑ)≤ 0. Multiplying (<ref>) by the positive function ẑ(1+ζ'(ẑ)^2)^(p+1)/2ζ'(ẑ)^q and integrating along ẑ, we get the c-function c(ẑ)= ∫^ẑ(1+ζ'(z̅)^2)^(p-2)/2ζ'(z̅)^qζ''(z̅)dz̅ = ζ'(ẑ)^q+1 _2F_1(1-p/2,(q+1)/2;(q+3)/2;-ζ'(ẑ)^2)/(q+1), where we have assumed ζ'(ẑ)>0. By construction, the c-function (<ref>) obeys the c-theorem c'(ẑ)≤ 0, where 1/ẑ denotes the energy scale. At the UV fixed point ẑ→ 0, we have ζ'(ẑ)→sinh(ρ)>0 <cit.>. As a result, the c-function (<ref>) reduces to the A-type central charge (<ref>) with a(r)=cosh(r) and b(r)=sinh(r) (up to some positive constants) lim_ẑ→ 0 c(ẑ)= sinh^q+1(ρ) _2F_1(1-p/2,(q+1)/2;(q+3)/2;-sinh^2(ρ))/(q+1)= ∫^ρ_0 cosh^p-1(r)sinh^q(r)dr∼ c . We finish the proof of the holographic c-theorem by applying the NEC on the EOW brane. §.§ Method II: NEC in bulk This subsection proves the holographic c-theorem using the NEC in bulk C. We follow the approach of <cit.>. Recall that the solution to vacuum Einstein gravity is given by ds^2 =dr^2 +b(r)^2dΩ_q^2 +a(r)^2ds_AdS_p+1^2. Inspired by <cit.>, we assume the solution to Einstein gravity coupled with bulk matter fields takes the form ds^2 =dr^2 +(b(r)^2/R(z))dΩ_q^2 +(a(r)^2/R(z))ds_AdS_p+1^2 , where z is the coordinate of AdS_p+1 and labels the renormalization scale. At the UV fixed point z→ 0, we require lim_z→ 0R(z)=1. There is a natural candidate for the c-function c(z)= (1/G_N)∫^ρ_0 ( b(r)^2/R(z))^q/2( a(r)^2/R(z))^(p-1)/2dr = R(z)^(2-d)/2(1/G_N)∫^ρ_0 b(r)^q a(r)^p-1dr , which reduces to the central charge (<ref>) (up to some positive constants) at the UV fixed point z→ 0, R(z) → 1. Above we have used d=q+p+1. Following the standard approach <cit.>, we impose the NEC for matter fields in bulk C T_AB(N_C)^A(N_C)^B≥ 0 , where N_C is the bulk null vector with the non-zero components (N_C)^z=(N_C)^t=1. Combining the NEC (<ref>) with the Einstein equation G_AB=8π G_N T_AB, we get a restriction ((d-2)/(2z^2 √(R(z))))(z^2 R'(z)/√(R(z)))'≥ 0. Since lim_z→ 0(z^2 R'(z)/√(R(z)))=0 , the above inequality implies (z^2 R'(z)/√(R(z)))≥ 0 and thus R'(z)≥ 0 . Taking the derivative of (<ref>) with respect to z, we find that the c-function decreases along the RG flow c'(z)=-((d-2)/2)R(z)^-d/2R'(z)(1/G_N)∫^ρ_0 b(r)^q a(r)^p-1dr≤ 0 . As a result, we have c_UV≥ c_IR and prove the holographic c-theorem. To summarize, we have proved the c-theorem of cone holography by applying two holographic methods, on the EOW brane and in bulk, respectively. This provides solid support for cone holography. § CONE HOLOGRAPHY WITH NBC II In this section, we take another approach to formulate cone holography with NBC. For simplicity, we focus on the most interesting case with a codim-2 brane E [Einstein gravity cannot couple with tensive thin branes with codimensions higher than two.]. Our model comprises Maxwell fields in bulk and massive vector (Proca) fields on the EOW brane. It can also include Chern-Simons terms in bulk, which do not affect our discussions. This holographic model was initially developed to derive the chiral current near a boundary <cit.>.
Interestingly, it can also accomplish NBC for cone holography. Our model is given by the gravitational action (<ref>) with L_Q=0 plus the following vector action I_A=∫_Cd^d+1x√(-g)(-(1/4)ℱ_ABℱ^AB)+∫_Qd^dx√(-h)(-(1/2)m_A^2h^αβA_α A_β). Here, 𝒜 and A are the vector in bulk and the induced vector on the EOW brane, and ℱ=d𝒜 and F=dA are the strength tensors in bulk and on the brane Q, respectively. There can be a Chern-Simons term L_CS(𝒜) in bulk for odd (d+1) generally. We do not consider it since it is irrelevant to our discussions below [We focus on the maximally symmetric solutions in this section. As a result, the effects of Chern-Simons terms vanish.]. We add a mass term for the induced vector on the EOW brane. This mass term is important in producing the expected boundary chiral current <cit.> and realizing NBC for cone holography. One can also add an intrinsic kinetic energy term -(λ/4)F_αβ F^αβ on the EOW brane Q <cit.>. Since we focus on a constant induced vector on Q below, the kinetic energy term is irrelevant. From the action (<ref>), we derive the NBC for the vector n_Aℱ^AB+m_A^2 A^β h^B_β =0, where n_A is the outward-pointing unit normal vector on the EOW brane, h^B_β=∂ X^B/∂ x^β, A_β= h^B_β𝒜_B, and A^β=h^βαA_α. The energy-momentum tensor for the massive vector field on the EOW brane Q reads T_αβ= (1/2)m_A^2(A_α A_β-(1/2)h_αβA_λ A^λ) . There is an analytical solution to the Einstein-Maxwell equations in bulk, which reads 𝒜_B=(0,√(2(d-1)/(d-2))Q_e/r̅^d-2,0,⋯,0) , and ds^2=dr̅^2/f(r̅)+f(r̅)dθ^2+r̅^2γ_ijdx^idx^j , where Q_e is the “electric charge”, γ_ij is the AdS metric with a unit radius, and the blackening factor is given by f(r̅)=r̅^2-1-Q_e^2/r̅^2(d-2)-(r̅_h/r̅)^d-2(r̅_h^2-1-Q_e^2/r̅_h^2(d-2)) , r̅_h≤r̅≤ρ̅ . Note that we have f(r̅_h)=0, and the codimension-2 brane E and the EOW brane Q are located at r̅=r̅_h and r̅=ρ̅, respectively. The solution (<ref>) resembles the charged hyperbolic black hole <cit.>. The difference between them is that the time coordinate of the black hole is replaced by the coordinate θ. As a result, the “electric charge” Q_e of <cit.> is replaced by i Q_e in (<ref>). Strictly speaking, Q_e of (<ref>) is the electric current rather than the charge. From the metric (<ref>), we derive the components of the extrinsic curvature K_θθ = (1/2)∂ h_θθ/∂ r|_Q= (√(f(ρ̅))/2)∂ h_θθ/∂ρ̅=(√(f(ρ̅))/2) f'(ρ̅)=(f'(ρ̅)/(2√(f(ρ̅))))h_θθ , K_ij = (1/2)∂ h_ij/∂ r|_Q= (√(f(ρ̅))/2)∂ h_ij/∂ρ̅=√(f(ρ̅))ρ̅γ_ij =(√(f(ρ̅))/ρ̅)h_ij , where we have used dr=dr̅/√(f(r̅)). The trace of the extrinsic curvature reads K =h^θθK_θθ+h^ijK_ij = f'(ρ̅)/(2√(f(ρ̅)))+(d-1)√(f(ρ̅))/ρ̅ . From Eq. (<ref>), we read off the induced vector on the EOW brane r̅=ρ̅, A_α=(A_θ,0,⋯,0)=(√(2(d-1)/(d-2))Q_e/ρ̅^d-2,0,⋯,0) . It shows that A_α is a constant vector with the only non-zero component A_θ. Then the energy-momentum tensor (<ref>) becomes T_θθ=(1/4)m_A^2(A_θ A_θ h^θθ) h_θθ , T_ij=-(1/4)m_A^2(A_θ A_θ h^θθ) h_ij, where (A_θ A_θ h^θθ) =A^2_θ/f(ρ̅) is a constant. We remark that T_θθ and T_ij take exactly the expected asymmetry to compensate that of the extrinsic curvatures (<ref>) and (<ref>) and to obey the NBC (<ref>). Substituting the extrinsic curvatures (<ref>), (<ref>) and the vector energy-momentum tensor (<ref>) into NBC (<ref>), we get two independent equations from the (θ,θ) and (i,j) components. Recall that we have two free parameters T_Q and m_A^2, which happen to be exactly enough to solve the two independent equations. We obtain T_Q= ((4 d-6) f(ρ̅)+ρ̅ f'(ρ̅))/(4 ρ̅√(f(ρ̅))), m_A^2= √(f(ρ̅))(ρ̅ f'(ρ̅)-2 f(ρ̅))/(ρ̅ A^2_θ), where A_θ=√(2(d-1)/(d-2))Q_e/ρ̅^d-2 is a constant. Above we focus on the NBC (<ref>) with respect to gravity.
The NBC (<ref>) for the vector sets another constraint,

m_A^2=(d-2)√(f(ρ̅))/ρ̅.

Comparing (<ref>) with (<ref>), we observe that the charge Q_e and the brane location ρ̅ are not independent, which is similar to the case of p-form fields. We obtain

Q_e^2=r̅_h^(2d-4)(2/d (ρ̅/r̅_h)^(d-2)+r̅_h^2-1).

Now we have shown that our model with massive vectors on the EOW brane can indeed enable NBC. Some comments are in order. First, the mass squared (<ref>) is positive, m_A^2>0, which implies that this model is stable and tachyon-free. Second, there is no well-defined charge-free limit Q_e→ 0. The reason is as follows. From (<ref>) with Q_e=0, we derive the reciprocal of the angle period

lim_Q_e→ 0 1/(2π n)=f'(r̅_h)/(4π)=(d (r̅_h^2-1)+2)/(4 πr̅_h)≥ 0,

which yields √((d-2)/d)≤r̅_h. On the other hand, we obtain from (<ref>)

lim_Q_e→ 0ρ̅=r̅_h((d-d r̅_h^2)/2)^(1/(d-2))≤r̅_h,

for √((d-2)/d)≤r̅_h. This leads to a contradiction, since the EOW brane Q should lie outside the brane E, i.e., ρ̅ > r̅_h. This suggests that the charge is necessary for cone holography with NBC. Interestingly, this is also the case for the models with brane-localized p-form fields in sections <ref> and <ref>.

§ CONCLUSIONS AND DISCUSSIONS

This paper formulates cone holography with NBC by employing p-form fields on the EOW brane. NBC is closely related to the junction condition of branes, and it allows a massless graviton on the branes. Most doubly holographic models are based on NBC. Thus, there is good motivation to construct cone holography with NBC. We observe that the central obstruction to NBC is the asymmetry of the two sectors S^q and AdS_p+1 of the EOW brane Q. Negative DGP gravity can resolve this problem but suffers from the ghost issue. Remarkably, brane-localized p-form fields can compensate for the asymmetry of the EOW brane and accomplish NBC. Moreover, the p-form field takes the standard form and is ghost-free. We have analyzed the perturbation solution and verified that the mass spectrum is non-negative. We have also proven the holographic c-theorem for cone holography with NBC. These results provide solid support for our models. Finally, inspired by the chiral theory in AdS/BCFT, we have constructed another model of cone holography with NBC by applying a massive vector (Proca) field on the EOW brane. The mass of the Proca field is positive, implying that this model is stable. Interestingly, the charge is necessary to enable NBC in both models.

Let us discuss some possible directions. First, p-form fields appear naturally in string theory. This raises the question of whether there is a string-theory origin of cone holography. We leave this interesting problem to future work. Second, it is interesting to calculate the generalized gravitational entropy (holographic entanglement entropy) of cone holography with brane-localized p-form fields. The generalized gravitational entropy is given by the area of the Ryu-Takayanagi surface in the bulk plus the entanglement entropy of the p-form fields on the EOW brane Q. In the s-wave approximation, since there are no low-energy excitations in the S^q sector, the p-form fields live effectively in the AdS_p+1 sector. However, there is no physical degree of freedom for a p-form field in AdS_p+1. As a result, the entanglement entropy of the p-form fields becomes zero in the s-wave limit. For higher-energy excitations, however, the p-form fields live in the full space Q, and the entanglement entropy is non-zero. Calculating this entanglement entropy is also an interesting problem.
Once one obtains the generalized gravitational entropy, one can study many interesting aspects, such as entanglement islands and Page curves, in cone holography with NBC.

§ ACKNOWLEDGEMENTS

We thank Dong-Qi Li for valuable discussions. Rong-Xing Miao is supported by the National Natural Science Foundation of China (No. 11905297 and No. 12275366). Zheng-Quan Cui is supported by the National Natural Science Foundation of China (No. 12247135).

§ CONVENTIONS

In this Appendix we list the conventions adopted in this paper. We use the definitions R^P_MQN=∂_QΓ^P_MN-∂_NΓ^P_MQ+Γ^P_QLΓ^L_MN-Γ^P_NLΓ^L_MQ and R_MN=R^L_MLN. The metric signature is the mostly-plus convention (-,+,+,⋯,+). We use natural units with 16π G_N=c=ħ=1, where G_N is the gravitational constant. We use the unit-normalized convention for symmetrization and antisymmetrization. Parentheses ( ) appearing in indices denote symmetrization; for example,

A_(μ B_ν)=1/2(A_μ B_ν+A_ν B_μ), A_(μν)=1/2(A_μν+A_νμ).

Square brackets [ ] appearing in indices denote antisymmetrization; for example,

A_[μ B_ν]=1/2(A_μ B_ν-A_ν B_μ), A_[μν]=1/2(A_μν-A_νμ).

§ PERTURBATIONS OF SPACETIME QUANTITIES

On the EOW brane Q, the linear tensor perturbation that we consider is

ds^2=dr^2+b(r)^2(ϑ_ab+H_ab)dx^a dx^b+a(r)^2(γ_ij+H_ij)dx^i dx^j .

These tensor perturbations satisfy the transverse-traceless (TT) conditions

∇^(ϑ)_a H^a_b=0 , H≡ϑ^abH_ab=0 , ∇^(γ)_i H^i_j=0 , H≡γ^ijH_ij=0 .

We use ∇^(ϑ) and ∇^(γ) to denote the covariant derivatives defined by ϑ_ab and γ_ij, respectively. In general, the tensor perturbations H_ab and H_ij are functions of all spacetime coordinates (r,x^a,x^i). The reason we can restrict to such an ansatz is that a general perturbation H_αβ around the background metric h_αβ would destroy the configuration of cone holography and is therefore forbidden. From Eq. (<ref>), the spacetime quantities can be separated into

R̃_ab= R_ab+δ R_ab , R̃_ij= R_ij+δ R_ij , R̃=R+δ R .

The background components of the Ricci tensor and Ricci scalar are

R_ab= R^(ϑ)_ab-bb'[(p+1)a'/a+qb'/b]ϑ_ab-b^2∂_r(b'/b)ϑ_ab ,
R_ij= R^(γ)_ij-aa'[(p+1)a'/a+qb'/b]γ_ij-a^2∂_r(a'/a)γ_ij ,
R_rr= -(p+1)a''/a-qb''/b ,
R =1/a^2 R^(γ)+1/b^2 R^(ϑ) -(p+1)(pa'^2+2aa'')/a^2-q((q-1)b'^2+2bb'')/b^2 -2(p+1)q (a'/a)(b'/b) .

Here the prime denotes the derivative with respect to the coordinate r, and R^(ϑ) and R^(γ) are the Ricci scalars constructed from the metrics ϑ_ab and γ_ij, respectively. The perturbations of the Ricci tensor and Ricci scalar are calculated as

δ R_ab=-1/2 b^2/a^2□^(γ)H_ab-1/2Δ^(ϑ)H_ab-1/2 b^2∂_r∂_rH_ab -1/2 b^2[(p+1)a'/a+qb'/b]∂_r H_ab+2/(q-1) R^(ϑ)H_ab-bb'[(p+1)a'/a+qb'/b]H_ab-b^2∂_r(b'/b)H_ab ,
δ R_ij=-1/2□^(γ)H_ij-1/2 a^2/b^2Δ^(ϑ)H_ij-1/2 a^2∂_r∂_rH_ij -1/2 a^2[(p+1)a'/a+qb'/b]∂_r H_ij+1/p R^(γ)H_ij-aa'[(p+1)a'/a+qb'/b]H_ij-a^2∂_r(a'/a)H_ij ,
δ R=0 .

Here □^(γ)=γ^ij∇^(γ)_i∇^(γ)_j and Δ^(ϑ)=ϑ^ab∇^(ϑ)_a∇^(ϑ)_b denote the d'Alembert operator on AdS_p+1 and the Laplace operator on the unit sphere S^q, respectively. From the Einstein equation

R_AB = 2/(d-1)Λ g_AB ,

we obtain the (a,b) and (i,j) components of the background Einstein equations,

R_ab = 2/(d-1)Λ b^2 ϑ_ab , R_ij = 2/(d-1)Λ a^2 γ_ij ,

and their perturbation equations,

δ R_ab = 2/(d-1)Λ b^2 H_ab , δ R_ij = 2/(d-1)Λ a^2 H_ij .

The concrete background Einstein equations are

1/b^2 R^(ϑ) -q((q-1)b'^2+bb'')/b^2-(p+1)q (a'/a)(b'/b) =2q/(d-1)Λ ,
1/a^2 R^(γ)-(p+1)(pa'^2+aa'')/a^2-(p+1)q (a'/a)(b'/b) = 2(p+1)/(d-1)Λ ,
-(p+1)a''/a-qb''/b= 2/(d-1)Λ .

The concrete perturbation equations are given by

1/a^2□^(γ)H_ab+ 1/b^2(Δ^(ϑ)-2/(q(q-1)) R^(ϑ))H_ab+∂_r∂_r H_ab+ [(p+1)a'/a+qb'/b]∂_r H_ab=0 ,
1/a^2(□^(γ)-2/(p(p+1)) R^(γ))H_ij+ 1/b^2Δ^(ϑ)H_ij+∂_r∂_r H_ij+ [(p+1)a'/a+qb'/b]∂_r H_ij =0 .

§ REFERENCES
Maldacena:1999m J.M. Maldacena, The Large N limit of superconformal field theories and supergravity, Adv. Theor. Math. Phys. 2 (1998) 231 [hep-th/9711200].
Gubser:1998gkp S.S. Gubser, I.R. Klebanov and A.M. Polyakov, Gauge theory correlators from noncritical string theory, Phys. Lett. B 428 (1998) 105 [hep-th/9802109].
Witten:1998w E. Witten, Anti-de Sitter space and holography, Adv. Theor. Math. Phys. 2 (1998) 253 [hep-th/9802150].
Takayanagi:2011zk T. Takayanagi, Holographic Dual of BCFT, Phys. Rev. Lett. 107 (2011) 101602 [arXiv:1105.5165].
Fujita:2011ftt M. Fujita, T. Takayanagi and E. Tonni, Aspects of AdS/BCFT, JHEP 11 (2011) 043 [arXiv:1108.5152].
Nozaki:2012qd M. Nozaki, T. Takayanagi and T. Ugajin, Central Charges for BCFTs and Holography, JHEP 06 (2012) 066 [arXiv:1205.1573].
Miao:2018qkc R.-X. Miao, Holographic BCFT with Dirichlet Boundary Condition, JHEP 02 (2019) 025 [arXiv:1806.10777].
Miao:2017gyt R.-X. Miao, C.-S. Chu and W.-Z. Guo, New proposal for a holographic boundary conformal field theory, Phys. Rev. D 96 (2017) 046005 [arXiv:1701.04275].
Chu:2017aab C.-S. Chu, R.-X. Miao and W.-Z. Guo, On New Proposal for Holographic BCFT, JHEP 04 (2017) 089 [arXiv:1701.07202].
Chu:2021mvq C.-S. Chu and R.-X. Miao, Conformal boundary condition and massive gravitons in AdS/BCFT, JHEP 01 (2022) 084 [arXiv:2110.03159].
Miao:2017aba R.-X. Miao and C.-S. Chu, Universality for Shape Dependence of Casimir Effects from Weyl Anomaly, JHEP 03 (2018) 046 [arXiv:1706.09652].
Chu:2018ntx C.-S. Chu and R.-X. Miao, Anomalous Transport in Holographic Boundary Conformal Field Theories, JHEP 07 (2018) 005 [arXiv:1804.01648].
Miao:2022oas R.-X. Miao and Y.-Q. Zeng, Holographic anomalous chiral current near a boundary, Phys. Lett. B 838 (2023) 137700 [arXiv:2210.05203].
Zhao:2023rno T.-M. Zhao and R.-X. Miao, General holographic chiral theory near a boundary, Phys. Lett. B 848 (2024) 138383.
Randall:1999ee L. Randall and R. Sundrum, A Large mass hierarchy from a small extra dimension, Phys. Rev. Lett. 83 (1999) 3370 [hep-ph/9905221].
Randall:1999vf L. Randall and R. Sundrum, An Alternative to compactification, Phys. Rev. Lett. 83 (1999) 4690 [hep-th/9906064].
Karch:2000ct A. Karch and L. Randall, Locally localized gravity, JHEP 05 (2001) 008 [hep-th/0011156].
Penington:2019npb G. Penington, Entanglement Wedge Reconstruction and the Information Paradox, JHEP 09 (2020) 002 [arXiv:1905.08255].
Almheiri:2019psf A. Almheiri, N. Engelhardt, D. Marolf and H. Maxfield, The entropy of bulk quantum fields and the entanglement wedge of an evaporating black hole, JHEP 12 (2019) 063 [arXiv:1905.08762].
Almheiri:2019hni A. Almheiri, R. Mahajan, J. Maldacena and Y. Zhao, The Page curve of Hawking radiation from semiclassical geometry, JHEP 03 (2020) 149 [arXiv:1908.10996].
Almheiri:2019yqk A. Almheiri, R. Mahajan and J. Maldacena, Islands outside the horizon, arXiv:1910.11077.
Almheiri:2019psy A. Almheiri, R. Mahajan and J.E. Santos, Entanglement islands in higher dimensions, SciPost Phys. 9 (2020) 001 [arXiv:1911.09666].
Chen:2020uac H.Z. Chen, R.C. Myers, D. Neuenfeld, I.A. Reyes and J. Sandor, Quantum Extremal Islands Made Easy, Part I: Entanglement on the Brane, JHEP 10 (2020) 166 [arXiv:2006.04851].
Chen:2020hmv H.Z. Chen, R.C. Myers, D. Neuenfeld, I.A. Reyes and J. Sandor, Quantum Extremal Islands Made Easy, Part II: Black Holes on the Brane, JHEP 12 (2020) 025 [arXiv:2010.00018].
Ling:2020laa Y. Ling, Y. Liu and Z.-Y. Xian, Island in Charged Black Holes, JHEP 03 (2021) 251 [arXiv:2010.00037].
Krishnan:2020fer C. Krishnan, Critical Islands, JHEP 01 (2021) 179 [arXiv:2007.06551].
Yadav:2022mnv G. Yadav and A. Misra, Entanglement entropy and Page curve from the M-theory dual of thermal QCD above Tc at intermediate coupling, Phys. Rev. D 107 (2023) 106015 [arXiv:2207.04048].
Emparan:2023dxm R. Emparan, R. Luna, R. Suzuki, M. Tomašević and B. Way, Holographic duals of evaporating black holes, JHEP 05 (2023) 182 [arXiv:2301.02587].
Kawabata:2021hac K. Kawabata, T. Nishioka, Y. Okuyama and K. Watanabe, Probing Hawking radiation through capacity of entanglement, JHEP 05 (2021) 062 [arXiv:2102.02425].
Chou:2021boq C.-J. Chou, H.B. Lao and Y. Yang, Page curve of effective Hawking radiation, Phys. Rev. D 106 (2022) 066008 [arXiv:2111.14551].
Alishahiha:2020qza M. Alishahiha, A. Faraji Astaneh and A. Naseh, Island in the presence of higher derivative terms, JHEP 02 (2021) 035 [arXiv:2005.08715].
Hu:2022ymx Q.-L. Hu, D. Li, R.-X. Miao and Y.-Q. Zeng, AdS/BCFT and Island for curvature-squared gravity, JHEP 09 (2022) 037 [arXiv:2202.03304].
Hu:2022zgy P.-J. Hu, D.-Q. Li and R.-X. Miao, Island on codimension-two branes in AdS/dCFT, JHEP 11 (2022) 008 [arXiv:2208.11982].
Miao:2022mdx R.-X. Miao, Massless Entanglement Island in Wedge Holography, arXiv:2212.07645.
Miao:2023unv R.-X. Miao, Entanglement island and Page curve in wedge holography, JHEP 03 (2023) 214 [arXiv:2301.06285].
Li:2023fly D.-Q. Li and R.-X. Miao, Massless entanglement islands in cone holography, JHEP 06 (2023) 056 [arXiv:2303.10958].
Jeong:2023hrb H.-S. Jeong, K.-Y. Kim and Y.-W. Sun, Island in dyonic black holes: doubly holographic theory, arXiv:2305.18122.
Yu:2023whl M.-H. Yu, X.-H. Ge and C.-Y. Lu, Page curves for accelerating black holes, Eur. Phys. J. C 83 (2023) 1104 [arXiv:2306.11407].
Chang:2023gkt J.-C. Chang, S. He, Y.-X. Liu and L. Zhao, Island formula in Planck brane, JHEP 11 (2023) 006 [arXiv:2308.03645].
Tong:2023nvi C.-W. Tong, D.-H. Du and J.-R. Sun, Island of Reissner-Nordström anti-de Sitter black holes in the large d limit, arXiv:2306.06682.
Ghodrati:2022hbb M. Ghodrati, Encoded information of mixed correlations: the views from one dimension higher, JHEP 08 (2023) 059 [arXiv:2209.04548].
Lee:2022efh J.H. Lee, D. Neuenfeld and A. Shukla, Bounds on gravitational brane couplings and tomography in AdS_3 black hole microstates, JHEP 10 (2022) 139 [arXiv:2206.06511].
Aguilar-Gutierrez:2023tic S.E. Aguilar-Gutierrez, A.K. Patra and J.F. Pedraza, Entangled universes in dS wedge holography, JHEP 10 (2023) 156 [arXiv:2308.05666].
Guo:2023fly Y. Guo and R.-X. Miao, Page curves on codim-m and charged branes, Eur. Phys. J. C 83 (2023) 847.
Liu:2023ggg Y. Liu, Q. Chen, Y. Ling, C. Peng, Y. Tian and Z.-Y. Xian, Entanglement of defect subregions in double holography, arXiv:2312.08025.
Lin:2023ajt J. Lin, Y. Lu and Q. Wen, Cutoff brane vs the Karch-Randall brane: the fluctuating case, arXiv:2312.03531.
Lin:2023hzs Y.-Y. Lin, J. Zhang and J.-C. Jin, Entanglement islands read perfect-tensor entanglement, arXiv:2312.14486.
Akal:2020wfl I. Akal, Y. Kusuki, T. Takayanagi and Z. Wei, Codimension two holography for wedges, Phys. Rev. D 102 (2020) 126007 [arXiv:2007.06800].
Miao:2020oey R.-X. Miao, An Exact Construction of Codimension two Holography, JHEP 01 (2021) 150 [arXiv:2009.06263].
Hu:2022lxl P.-J. Hu and R.-X. Miao, Effective action, spectrum and first law of wedge holography, JHEP 03 (2022) 145 [arXiv:2201.02014].
Miao:2023mui R.-X. Miao, Ghost Problem, Spectrum Identities and Various Constraints on Brane-localized Higher Derivative Gravity, arXiv:2310.16297.
Ogawa:2022fhy N. Ogawa, T. Takayanagi, T. Tsuda and T. Waki, Wedge holography in flat space and celestial holography, Phys. Rev. D 107 (2023) 026001 [arXiv:2207.06735].
Aguilar-Gutierrez:2023zoi S.E. Aguilar-Gutierrez and F. Landgren, A multiverse model in dS wedge holography, arXiv:2311.02074.
Miao:2021ual R.-X. Miao, Codimension-n holography for cones, Phys. Rev. D 104 (2021) 086031 [arXiv:2101.10031].
Jensen:2013lxa K. Jensen and A. O'Bannon, Holography, Entanglement Entropy, and Conformal Field Theories with Boundaries or Defects, Phys. Rev. D 88 (2013) 106006 [arXiv:1309.4523].
DeWolfe:2001pq O. DeWolfe, D.Z. Freedman and H. Ooguri, Holography and defect conformal field theories, Phys. Rev. D 66 (2002) 025009 [hep-th/0111135].
Clark:2004sb A.B. Clark, D.Z. Freedman, A. Karch and M. Schnabl, Dual of the Janus solution: An interface conformal field theory, Phys. Rev. D 71 (2005) 066003 [hep-th/0407073].
Dong:2016fnf X. Dong, The Gravity Dual of Renyi Entropy, Nature Commun. 7 (2016) 12472 [arXiv:1601.06788].
Kanda:2023zse H. Kanda, M. Sato, Y.-k. Suzuki, T. Takayanagi and Z. Wei, AdS/BCFT with brane-localized scalar field, JHEP 03 (2023) 105 [arXiv:2302.03895].
Dvali:2000hr G.R. Dvali, G. Gabadadze and M. Porrati, 4-D gravity on a brane in 5-D Minkowski space, Phys. Lett. B 485 (2000) 208 [hep-th/0005016].
Bostock:2003cv P. Bostock, R. Gregory, I. Navarro and J. Santiago, Einstein gravity on the codimension 2-brane?, Phys. Rev. Lett. 92 (2004) 221601 [hep-th/0311074].
Breitenlohner:1982bm P. Breitenlohner and D.Z. Freedman, Positive Energy in anti-de Sitter Backgrounds and Gauged Extended Supergravity, Phys. Lett. B 115 (1982) 197.
Myers:2010tj R.C. Myers and A. Sinha, Holographic c-theorems in arbitrary dimensions, JHEP 01 (2011) 125 [arXiv:1011.5819].
Kobayashi:2018lil N. Kobayashi, T. Nishioka, Y. Sato and K. Watanabe, Towards a C-theorem in defect CFT, JHEP 01 (2019) 039 [arXiv:1810.06995].
Belin:2013uta A. Belin, L.-Y. Hung, A. Maloney, S. Matsuura, R.C. Myers and T. Sierens, Holographic Charged Renyi Entropies, JHEP 12 (2013) 059 [arXiv:1310.4180]. | http://arxiv.org/abs/2312.16463v1 | {
"authors": [
"Zheng-Quan Cui",
"Yu Guo",
"Rong-Xin Miao"
],
"categories": [
"hep-th",
"gr-qc"
],
"primary_category": "hep-th",
"published": "20231227081938",
"title": "Cone Holography with Neumann Boundary Conditions and Brane-localized Gauge Fields"
} |
Prosenjit Paul ([email protected])
Indian Institute of Engineering Science and Technology (IIEST), Shibpur-711103, WB, India

Quasinormal modes of Einstein–scalar–Gauss–Bonnet black holes
=============================================================

In this paper, we investigate quasinormal modes of scalar and electromagnetic fields in the background of Einstein–scalar–Gauss–Bonnet (EsGB) black holes. Using the scalar and electromagnetic field equations in the vicinity of the EsGB black hole, we study the nature of the effective potentials. The dependence of the real and imaginary parts of the fundamental quasinormal modes on the parameter p (which is related to the Gauss–Bonnet coupling parameter α) is studied for different values of the multipole number l. We analyze the effects of a massive scalar field in the EsGB background, which reveals the existence of quasi-resonances. In the eikonal regime, we find the analytical expression for the quasinormal frequency and show that the correspondence between the eikonal quasinormal modes and null geodesics is valid in the EsGB theory for the test fields. Finally, we study grey-body factors of the electromagnetic fields for different multipole numbers l, which deviate from those of the Schwarzschild black hole.

§ INTRODUCTION

General Relativity (GR) is the simplest theory of gravitation and is consistent with various astrophysical observations, such as gravitational waves, black holes, etc. Nevertheless, GR leaves some challenging problems open: the existence of singularities at the centers of black holes, the lack of a complete theory of quantum gravity, the problem of dark matter/energy, cosmic inflation, and others. To address these problems, theorists introduce alternative approaches to gravity. A number of such approaches are available in the literature, e.g., string-theory-inspired gravities, or theories adding higher-order terms in the curvature tensor to the Einstein–Hilbert action of GR.

One of the well-motivated alternative theories of gravity is the EsGB theory, in which the scalar field is nonminimally coupled to the Gauss–Bonnet (GB) term. The EsGB theories also arise in the low-energy limit of string theory <cit.>. The lowest-order correction to the Einstein–Hilbert action is the GB term, which is quadratic in the curvature tensor; in 4D spacetime, however, this term is a pure divergence. Alternatively, in recent years Glavan and Lin <cit.> removed this degeneracy of 4D GB gravity by replacing the GB parameter α by α/(D-4), where D is the number of spacetime dimensions. However, it was later shown that this naive regularization scheme does not lead to a well-defined theory of gravity. Nevertheless, the black hole solutions <cit.> obtained as a result of such regularization turned out to be solutions in well-defined theories as well <cit.>. On the contrary, the EsGB theories we are interested in here are free of such problems, because in EsGB theories the coupling between the GB term and the scalar field prevents the GB term from reducing to a pure divergence.

Isolated black holes are very simple objects and can be described by only three parameters: mass, charge, and spin. However, the actual situation differs from this idealization. A black hole at the center of a galaxy is surrounded by a matter distribution, such as an accretion disk, jets and outflows, stars, etc. Therefore, a black hole interacts with its surrounding matter distribution. Even in the absence of any matter distribution around the black hole, it interacts with the vacuum and produces particle–antiparticle pairs, which is known as Hawking radiation.
Therefore, a dynamical, interacting black hole cannot be described by mass, charge, and spin only, and one needs to consider the perturbation theory of black holes. There has been growing interest in the perturbation theory of black holes <cit.>. There are a number of reasons for the interest in the proper frequencies of such out-of-equilibrium black holes, called quasinormal modes. First of all, the LIGO and VIRGO scientific collaborations have detected gravitational-wave signals from black holes <cit.> that are consistent with Einstein's theory of gravity, though, due to the large uncertainty in determining the spin and mass of the black hole, a large window for alternative theories remains <cit.>. The dominating influence on such a signal is expected to come from the quasinormal modes exhibiting the lowest frequency, commonly referred to as the fundamental modes.

Various black hole solutions in higher-order theories of gravity, their quasinormal modes, and Hawking radiation have been studied extensively. The EsGB gravity was studied in Refs. <cit.>, where the most important black hole solutions were found numerically. The analytical black hole solution of EsGB theories, obtained using the continued-fraction approximation (CFA), and its shadow were studied in Ref. <cit.>. The gravitational quasinormal modes of Einstein–dilaton–Gauss–Bonnet (EdGB) solutions were studied in Refs. <cit.>. The spontaneous scalarization, black hole sensitivities, and linear stability of the EsGB black hole were investigated in Refs. <cit.>. Black hole solutions in EdGB, Einstein–Weyl, and Einstein cubic gravity were investigated in Refs. <cit.>, while their quasinormal modes and Hawking radiation were studied in Refs. <cit.>. In addition, quasinormal modes of Kaluza-Klein-like black holes in the Einstein-Gauss-Bonnet theory were considered in <cit.>. Furthermore, quasinormal modes of string-corrected d-dimensional black holes and noncommutative Schwarzschild black holes were analysed in Refs. <cit.>. The gravitational perturbations of numerically obtained EdGB black holes were studied in Ref. <cit.>. In the eikonal limit, the quasinormal modes for gravitational perturbations of EsGB black holes were obtained in Ref. <cit.>. However, the scalar and electromagnetic perturbations of either analytically or numerically obtained EsGB black hole solutions have not been analyzed, which gives us an opportunity to fill this gap. The scalar field will be studied not only in the massless limit, but also with a non-zero mass term. The latter has a number of motivations, because an effective mass term appears in the wave equation as a result of the introduction of extra dimensions <cit.> or magnetic fields <cit.>.
Finally, we summarize our results in the conclusions section.§ EINSTEIN–SCALAR–GAUSS-BONNET BLACK HOLESThe action for EsGB theories in 4D can be written as I= ∫ dx^4 √(-g)[ R/κ^2 + α f(φ) R_GB^2 - 1/2∇_μφ∇^μφ],where g isdeterminant of the metric g_ρσ, we take κ=16 π G c^-4=1, α is the GB coupling constant and R_GB^2 is defined as R_GB^2=R_μνρσ R^μνρσ -4 R_μν R^μν + R^2,and f(φ) is the arbitrary smooth function of the scalar field φ, which is known as GB coupling functional. The metric of static and spherically symmetric black holes in EsGB theory can be written in the following form <cit.>ds^2 = -g_tt dt^2 + g_rr dr^2 + r^2 (dθ^2 + sin^2θ dϕ^2 ),where the analytically approximated metric functions were found inRef. <cit.>. The functions g_tt and g_rr arewritten up to fourth order for different GB coupling functional inthe Appendix <ref>. Here we consider quadratic, cubic, quartic, inverse and Logarithmic Gauss-Bonnet coupling functional. To parameterize the family of EsGB black holes solution, we will introduce thedimension–less parameter p asp ≡96 α^2 f^'(φ)^2 /r_0^4, 0 ≤ p <1,where r_0 is the position of the event horizon, φ_0 is thescalar fields at the event horizon. In the limit p=0, one can obtainSchwarzschild black hole. § QUASINORMAL MODES OF TEST FIELDSIn this paper, we consider quasinormal modes of test fields in the backgroundof EsGB black holes. The equation for scalar, and electromagnetic in thebackground of the EsGB black hole is given by 1/√(-g)∂_μ( √(-g) g^μν∂_νΦ) -m^2 Φ =0,1/√(-g)∂_μ( √(-g) g^σμ g^ρν F_ρσ)=0, where F_ρσ = ∂_ρ A_σ - ∂_σ A_ρ and A_μ is the vector potential. To do the separation of variables we will introduce the radial function R_ω l(r) and spherical harmonics Y_l(θ, ϕ) as Φ(t,r,θ,ϕ)= e^± i ω t R_ω l(r) Y_l(θ, ϕ),where l is the angular number. After the separation of variables equation (<ref>) can be written in the following Schrodinger-like wave equation <cit.>d^2 Ψ/dr_*^2 + (ω^2 - V_i(r)) Ψ=0, where r_* is the “tortoise coordinate", defined asdr_*= dr √(g_rr/g_tt).The effective potentials of scalar (i = s) and electromagnetic (i = e) fields in the background of EsGB black hole are given by V_s = g_rr g_tt^' - g_tt g_rr^'/2r g_rr^2 + g_ttl(l+1) /r^2 +g_tt m,V_e =g_ttl(l+1) /r^2.The dependence of effective potential on various parameters (e.g. p and l) for massless/massive scalar fields and electromagnetic fields are shown. To express the radial coordinates in units of event horizon radius, we set r_0=1. The form of effective potential is positive definite and itdiminishes at both the event horizon and infinity. For massless scalar andelectromagnetic fields as we increase the parameter p height ofthe potential barrier decreases and the barrier height is maximum atp=0. It can be seen that as we increase the angular number lheight of the potential barrier becomes higher. For massive scalar fields, the height of the potential barrier increases with mass and it is minimumfor m=0. The master equation (<ref>) can be solved using WKB approximation <cit.>. The wave function Ψ satisfies the following boundary conditionΨ∼e^± i ω r_*,at r_*→±∞, i.e. there are only incoming waves at the event horizon and purely outgoing waves at spatial infinity. In the asymptotic region, using the boundary condition one can solve themaster equation using the WKB approximations up to the required order. Expanding the potential into the Taylor series and using the master equation, the solution of differential equation (<ref>) is obtainednear the peak of the potential. 
Matching these two solutions, the quasinormal frequency ω is derived. Finally, the frequency can be expressed as ω = Re(ω) + i Im(ω), where Re(ω) represents the real oscillation frequency and Im(ω) the damping rate. The quasinormal frequency in the WKB method up to sixth order is determined by

i (ω^2 - V_0)/√(-2 V_0^'') - ∑_i=2^i=6Λ_i =n+1/2,

where the Λ_i are the correction terms up to sixth order <cit.> and n=0, 1, 2, … is the overtone number. To compute the quasinormal modes of massless scalar and vector fields, we use the WKB method with Padé approximants <cit.>. It is worth mentioning that special attention will be devoted to the eikonal regime l≫ 1, first of all because there is a correspondence between the eikonal quasinormal modes and some parameters of null geodesics, suggested in <cit.> and further constrained in <cit.>. The correspondence is expected to be broken for gravitational perturbations of the Einstein-dilaton-Gauss-Bonnet theory <cit.>. Here we will check whether this correspondence is fulfilled for the test fields in the EsGB theory.

§.§ Quadratic GB-Coupling Functional: TEXT

The quasinormal modes of massless scalar (s=0) and electromagnetic (s=1) fields for l=1, 2 and l=1, 2, 3 are shown in Figs. <ref>-<ref>. From Figs. <ref>-<ref> it can be seen that the behaviour of the real and imaginary parts of the quasinormal frequency as a function of the parameter p is almost linear for different values of l. Therefore, we use the built-in Mathematica function FindFormula to express the real and imaginary parts of the quasinormal frequency as functions of the parameter p for different values of l. In formulas (<ref>) and (<ref>) the parameter p runs from p=0.0 to p=0.8. The approximate linear laws for scalar and electromagnetic fields for different values of l are given by

Re(ω_s=0, l=1)≈ 0.586146 -0.0192317 p,
Im(ω_s=0, l=1)≈ 0.0114263 p-0.194999,
Re(ω_s=0, l=2)≈ 0.967948 -0.0299163 p,
Im(ω_s=0, l=2)≈ 0.0108174 p - 0.193500.

Re(ω_s=1, l=1)≈ 0.498029 -0.0176951 p,
Im(ω_s=1, l=1)≈ 0.00835913 p-0.184913,
Re(ω_s=1, l=2)≈ 0.916163 -0.0281436 p,
Im(ω_s=1, l=2)≈ 0.00940289 p-0.189931,
Re(ω_s=1, l=3)≈ 1.31512 -0.0405207 p,
Im(ω_s=1, l=3)≈ 0.0103759 p-0.191264.

The behaviour of the real and imaginary parts of the quasinormal modes for massive scalar fields as a function of the mass m is shown in Fig. <ref>. The WKB formula <cit.> cannot describe quasi-resonances accurately <cit.>, but in the eikonal regime the WKB method is exact, providing sufficiently accurate results at l ≥ n. Numerous examples of the usage of the WKB method (see, for instance, <cit.>) show that the sixth-order WKB formula with Padé approximants is usually the best for the computation of the quasinormal frequencies of massive scalar fields. From Fig. <ref>(b) we can see that the imaginary part of the quasinormal frequency approaches zero, which indicates the existence of arbitrarily long-lived frequencies, called quasi-resonances.

§.§.§ Analytical Formula For QNMS in the Eikonal Regime

In the domain of high multipole numbers l (the eikonal regime), test fields of varying spin follow a common law at leading order. In this context, we examine the electromagnetic field with the effective potential <ref>(b). In the eikonal regime, one can use the first-order WKB formula,

ω = √(V_0 - i ( n+ 1/2) √(-2V_0^'')),

where n=0, 1, 2, … is the overtone number, V_0 is the value of the effective potential at its maximum r=r_max, and V_0^'' is the second derivative of the effective potential with respect to the tortoise coordinate, evaluated at r=r_max.
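As a sanity check of this first-order formula, the sketch below evaluates it numerically in the p → 0 limit and compares with the eikonal expression derived next. The value r_max = 3r_0/2 for p = 0, the large multipole number, and the finite-difference step are our own choices; note that the second derivative is taken with respect to the tortoise coordinate, d/dr_* = f d/dr in this limit, which at the peak (where V' = 0) reduces to f² times the ordinary second derivative.

import numpy as np

l, n = 100, 0                      # eikonal regime: l >> 1
f = lambda r: 1.0 - 1.0/r          # r0 = 1
V = lambda r: f(r)*l*(l + 1)/r**2  # electromagnetic potential, p -> 0

rmax, h = 1.5, 1e-4                # peak of V_e for Schwarzschild
V2_r = (V(rmax + h) - 2*V(rmax) + V(rmax - h))/h**2
V2_star = f(rmax)**2 * V2_r        # tortoise second derivative at the peak

omega = np.sqrt(V(rmax) - 1j*(n + 0.5)*np.sqrt(-2*V2_star + 0j))
eikonal = ((1 + 2*l) - 1j*(1 + 2*n))/(3*np.sqrt(3))
print("first-order WKB:", omega)
print("eikonal formula:", eikonal)

The two numbers should agree to a few parts in 10^4 at l = 100, illustrating that the analytical eikonal expressions below are exactly the first-order WKB result.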
Using Ref. <cit.>, we find that the maximum of the effective potential occurs at r=r_max, which is given by

r_max= 3r_0/2 +0.0387 r_0 p + 𝒪(p^2).

Now, substituting r_max and equation <ref>(b) into equation (<ref>), one can obtain the analytical expression for the quasinormal frequency in the eikonal regime:

ω = [(1+2l) (1-0.0258 p) - i (1+2n) (1-0.0447 p)] /(3 √(3)r_0).

In the limit p → 0, we recover the analytical expression for the quasinormal frequency of the Schwarzschild black hole.

§.§ Cubic GB-Coupling Functional: TEXT

The quasinormal modes of massless scalar (s=0) and electromagnetic (s=1) fields for l=1, 2 and l=1, 2, 3 are shown in Figs. <ref>-<ref>. From Figs. <ref>-<ref> it can be seen that the behaviour of the real and imaginary parts of the quasinormal frequency as a function of the parameter p is almost linear for different values of l. Therefore, we use the built-in Mathematica function FindFormula to express the real and imaginary parts of the quasinormal frequency as functions of the parameter p for different values of l. In formulas (<ref>) and (<ref>) the parameter p runs from p=0.0 to p=0.8. The approximate linear laws for scalar and electromagnetic fields for different values of l are given by

Re(ω_s=0, l=1)≈ 0.586345 -0.0231319 p,
Im(ω_s=0, l=1)≈ 0.0236755 p-0.258812,
Re(ω_s=0, l=2)≈ 0.967896 -0.0355368 p,
Im(ω_s=0, l=2)≈ 0.0114912 p-0.193244.

Re(ω_s=1, l=1)≈ 0.498397 -0.0219789 p,
Im(ω_s=1, l=1)≈ 0.0134254 p - 0.202685,
Re(ω_s=1, l=2)≈ 0.916381 - 0.0344771 p,
Im(ω_s=1, l=2)≈ 0.0147288 p - 0.239645,
Re(ω_s=1, l=3)≈ 1.31503 - 0.0482737 p,
Im(ω_s=1, l=3)≈ 0.0174108 p - 0.233444.

The behaviour of the real and imaginary parts of the quasinormal modes for massive scalar fields as a function of the mass m is shown in Fig. <ref>. From Fig. <ref>(a) we can see that the real part of the quasinormal frequency is an increasing function of the mass m, and the imaginary part of the quasinormal frequency (Fig. <ref>b) approaches zero, which indicates the existence of quasi-resonances.

§.§.§ Analytical Formula For QNMS in the Eikonal Regime

In the domain of high multipole numbers l (the eikonal regime), test fields of varying spin follow a common law at leading order. In this context, we examine the electromagnetic field with the effective potential <ref>(b). In the eikonal regime, one can use the first-order WKB formula given in equation (<ref>). With the help of Ref. <cit.>, we find that the maximum of the effective potential occurs at r=r_max, which is given by

r_max= 3r_0/2 +0.0637 r_0 p + 𝒪(p^2).

Now, substituting r_max and equation <ref>(b) into equation (<ref>), one can obtain the analytical expression for the quasinormal frequency in the eikonal regime for the cubic GB-coupling functional:

ω = [(1+2l) (1-0.0596 p) - i (1+2n) (1-0.1071 p)] /(3 √(3)r_0).

In the limit p → 0, we recover the analytical expression for the quasinormal frequency of the Schwarzschild black hole.

§.§ Quartic GB-Coupling Functional: TEXT

The quasinormal modes of massless scalar (s=0) and electromagnetic (s=1) fields for l=1, 2 and l=1, 2, 3 are shown in Figs. <ref>-<ref>. From Figs. <ref>-<ref> it can be seen that the behaviour of the real and imaginary parts of the quasinormal frequency as a function of the parameter p is almost linear for different values of l. Therefore, we use the built-in Mathematica function FindFormula to express the real and imaginary parts of the quasinormal frequency as functions of the parameter p for different values of l. In formulas (<ref>) and (<ref>) the parameter p runs from p=0.0 to p=0.8.
The approximate linear laws for scalar and electromagnetic fields for different values of l are given by

Re(ω_s=0, l=1)≈ 0.583828 -0.0130698 p,
Im(ω_s=0, l=1)≈ 0.0230895 p-0.272239,
Re(ω_s=0, l=2)≈ 1.00388 -0.0383934 p,
Im(ω_s=0, l=2)≈ 0.00397625 p-0.192472.

Re(ω_s=1, l=1)≈ 0.495561 -0.0117364 p,
Im(ω_s=1, l=1)≈ 0.00632363 p-0.184499,
Re(ω_s=1, l=2)≈ 0.943469 -0.0345489 p,
Im(ω_s=1, l=2)≈ 0.00445784 p-0.189155,
Re(ω_s=1, l=3)≈ 1.36289 -0.0510792 p,
Im(ω_s=1, l=3)≈ 0.00418319 p-0.190336.

The behaviour of the real and imaginary parts of the quasinormal modes for massive scalar fields as a function of the mass m is shown in Fig. <ref> for the quartic GB-coupling functional. The real part of the quasinormal frequency is shown in Fig. <ref>(a), and it is an increasing function of the mass m. The imaginary part is shown in Fig. <ref>(b), and it approaches zero as the mass increases. Therefore, the behaviour of the imaginary part of the quasinormal frequency indicates the existence of quasi-resonances for the quartic GB-coupling functional.

§.§.§ Analytical Formula For QNMS in the Eikonal Regime

In the domain of high multipole numbers l (the eikonal regime), test fields of varying spin follow a common law at leading order. In this context, we examine the electromagnetic field with the effective potential <ref>(b). In the eikonal regime, one can use the first-order WKB formula given in equation (<ref>). With the help of the numerical analysis given in Ref. <cit.>, we find that the maximum of the effective potential occurs at r=r_max, which is given by

r_max= 3r_0/2 +0.0527 r_0 p + 𝒪(p^2).

Now, substituting r_max and equation <ref>(b) into equation (<ref>), one can obtain the analytical expression for the quasinormal frequency in the eikonal regime:

ω = [(1+2l) (1-0.0570 p) - i (1+2n) (1-0.0971 p)] /(3 √(3)r_0).

In the limit p → 0, we recover the analytical expression for the quasinormal frequency of the Schwarzschild black hole.

§.§ Inverse GB-Coupling Functional: TEXT

The quasinormal modes of massless scalar (s=0) and electromagnetic (s=1) fields for l=1, 2 and l=1, 2, 3 are shown in Figs. <ref>-<ref>. From Figs. <ref>-<ref> it can be seen that the behaviour of the real and imaginary parts of the quasinormal frequency as a function of the parameter p is almost linear for different values of l. Therefore, we use the built-in Mathematica function FindFormula to express the real and imaginary parts of the quasinormal frequency as functions of the parameter p for different values of l. In formulas (<ref>) and (<ref>) the parameter p runs from p=0.0 to p=0.8. The approximate linear laws for scalar and electromagnetic fields for different values of l are given by

Re(ω_s=0, l=1)≈ 0.58558 -0.0330224 p,
Im(ω_s=0, l=1)≈ 0.0183859 p-0.194868,
Re(ω_s=0, l=2)≈ 0.966147 -0.0558721 p,
Im(ω_s=0, l=2)≈ 0.0231498 p-0.215501.

Re(ω_s=1, l=1)≈ 0.495763 -0.0233709 p,
Im(ω_s=1, l=1)≈ 0.0139608 p-0.184918,
Re(ω_s=1, l=2)≈ 0.915373 -0.0367994 p,
Im(ω_s=1, l=2)≈ 0.0122145 p - 0.188642,
Re(ω_s=1, l=3)≈ 1.31159 -0.0728545 p,
Im(ω_s=1, l=3)≈ 0.011762 p-0.189603.

The behaviour of the real and imaginary parts of the quasinormal modes for massive scalar fields as a function of the mass m is shown in Fig. <ref> for the inverse GB-coupling functional. From Fig. <ref>(b) we can see that the imaginary part of the quasinormal frequency slowly approaches zero as the mass m increases, which indicates the existence of arbitrarily long-lived frequencies, called quasi-resonances.

§.§.§ Analytical Formula For QNMS in the Eikonal Regime

In the domain of high multipole numbers l (the eikonal regime), test fields of varying spin follow a common law at leading order.
In this context, we examine the electromagnetic field with the effective potential <ref>(b). In the eikonal regime, one can use the first-order WKB formula given in equation (<ref>). Using Ref. <cit.>, we find that the maximum of the effective potential occurs at r=r_max, which is given by

r_max= 3r_0/2 +0.0563 r_0 p + 𝒪(p^2).

Now, substituting r_max and equation <ref>(b) into equation (<ref>), one can obtain the analytical expression for the quasinormal frequency in the eikonal regime:

ω = [(1+2l) (1-0.0699 p) - i (1+2n) (1-0.1201 p)] /(3 √(3)r_0).

In the limit p → 0, we recover the analytical expression for the quasinormal frequency of the Schwarzschild black hole.

§.§ Logarithmic GB-Coupling Functional: TEXT

The quasinormal modes of massless scalar (s=0) and electromagnetic (s=1) fields for l=1, 2 and l=1, 2, 3 are shown in Figs. <ref>-<ref>. From Figs. <ref>-<ref> it can be seen that the behaviour of the real and imaginary parts of the quasinormal frequency as a function of the parameter p is almost linear for different values of l. Therefore, we use the built-in Mathematica function FindFormula to express the real and imaginary parts of the quasinormal frequency as functions of the parameter p for different values of l. In formulas (<ref>) and (<ref>) the parameter p runs from p=0.0 to p=0.8. The approximate linear laws for scalar and electromagnetic fields for different values of l are given by

Re(ω_s=0, l=1)≈ 0.587079 -0.043569 p,
Im(ω_s=0, l=1)≈ 0.0231869 p-0.195646,
Re(ω_s=0, l=2)≈ 0.969141 -0.0766068 p,
Im(ω_s=0, l=2)≈ 0.0245713 p-0.19412.

Re(ω_s=1, l=1)≈ 0.497885 -0.0368047 p,
Im(ω_s=1, l=1)≈ 0.0271431 p-0.187726,
Re(ω_s=1, l=2)≈ 0.915132 -0.0449956 p,
Im(ω_s=1, l=2)≈ 0.0277317 p-0.191956,
Re(ω_s=1, l=3)≈ 1.31625 -0.103508 p,
Im(ω_s=1, l=3)≈ 0.0252856 p-0.192324.

The behaviour of the real and imaginary parts of the quasinormal modes for massive scalar fields as a function of the mass m is shown in Fig. <ref>. From Fig. <ref>(b) we can see that the imaginary part of the quasinormal frequency approaches zero, which indicates the existence of quasi-resonances.

§.§.§ Analytical Formula For QNMS in the Eikonal Regime

In the domain of high multipole numbers l (the eikonal regime), test fields of varying spin follow a common law at leading order. In this context, we examine the electromagnetic field with the effective potential <ref>(b). In the eikonal regime, one can use the first-order WKB formula given in equation (<ref>). Using Ref. <cit.>, we find that the maximum of the effective potential <ref>(b) occurs at r=r_max, which is given by

r_max= 3r_0/2 +0.0544 r_0 p + 𝒪(p^2).

Now, substituting r_max and equation <ref>(b) into equation (<ref>), one can obtain the analytical expression for the quasinormal frequency in the eikonal regime for the logarithmic GB-coupling functional:

ω = [(1+2l) (1-0.0690 p) - i (1+2n) (1-0.1181 p)] /(3 √(3)r_0).

In the limit p → 0, we recover the analytical expression for the quasinormal frequency of the Schwarzschild black hole. As one can see from this and the previous analytical relations for the eikonal regime, the quasinormal modes are fully reproduced by the WKB formula; therefore, the correspondence between eikonal quasinormal modes and null geodesics suggested in <cit.> is indeed valid for the test fields under consideration.

§ GREY–BODY FACTORS

The Hawking radiation does not fully reach a distant observer: it is partially suppressed by the effective potential surrounding the black hole and partially reflected back towards the event horizon.
To find the number of particles reflected by the effective potential, it is essential to determine the grey-body factors. This involves solving the classical scattering problem to estimate the number of particles that undergo reflection. We investigate the wave equation (<ref>) under boundary conditions that allow incoming waves from infinity,

Ψ = e^-i ω r_* + R e^i ω r_*, r_*→∞,
Ψ = T e^-i ω r_*, r_*→ -∞,

where R and T are the reflection and transmission coefficients. This scenario is equivalent to the scattering of a wave originating from the horizon. The effective potential decreases towards both boundaries, so we can safely apply the WKB method <cit.> to find the reflection and transmission coefficients. The reflection and transmission coefficients satisfy the relation

|R|^2 +|T|^2=1.

Once the reflection coefficient is computed, we can find the transmission coefficient for each multipole number as follows:

|A_l|^2 =1- |R_l|^2.

Numerous approaches are available in the literature for the computation of the transmission and reflection coefficients. For an accurate computation, we employed the sixth-order WKB formula <cit.>. In accordance with the findings presented in <cit.>, the reflection coefficient can be written as

R= (1+e^-2iπ K)^-1/2,

where K is defined by the following relation:

K - i (ω^2 - V_0)/√(-2V_0^'') - ∑_i=2^i=6Λ_i(K)=0,

where V_0 is the value of the effective potential at its maximum r=r_max, V_0^'' is the second derivative of the effective potential with respect to the tortoise coordinate evaluated at r=r_max, and the Λ_i are the higher-order corrections to the WKB formula.

§.§ Quadratic GB-Coupling Functional

The grey-body factors for the quadratic GB-coupling functional are shown in Fig. <ref> as functions of ω for electromagnetic fields with different values of the multipole number l. From Fig. <ref> it can be seen that the grey-body factor is higher for EsGB black holes than for the Schwarzschild black hole. As the parameter p increases, the transmission rate of the particles also increases, which is consistent with the behaviour of the effective potential: as the parameter p increases, the potential barrier becomes lower, resulting in a higher transmission rate for the particles, and vice versa.

§.§ Cubic GB-Coupling Functional

The grey-body factors for the cubic GB-coupling functional are shown in Fig. <ref> as functions of ω for electromagnetic fields with different values of the multipole number l. From Fig. <ref> it can be seen that the grey-body factor is higher for EsGB black holes than for the Schwarzschild black hole. As the parameter p increases, the transmission rate of the particles also increases, which is consistent with the behaviour of the effective potential: as the parameter p increases, the potential barrier becomes lower, resulting in a higher transmission rate for the particles, and vice versa.

§.§ Quartic GB-Coupling Functional

The grey-body factors for the quartic GB-coupling functional are shown in Fig. <ref> as functions of ω for electromagnetic fields with different values of the multipole number l. From Fig. <ref> it can be seen that the grey-body factor is higher for EsGB black holes than for the Schwarzschild black hole. A first-order numerical sketch of this transmission computation is given below.
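To make the procedure concrete, the following sketch evaluates the transmission in the first-order truncation of (<ref>), where K = i(ω² - V_0)/√(-2V_0^'') and hence |R|² = 1/(1 + e^(2πx)) with x = (ω² - V_0)/√(-2V_0^''). It uses the p → 0 limit and l = 1 purely for illustration; the frequency grid is an arbitrary choice.

import numpy as np

l = 1
f = lambda r: 1.0 - 1.0/r          # r0 = 1, p -> 0 limit
V = lambda r: f(r)*l*(l + 1)/r**2  # electromagnetic potential

rmax, h = 1.5, 1e-4
V0 = V(rmax)
V2_star = f(rmax)**2 * (V(rmax + h) - 2*V0 + V(rmax - h))/h**2  # tortoise 2nd derivative

for w in (0.3, 0.5, 0.7, 0.9):
    x = (w**2 - V0)/np.sqrt(-2.0*V2_star)
    A2 = 1.0 - 1.0/(1.0 + np.exp(2.0*np.pi*x))   # grey-body factor |A_l|^2
    print(f"omega = {w}: |A|^2 = {A2:.4f}")

The grey-body factor interpolates from nearly 0 well below the barrier to nearly 1 well above it, crossing 1/2 near ω² = V_0; lowering the barrier (increasing p) shifts the whole curve to smaller frequencies, which is the behaviour reported in this section.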
As the parameter p increases, the transmission rate of the particles also increases, which is consistent with the behaviour of the effective potential: as the parameter p increases, the potential barrier becomes lower, resulting in a higher transmission rate for the particles, and vice versa.

§.§ Inverse GB-Coupling Functional

The grey-body factors for the inverse GB-coupling functional are shown in Fig. <ref> as functions of ω for electromagnetic fields with different values of the multipole number l. From Fig. <ref> it can be seen that the grey-body factor is higher for EsGB black holes than for the Schwarzschild black hole. As the parameter p increases, the transmission rate of the particles also increases, which is consistent with the behaviour of the effective potential: as the parameter p increases, the potential barrier becomes lower, resulting in a higher transmission rate for the particles, and vice versa.

§.§ Logarithmic GB-Coupling Functional

The grey-body factors for the logarithmic GB-coupling functional are shown in Fig. <ref> as functions of ω for electromagnetic fields with different values of the multipole number l. From Fig. <ref> it can be seen that the grey-body factor is higher for EsGB black holes than for the Schwarzschild black hole. As the parameter p increases, the transmission rate of the particles also increases, which is consistent with the behaviour of the effective potential: as the parameter p increases, the potential barrier becomes lower, resulting in a higher transmission rate for the particles, and vice versa.

Here we briefly analyze how the different GB-coupling functions affect the quasinormal modes of the black hole. From the above analysis of the quasinormal modes of the EsGB black hole as a function of the parameter p, we see the strongest deviation of the quasinormal modes from the Schwarzschild values, for both massless scalar and electromagnetic fields, when the GB-coupling function is logarithmic. The deviation of the quasinormal modes is lowest for the quadratic GB-coupling functional. Likewise, the transmission rate of the particles deviates most strongly from the Schwarzschild value for the logarithmic GB-coupling functional and least for the quadratic GB-coupling functional.

§ CONCLUSIONS

In this study, we address previously unexplored aspects of quasinormal modes for scalar and electromagnetic fields in the vicinity of analytically obtained EsGB black hole solutions. Here, we investigate:

* The quasinormal modes of scalar and electromagnetic fields in the background of the EsGB black hole for different GB coupling functionals (quadratic, cubic, quartic, inverse, and logarithmic). The dependence of the real and imaginary parts of the quasinormal modes on the parameter p is shown in the figures. For p=0, the quasinormal modes of the EsGB black hole match the Schwarzschild ones. For non-zero values of p, the quasinormal frequencies of the EsGB black hole take smaller values than those of the Schwarzschild black hole.

* We analyzed massive scalar fields in the EsGB background. The height of the effective potential for massive scalar fields increases with the mass. For large values of the mass, undamped oscillations occur, commonly known as quasi-resonances.

* We derived an analytical formula for the quasinormal frequency in the eikonal regime for each GB coupling function. In the limit p → 0, it reduces to the quasinormal frequency of the Schwarzschild black hole.
We have shown that the correspondence between the eikonal quasinormal frequencies and null geodesics <cit.> holds in the EsGB theory for the test fields under consideration.

* Finally, we studied the grey-body factors for electromagnetic fields with different values of the multipole number. Our analysis showed that for higher values of the parameter p the grey-body factors are larger.

§ ACKNOWLEDGEMENTS

I would like to thank Roman Konoplya for useful discussions and help with the Mathematica WKB code.

§ APPENDIX: ANALYTICAL EXPRESSIONS FOR THE METRIC FUNCTIONS UP TO FOURTH ORDER

Here, we write down the analytical expressions for the metric functions up to fourth order in the CFA method. The metric functions are given by

g_tt(r) ≈𝒩^1/𝒟^1(1- r_0/r), √(g_tt(r)g_rr(r))≈𝒩^2/𝒟^2,

where the numerators 𝒩^1, 𝒩^2 and denominators 𝒟^1, 𝒟^2 are given separately for each GB coupling functional.

§.§ Quadratic GB-Coupling Functional: TEXT

g_tt(r) ≈𝒩^1_eve2/𝒟^1_eve2(1- r_0/r), √(g_tt(r)g_rr(r))≈𝒩^2_eve2/𝒟^2_eve2,

where

𝒩^1_eve2= p^11 (-0.027339 r^3 r_0+0.027339 r^2 r_0^2+0.027339 r r_0^3-0.027339 r_0^4)+p^10 (r^4-1.65064 r^3 r_0+0.639158 r^2 r_0^2 -0.349358 r r_0^3+0.360842 r_0^4)+p^9 (-13.5993 r^4+25.9764 r^3 r_0-12.2194 r^2 r_0^2+1.96871 r r_0^3-2.12636 r_0^4)+p^8 (66.7223 r^4-136.15 r^3 r_0+68.5796 r^2 r_0^2-6.96288 r r_0^3+7.81103 r_0^4) +p^7 (-142.79 r^4+317.427 r^3 r_0-172.292 r^2 r_0^2+15.8938 r r_0^3-18.2388 r_0^4)+p^6 (101.122 r^4-301.89 r^3 r_0 +197.051 r^2 r_0^2 -22.9918 r r_0^3+26.7025 r_0^4)+p^5 (120.603 r^4-78.7254 r^3 r_0-38.3503 r^2 r_0^2+20.5997 r r_0^3-24.0645 r_0^4) +p^4 (-295.148 r^4+437.019 r^3 r_0-143.957 r^2 r_0^2-10.8823 r r_0^3+12.7195 r_0^4) +p^3 (236.149 r^4-388.645 r^3 r_0 +153.456 r^2 r_0^2+3.01323 r r_0^3-3.47898 r_0^4) +p^2 (-85.9513 r^4 +147.171 r^3 r_0-61.7593 r^2 r_0^2-0.319982 r r_0^3 +0.336681 r_0^4)+p (12.5012 r^4-21.3996 r^3 r_0+9.1727 r^2 r_0^2+0.00359074 r r_0^3+0.0054195 r_0^4) -0.608781 r^4 +0.894103 r^3 r_0 -0.347264 r^2 r_0^2,

𝒟^1_eve2= p^10 r^2 ( r^2-2 r r_0+ r_0^2)+p^9 r^2 (-13.5993 r^2+27.6187 r r_0-14.0194 r_0^2)+p^8 r^2 (66.7223 r^2-139.556 r r_0+72.8336 r_0^2)+p^7 r^2 (-142.79 r^2+319.724 r r_0-176.934 r_0^2)+p^6 r^2 (101.122 r^2-298.896 r r_0+197.772 r_0^2)+p^5 r^2 (120.603 r^2-85.7535 r r_0-34.8151 r_0^2)+p^4 r^2 (-295.148 r^2+442.573 r r_0-147.595 r_0^2)+p^3 r^2 (236.149 r^2-390.653 r r_0+154.893 r_0^2)+p^2 r^2 (-85.9513 r^2 +147.462 r r_0-61.9667 r_0^2)+p r^2 (12.5012 r^2-21.4138 r r_0+9.17934 r_0^2)+r^2 (-0.608781 r^2 +0.894103 r r_0-0.347264 r_0^2),

𝒩^2_eve2= p^9 ( r^3-2 r^2 r_0+ r r_0^2)+p^8 (-8.86767 r^3+17.7041 r^2 r_0-8.8364 r r_0^2)+p^7 (29.1248 r^3-57.7325 r^2 r_0+28.8414 r r_0^2-0.233666 r_0^3)+p^6 (-56.6765 r^3+110.257 r^2 r_0-54.735 r r_0^2+1.1405 r_0^3)+p^5 (90.3253 r^3-171.817 r^2 r_0+83.6835 r r_0^2-2.08071 r_0^3)+p^4 (-119.877 r^3+226.049 r^2 r_0-108.183 r r_0^2+1.65941 r_0^3)+p^3 (105.82 r^3-199.251 r^2 r_0+94.4276 r r_0^2-0.411268 r_0^3)+p^2 (-51.741 r^3+96.6288 r^2 r_0-45.2948 r r_0^2-0.132824 r_0^3)+p (12.1153 r^3-21.8254 r^2 r_0+9.91398 r r_0^2+0.058561 r_0^3)-1.22281 r^3+1.98757 r^2 r_0-0.817317 r r_0^2,

𝒟^2_eve2= p^9 r (1. r^2-2. r r_0+1.
r_0^2)+p^8 r (-8.86767 r^2+17.7041 r r_0-8.8364 r_0^2)+p^7 r (29.1248 r^2-57.7325 r r_0+28.6077 r_0^2)+p^6 r (-56.6765 r^2+110.257 r r_0-53.5872 r_0^2)+p^5 r (90.3253 r^2-171.817 r r_0+81.5611 r_0^2)+p^4 r (-119.877 r^2+226.049 r r_0-106.429r_0^2)+p^3 r (105.82 r^2 -199.251 r r_0+93.9092 r_0^2)+p^2 r (-51.741 r^2+96.6288 r r_0-45.3674 r_0^2)+p r (12.1153 r^2-21.8254 r r_0+9.95903 r_0^2)+r (-1.22281 r^2+1.98757 r r_0-0.817317 r_0^2). §.§ Cubic GB-Coupling Functional: TEXTg_tt(r) ≈𝒩^1_odd3/𝒟^1_odd3(1- r_0/r),√(g_tt(r)g_rr(r))≈𝒩^2_odd3/𝒟^2_odd3,where𝒩^1_odd3= p^12 (0.00537634 r^3 r_0-0.00537634 r^2 r_0^2-0.00537634 r r_0^3+0.00537634 r_0^4)+p^11 (-0.0670504 r^3 r_0+0.0683756 r^2 r_0^2+0.0670504 r r_0^3-0.0683756 r_0^4)+p^10 (1. r^4-1.70036 r^3 r_0+0.683358 r^2 r_0^2-0.299635 r r_0^3+0.316642 r_0^4)+p^9 (-6.78182 r^4+13.1727 r^3 r_0 -6.31042 r^2 r_0^2+0.737004 r r_0^3-0.817608 r_0^4)+p^8 (17.7971 r^4-36.7027 r^3 r_0+18.714 r^2 r_0^2-1.12269 r r_0^3+1.3156 r_0^4)+p^7 (-21.7877 r^4+48.5071 r^3 r_0-26.4795 r^2 r_0^2+1.08356 r r_0^3-1.34315 r_0^4)+p^6 (9.51766 r^4-26.9583 r^3 r_0 +17.3344 r^2 r_0^2-0.638392 r r_0^3+0.834102 r_0^4)+p^5 (4.59852 r^4-2.91241 r^3 r_0-1.79709 r^2 r_0^2 +0.19656 r r_0^3-0.266067 r_0^4)+p^4 (-5.37404 r^4+8.71817 r^3 r_0-3.1644 r^2 r_0^2-0.0116997 r r_0^3 +0.0128034 r_0^4)+p^3 (0.540319 r^4-1.28023 r^3 r_0 +0.652328 r^2 r_0^2-0.00618425 r r_0^3+0.0102582 r_0^4)+p^2 (0.451911 r^4-0.727369 r^3 r_0+0.284135 r^2 r_0^2-0.00017735 r r_0^3+0.000404216 r_0^4)+p (0.0379465 r^4-0.0549725 r^3 r_0+0.0202381 r^2 r_0^2-0.0000201357 r r_0^3+0.0000179163 r_0^4)+0.000140052 r^4-0.0000971891 r^3 r_0,𝒟^1_odd3= p^10 r^2 (1. r^2-2. r r_0+1. r_0^2)+p^9 r^2 (-6.78182 r^2+13.8101 r r_0-7.02829 r_0^2)+p^8 r^2 (17.7971 r^2-37.355 r r_0+19.5579 r_0^2)+p^7 r^2 (-21.7877 r^2+48.6892 r r_0-26.9117 r_0^2)+p^6 r^2 (9.51766 r^2 -26.7448 r r_0+17.2901 r_0^2)+p^5 r^2 (4.59852 r^2-3.06912 r r_0-1.67657 r_0^2)+p^4 r^2 (-5.37404 r^2+8.71215 r r_0 -3.17516 r_0^2)+p^3 r^2 (0.540319 r^2-1.2622 r r_0+0.6403 r_0^2)+p^2 r^2 (0.451911 r^2-0.725183 r r_0+0.28313 r_0^2)+p r^2 (0.0379465 r^2-0.0549644 r r_0+0.0202407 r_0^2)+r^2 (0.000140052 r^2-0.0000971891 r r_0), 𝒩^2_odd3= p^9 (1. r^3-2. r^2 r_0+1. r r_0^2)+p^8 (-15.624 r^3+31.3197 r^2 r_0-15.587 r r_0^2-0.108754 r_0^3)+p^7 (75.8827 r^3-153.129 r^2 r_0+75.7277 r r_0^2+1.51139 r_0^3)+p^6 (-135.885 r^3+279.413 r^2 r_0-138.956 r r_0^2-4.46757 r_0^3)+p^5 (72.0953 r^3-161.504 r^2 r_0+83.7029 r r_0^2+5.22405 r_0^3)+p^4 (47.4153 r^3-75.3996 r^2 r_0+31.4976 r r_0^2-2.47728 r_0^3)+p^3 (-61.6661 r^3+111.092 r^2 r_0-50.8295 r r_0^2+0.253284 r_0^3)+p^2 (18.923 r^3-32.9427 r^2 r_0 +14.5941 r r_0^2+0.0638083 r_0^3)+p (-2.11497 r^3+3.11041 r^2 r_0-1.13326 r r_0^2+0.0010564 r_0^3)-0.0225954 r^3+0.0327384 r^2 r_0-0.0128335 r r_0^2, 𝒟^2_odd3= p^9 r (1. r^2-2. r r_0+1. r_0^2)+p^8 r (-15.624 r^2+31.3197 r r_0-15.6957 r_0^2)+p^7 r (75.8827 r^2-153.129 r r_0+77.2427 r_0^2)+p^6 r (-135.885 r^2+279.413 r r_0-143.469 r_0^2)+p^5 r (72.0953 r^2-161.504 r r_0 +89.0785 r_0^2)+p^4 r (47.4153 r^2-75.3996 r r_0+28.8021 r_0^2)+p^3 r (-61.6661 r^2+111.092 r r_0-50.4318 r_0^2)+p^2 r (18.923 r^2 -32.9427 r r_0+14.6226 r_0^2)+p r (-2.11497 r^2+3.11041 r r_0-1.13289 r_0^2)+r (-0.0225954 r^2+0.0327384 r r_0-0.0128335 r_0^2). §.§ Quartic GB-Coupling Functional: TEXTg_tt(r) ≈𝒩^1_eve4/𝒟^1_eve4(1- r_0/r), √(g_tt(r)g_rr(r))≈𝒩^2_eve4/𝒟^2_eve4, where 𝒩^1_eve4= p^15 (1. 
r^4-2.089 r^3 r_0+1.089 r^2 r_0^2+0.0889959 r r_0^3-0.0889959 r_0^4)+p^14 (-6.89482 r^4+14.7896 r^3 r_0-7.90886 r^2 r_0^2-0.757057 r r_0^3+0.771153 r_0^4)+p^13 (12.5106 r^4-29.4208 r^3 r_0 +17.0451 r^2 r_0^2+2.92063 r r_0^3-3.05555 r_0^4)+p^12 (19.4917 r^4-29.885 r^3 r_0+9.84773 r^2 r_0^2-6.69766 r r_0^3+7.24323 r_0^4)+p^11 (-115.113 r^4+224.275 r^3 r_0-107.925 r^2 r_0^2+9.97888 r r_0^3-11.206 r_0^4)+p^10 (198.927 r^4-410.675 r^3 r_0+210.013 r^2 r_0^2-9.86857 r r_0^3+11.562 r_0^4)+p^9 (-164.194 r^4+364.121 r^3 r_0-198.46 r^2 r_0^2+6.32619 r r_0^3-7.79975 r_0^4)+p^8 (45.2779 r^4-130.919 r^3 r_0+85.2 r^2 r_0^2-2.38976 r r_0^3+3.15916 r_0^4)+p^7 (28.2324 r^4-33.1842 r^3 r_0 +4.40023 r^2 r_0^2+0.342238 r r_0^3-0.525779 r_0^4)+p^6 (-21.3691 r^4+38.4559 r^3 r_0-16.3563 r^2 r_0^2+0.0898093 r r_0^3-0.104246 r_0^4)+p^5 (-0.197567 r^4-1.57336 r^3 r_0+1.45736 r^2 r_0^2-0.0324933 r r_0^3+0.0403357 r_0^4)+p^4 (2.04375 r^4-3.45917 r^3 r_0+1.43513 r^2 r_0^2-0.00124132 r r_0^3+0.00429355 r_0^4)+p^3 (0.276656 r^4-0.423294 r^3 r_0+0.159108 r^2 r_0^2+0.0000460001 r r_0^3+0.000142526 r_0^4)+p^2 (0.00912579 r^4-0.0124531 r^3 r_0+0.0040693 r^2 r_0^2-5.79588 × 10^-6 r r_0^3+5.6289 × 10^-6 r_0^4)+p (-0.0000509404 r^4+0.000117355 r^3 r_0-0.0000607436 r^2 r_0^2+4.44458 × 10^-8 r r_0^3-6.14644 × 10^-8 r_0^4)-2.63557 × 10^-7 r^4+1.63201 × 10^-8 r^3 r_0+ 1.79379 × 10^-7 r^2 r_0^2,𝒟^1_eve4= p^15 r^2 (1. r^2-2. r r_0+1. r_0^2)+p^14 r^2 (-6.89482 r^2+13.948 r r_0-7.05321 r_0^2)+p^13 r^2 (12.5106 r^2-26.1315 r r_0+13.6209 r_0^2)+p^12 r^2 (19.4917 r^2-36.7725 r r_0+17.2808 r_0^2)+p^11 r^2 (-115.113 r^2+232.465 r r_0-117.345 r_0^2)+p^10 r^2 (198.927 r^2-415.683 r r_0+216.716 r_0^2)+p^9 r^2 (-164.194 r^2+364.547 r r_0 -200.316 r_0^2)+p^8 r^2 (45.2779 r^2-129.549 r r_0+84.468 r_0^2)+p^7 r^2 (28.2324 r^2-33.7905 r r_0+4.977 r_0^2)+p^6 r^2 (-21.3691 r^2+38.3498 r r_0-16.341 r_0^2)+p^5 r^2 (-0.197567 r^2-1.50422 r r_0+1.40334 r_0^2)+p^4 r^2 (2.04375 r^2-3.44431 r r_0+1.42689 r_0^2)+p^3 r^2 (0.276656 r^2-0.422662 r r_0+0.158858 r_0^2)+p^2 r^2 (0.00912579 r^2-0.0124564 r r_0+0.00407385 r_0^2)+p r^2 (-0.0000509404 r^2+0.000117338 r r_0 -0.0000607602 r_0^2) +r^2 (-2.63557 × 10^-7 r^2+1.63201 × 10^-8 r r_0+1.79379 × 10^-7 r_0^2), 𝒩^2_eve4= p^10 (1. r^3-2. r^2 r_0+1. r r_0^2)+p^9 (-13.6797 r^3+27.4429 r^2 r_0-13.6766 r r_0^2-0.0865879 r_0^3)+p^8 (67.4124 r^3-135.988 r^2 r_0+67.7236 r r_0^2+0.852114 r_0^3)+p^7 (-148.278 r^3+302.545 r^2 r_0-151.49 r r_0^2-2.77603 r_0^3)+p^6 (146.99 r^3-308.586 r^2 r_0+157.352 r r_0^2+4.22482 r_0^3)+p^5 (-36.5144 r^3+91.761 r^2 r_0 -51.9412 r r_0^2-3.22666 r_0^3)+p^4 (-41.1576 r^3+69.4003 r^2 r_0-29.5381 r r_0^2+1.13594 r_0^3)+p^3 (28.9496 r^3-53.2293 r^2 r_0+24.5671 r r_0^2-0.117809 r_0^3)+p^2 (-5.11947 r^3+9.28066 r^2 r_0-4.24613 r r_0^2-0.00574379 r_0^3)+p (0.391241 r^3-0.618344 r^2 r_0+0.245794 r r_0^2-0.0000557057 r_0^3)+0.00525892 r^3-0.00809504 r^2 r_0 +0.00330483 r r_0^2, 𝒟^2_eve4= p^10 r (1. r^2-2. r r_0+1. r_0^2)+p^9 r (-13.6797 r^2+27.4429 r r_0-13.7632 r_0^2)+p^8 r (67.4124 r^2-135.988 r r_0+68.5757 r_0^2)+p^7 r (-148.278 r^2+302.545 r r_0-154.267 r_0^2)+p^6 r (146.99 r^2-308.586 r r_0 +161.585 r_0^2)+p^5 r (-36.5144 r^2+91.761 r r_0-55.1917 r_0^2)+p^4 r (-41.1576 r^2+69.4003 r r_0-28.3697 r_0^2)+p^3 r (28.9496 r^2-53.2293 r r_0+24.4286 r_0^2)+p^2 r (-5.11947 r^2+9.28066 r r_0-4.24704 r_0^2)+p r (0.391241 r^2-0.618344 r r_0+0.245858 r_0^2)+r (0.00525892 r^2-0.00809504 r r_0+0.00330483 r_0^2). 
§.§ Inverse GB-Coupling Functional: TEXTg_tt(r) ≈𝒩^1_inv/𝒟^1_inv(1- r_0/r), √(g_tt(r)g_rr(r))≈𝒩^2_inv/𝒟^2_inv, where 𝒩^1_inv= p^13 (0.00630915 r^3 r_0-0.00630915 r^2 r_0^2-0.00630915 r r_0^3+0.00630915 r_0^4)+p^12 (-0.288247 r^3 r_0+0.289835 r^2 r_0^2+0.288247 r r_0^3-0.289835 r_0^4)+p^11 (1. r^4+1.44067 r^3 r_0-2.51152 r^2 r_0^2-2.56824 r r_0^3+2.6391 r_0^4)+p^10 (-33.5247 r^4+55.6895 r^3 r_0-21.3738 r^2 r_0^2+8.14436 r r_0^3-8.93465 r_0^4)+p^9 (139.634 r^4-272.59 r^3 r_0+130.873 r^2 r_0^2-10.712 r r_0^3+12.9426 r_0^4)+p^8 (-211.343 r^4+445.224 r^3 r_0-232.351 r^2 r_0^2+3.40437 r r_0^3-5.11853 r_0^4)+p^7 (104.307 r^4-239.683 r^3 r_0+134.352 r^2 r_0^2+4.79121 r r_0^3-5.8472 r_0^4)+p^6 (36.8094 r^4-69.4903 r^3 r_0+37.49 r^2 r_0^2-4.15303 r r_0^3+5.6248 r_0^4)+p^5 (-48.322 r^4+99.557 r^3 r_0-57.1551 r^2 r_0^2+0.840865 r r_0^3-0.70925 r_0^4)+p^4 (12.9333 r^4-21.1041 r^3 r_0+8.6512 r^2 r_0^2-0.0658137 r r_0^3-0.217336 r_0^4)+p^3 (0.768694 r^4-0.492765 r^3 r_0+1.99322 r^2 r_0^2+0.0218888 r r_0^3-0.0839553 r_0^4)+p^2 (-2.1015 r^4+1.63385 r^3 r_0-0.221489 r^2 r_0^2+0.0142061 r r_0^3-0.0117263 r_0^4)+p (-0.155143 r^4+0.0901895 r^3 r_0-0.0282534 r^2 r_0^2+0.000352495 r r_0^3-0.000442358 r_0^4)-0.005224 r^4+0.00352893 r^3 r_0-0.000651281 r^2 r_0^2,𝒟^1_inv= p^11 r^2 (1. r^2-2. r r_0+1. r_0^2)+p^10 r^2 (-33.5247 r^2+67.3011 r r_0-33.7764 r_0^2)+p^9 r^2 (139.634 r^2-287.439 r r_0+147.805 r_0^2)+p^8 r^2 (-211.343 r^2+449.189 r r_0-237.795 r_0^2)+p^7 r^2 (104.307 r^2-233.152 r r_0+127.187 r_0^2)+p^6 r^2 (36.8094 r^2-74.6521 r r_0+43.2038 r_0^2)+p^5 r^2 (-48.322 r^2+100.669 r r_0-57.9912 r_0^2)+p^4 r^2 (12.9333 r^2-20.9337 r r_0+8.67754 r_0^2)+p^3 r^2 (0.768694 r^2-0.696371 r r_0+1.94788 r_0^2)+p^2 r^2 (-2.1015 r^2+1.61854 r r_0-0.227976 r_0^2)+p r^2 (-0.155143 r^2+0.0896651 r r_0-0.0284235 r_0^2)+r^2 (-0.005224 r^2+0.00352893 r r_0 -0.000651281 r_0^2), 𝒩^2_inv= p^9 (1. r^3-2.16356 r^2 r_0+1.01501 r r_0^2+0.148553 r_0^3)+p^8 (-8.63116 r^3+19.3261 r^2 r_0 -9.37725 r r_0^2-1.31772 r_0^3)+p^7 (25.4073 r^3-61.4392 r^2 r_0+31.2754 r r_0^2+4.7492 r_0^3)+p^6 (-25.4485 r^3+80.1206 r^2 r_0-45.5637 r r_0^2-9.03094 r_0^3)+p^5 (-18.958 r^3-9.26834 r^2 r_0 +18.1535 r r_0^2+9.73502 r_0^3)+p^4 (67.5176 r^3-88.9673 r^2 r_0+28.0939 r r_0^2-5.8473 r_0^3)+p^3 (-62.3959 r^3+97.8017 r^2 r_0-38.225 r r_0^2+1.71881 r_0^3)+p^2 (25.6081 r^3-41.9346 r^2 r_0 +17.3586 r r_0^2-0.140684 r_0^3)+p (-4.42145 r^3+6.8935 r^2 r_0-2.85028 r r_0^2-0.0149354 r_0^3)+0.322118 r^3-0.368892 r^2 r_0+0.119768 r r_0^2, 𝒟^2_inv= p^9 r (1. r^2-2.16356 r r_0+1.16356 r_0^2)+p^8 r (-8.63116 r^2+19.3261 r r_0-10.695 r_0^2)+p^7 r (25.4073 r^2-61.4392 r r_0+36.0281 r_0^2)+p^6 r (-25.4485 r^2+80.1206 r r_0-54.6246 r_0^2)+p^5 r (-18.958 r^2-9.26834 r r_0+27.9908 r_0^2)+p^4 r (67.5176 r^2-88.9673 r r_0+22.0663 r_0^2)+p^3 r (-62.3959 r^2+97.8017 r r_0-36.3316 r_0^2)+p^2 r (25.6081 r^2-41.9346 r r_0+17.1297 r_0^2)+p r (-4.42145 r^2+6.8935 r r_0-2.84697 r_0^2)+r (0.322118 r^2-0.368892 r r_0+0.119768 r_0^2).§.§ Logarithmic GB-Coupling Functional: TEXTg_tt(r) ≈𝒩^1_log/𝒟^1_log(1- r_0/r), √(g_tt(r)g_rr(r))≈𝒩^2_log/𝒟^2_log,where 𝒩^1_log= p^13 (-0.0626084 r^3 r_0+0.0626084 r^2 r_0^2+0.0626084 r r_0^3-0.0626084 r_0^4)+p^12 (1. 
r^4-0.782889 r^3 r_0-0.230599 r^2 r_0^2-1.21711 r r_0^3+1.2306 r_0^4)+p^11 (-18.2212 r^4+29.3638 r^3 r_0-10.8868 r^2 r_0^2+7.28478 r r_0^3-7.54055 r_0^4)+p^10 (96.59 r^4-177.29 r^3 r_0+79.5585 r^2 r_0^2-19.3886 r r_0^3+20.5203 r_0^4)+p^9 (-234.614 r^4+457.171 r^3 r_0-221.897 r^2 r_0^2+23.902 r r_0^3-24.2931 r_0^4)+p^8 (282.436 r^4-551.051 r^3 r_0+275.112 r^2 r_0^2-7.20824 r r_0^3-1.91936 r_0^4)+p^7 (-133.991 r^4+164.055 r^3 r_0-46.0945 r^2 r_0^2-13.4629 r r_0^3+41.7195 r_0^4)+p^6 (-32.4983 r^4+341.82 r^3 r_0-300.797 r^2 r_0^2+12.9375 r r_0^3-53.4003 r_0^4)+p^5 (34.5271 r^4-420.143 r^3 r_0+404.167 r^2 r_0^2+0.0115351 r r_0^3+31.9943 r_0^4)+p^4 (28.0229 r^4+178.063 r^3 r_0-241.727 r^2 r_0^2-4.86976 r r_0^3-8.90541 r_0^4)+p^3 (-30.9869 r^4-12.6459 r^3 r_0+69.661 r^2 r_0^2+2.21647 r r_0^3+0.507243 r_0^4)+p^2 (8.38109 r^4-10.7164 r^3 r_0-6.17049 r^2 r_0^2-0.251471 r r_0^3+0.12598 r_0^4)+p (-1.03581 r^4+2.49996 r^3 r_0-0.826428 r^2 r_0^2-0.0167726 r r_0^3+0.0233143 r_0^4)+0.389358 r^4-0.281216 r^3 r_0+0.0686144 r^2 r_0^2,𝒟^1_log= p^12 r^2 (1. r^2-2. r r_0+1. r_0^2)+p^11 r^2 (-18.2212 r^2+36.6579 r r_0-18.4367 r_0^2)+p^10 r^2 (96.59 r^2-197.003 r r_0+100.413 r_0^2)+p^9 r^2 (-234.614 r^2+483.3 r r_0-248.622 r_0^2)+p^8 r^2 (282.436 r^2-565.092 r r_0+281.412 r_0^2)+p^7 r^2 (-133.991 r^2+161.605 r r_0-19.988 r_0^2)+p^6 r^2 (-32.4983 r^2+345.678 r r_0-336.459 r_0^2)+p^5 r^2 (34.5271 r^2-417.829 r r_0+424.172 r_0^2)+p^4 r^2 (28.0229 r^2+175.038 r r_0-246.175 r_0^2)+p^3 r^2 (-30.9869 r^2-11.7993 r r_0+69.4436 r_0^2)+p^2 r^2 (8.38109 r^2-10.8149 r r_0-6.01301 r_0^2)+p r^2 (-1.03581 r^2+2.54012 r r_0-0.815274 r_0^2)+r^2 (0.389358 r^2-0.281216 r r_0+0.0686144 r_0^2), 𝒩^2_log= p^9 (1. r^3-1.5934 r^2 r_0+0.9878 r r_0^2-0.394399 r_0^3)+p^8 (-11.4341 r^3+18.4084 r^2 r_0 -9.70115 r r_0^2+2.72685 r_0^3)+p^7 (52.5869 r^3-85.6446 r^2 r_0+40.9887 r r_0^2-7.89466 r_0^3)+p^6 (-129.875 r^3+213.791 r^2 r_0-96.5185 r r_0^2+12.2873 r_0^3)+p^5 (190.394 r^3-316.075 r^2 r_0 +137.73 r r_0^2-10.895 r_0^3)+p^4 (-170.683 r^3+284.439 r^2 r_0-121.325 r r_0^2+5.25315 r_0^3)+p^3 (92.159 r^3-152.461 r^2 r_0+64.1216 r r_0^2-1.0636 r_0^3)+p^2 (-28.3648 r^3+45.1052 r^2 r_0 -18.6363 r r_0^2-0.0535942 r_0^3)+p (4.63158 r^3-6.37023 r^2 r_0+2.46462 r r_0^2+0.0339455 r_0^3) -0.413973 r^3+0.400205 r^2 r_0-0.112264 r r_0^2, 𝒟^2_log= p^9 r (1. r^2-1.5934 r r_0+0.593401 r_0^2)+p^8 r (-11.4341 r^2+18.4084 r r_0-6.9743 r_0^2)+p^7 r (52.5869 r^2-85.6446 r r_0+33.0761 r_0^2)+p^6 r (-129.875 r^2+213.791 r r_0-84.1127 r_0^2)+p^5 r (190.394 r^2-316.075 r r_0+126.509 r_0^2)+p^4 r (-170.683 r^2+284.439 r r_0-115.594 r_0^2)+p^3 r (92.159 r^2-152.461 r r_0+62.6652 r_0^2)+p^2 r (-28.3648 r^2+45.1052 r r_0-18.518 r_0^2)+p r (4.63158 r^2-6.37023 r r_0+2.46727 r_0^2)+r (-0.413973 r^2+0.400205 r r_0-0.112264 r_0^2). | http://arxiv.org/abs/2312.16479v2 | {
"authors": [
"Prosenjit Paul"
],
"categories": [
"gr-qc",
"astro-ph.HE",
"hep-th"
],
"primary_category": "gr-qc",
"published": "20231227090546",
"title": "Quasinormal modes of Einstein--scalar--Gauss--Bonnet black holes"
} |
The existence of a median-type ternary operation on a metric space is known to have a number of implications for the geometry of the space. For such operations, if two of the three arguments coincide, they also coincide with the output of the operation. We consider ternary operations with the opposite property: if two of the arguments coincide, the output is equal to the third one. The existence of such an operation is a necessary condition for the space to be an absolute retract. § INTRODUCTIONIn 1979, van Mill and van de Vel <cit.> launched the investigation of topological spaces X that admit a continuous ternary operation σ with the absorption property: σ(a, a, b)=σ(a, b, a)=σ(b, a, a)=a ∀ a, b∈ X.Such an operation, called a mixer in <cit.>, is inherited by any retract of X, thus providing a strong necessary condition for a continuum to be an absolute retract <cit.>. The prototypical example of a mixer is the coordinate-wise median on the Hilbert cube [0, 1]^∞. In this paper we compare (<ref>) with its natural counterpart, the anti-absorption property:τ(a, a, b)=τ(a, b, a)=τ(b, a, a) = b ∀ a, b∈ X.We call a continuous map τ : X^3→ X with the property (<ref>) a co-mixer. The prototypical example of such an operation is the symmetric difference of sets: τ(A, B, C) = A △ B △ C. When X is a metric space, one often considers its Lipschitz retracts instead of merely continuous ones. Such retracts inherit Lipschitz (co-)mixers from the original space.Every absolute Lipschitz retract admits a Lipschitz mixer, and the converse holds under additional assumptions <cit.>.In this paper we will show that:* a connected finite CW-complex with a co-mixer must be contractible (<ref>)* a metric space with a Lipschitz co-mixer may have nontrivial fundamental group (<ref>) * a metric space may admit a Lipschitz mixer without a Lipschitz co-mixer, and vice versa (<ref>–<ref>)* normed vector spaces, as well as absolute Lipschitz retracts, admit a Lipschitz co-mixer (<ref>) As an application, in <ref> we give a short proof of a theorem of Akofor <cit.> on the Lipschitz retraction of finite subsets of normed spaces. § TOPOLOGICAL CONSIDERATIONS We begin with a topological lemma involving the concept of an H-space: a topological space with a continuous binary operation with a two-sided unit. The book <cit.> contains the necessary background; in particular, see <cit.> on H-spaces. Suppose that X is a topological space with a co-mixer τ. Then for every point e∈ X and every n∈ℕ the homotopy group π_n(X, e) satisfies 2α = 0 for all α∈π_n(X, e).If, in addition, X is a connected finite CW-complex, then it is contractible. The binary operation μ(x, y) = τ(x, y, e) satisfies μ(x, e) = x = μ(e, x) for all x∈ X and thus makes X an H-space. By the Eckmann-Hilton argument, the group operation on π_n(X, e) is commutative and agrees with the one induced by μ (see Theorems 3.1 and 3.2 in <cit.>). Since μ(x, x) = e for all x∈ X, it follows that 2α = 0 for all α∈π_n(X, e). If, in addition, X is a connected finite CW-complex, then the fact that 2π_n(X, e)=0 for all n implies that X is contractible <cit.>.Every topological space with a mixer has trivial homotopy groups of all orders <cit.> (the theorem is stated for continua but the proof works in greater generality). In contrast, some spaces with a co-mixer are not simply-connected. There exists a connected metric space X with a Lipschitz co-mixer and π_1(X)≅ℤ/(2ℤ). Let λ be the Lebesgue measure on [0, 1].
Let ℳ be the associated measure algebra, i.e., the set of all measurable subsets of [0, 1] modulo nullsets. When endowed with the metric ρ(A, B) = λ(A △ B), the measure algebra becomes a contractible metric space. It admits both a Lipschitz mixer: σ(A, B, C) = (A∩ B)∪ (A∩ C)∪ (B∩ C) and a Lipschitz co-mixer: τ(A, B, C) = A △ B △ C. The set complement operation is an isometric involution on ℳ. By identifying each set with its complement we obtain a quotient space (ℳ̃, ρ̃) where ρ̃([A], [B]) = min(λ(A △ B), λ(A △ B^c)).By construction π_1(ℳ̃) ≅ℤ/(2ℤ), which implies that ℳ̃ does not admit a mixer. In particular, (<ref>) does not descend to the quotient. In contrast, (<ref>) does: the induced co-mixer τ̃([A], [B], [C]) = [A △ B △ C] is well-defined on ℳ̃^3, and it is easy to see that τ̃ is 1-Lipschitz in each argument. In the context of nonsmooth metric spaces, the concept of Lipschitz homotopy is sometimes preferable to continuous homotopy <cit.>. It is thus relevant to note that the Lipschitz fundamental group of the space in Proposition <ref> is also nontrivial. Indeed, the curve γ(t) = [0, t], 0≤ t≤ 1, is a geodesic between ∅ and [0, 1] in ℳ, and its image in the quotient space is a noncontractible rectifiable loop based at [∅]. § METRIC CONSIDERATIONS When (X, d) is a metric space and E⊂ X, we define the displacement of a map f : E→ X as disp(f) = sup_x∈ E d(f(x), x). A map f : X→ Y is called L-Lipschitz if d_Y(f(a), f(b))≤ L d_X(a, b) for all a,b∈ X. Suppose that (X, d) is a metric space with a co-mixer τ that is L-Lipschitz in each variable. Then for any two points a, b∈ X there exists an L-Lipschitz map f : X→ X such that f(a)=b, f(b)=a, and disp(f)≤ Ld(a, b).Let f(x)=τ(x, a, b). The displacement bound follows from the Lipschitz property of τ and the fact that τ(x, a, a)=x for all x. The other claimed properties of f are also easy to see.The map f in Lemma <ref> need not have a Lipschitz inverse or even be injective; thus, this property is different from Lipschitz homogeneity <cit.>. But it does imply some kind of homogeneity of the space, as the following lemma demonstrates.Let Γ be a metric arc with endpoints a, b; that is, Γ is the image of a homeomorphism ϕ : [0, 1]→Γ such that ϕ(0)=a and ϕ(1)=b. Suppose that Γ is unrectifiable but ϕ([ε, 1]) is rectifiable for every ε>0. Then Γ does not admit a Lipschitz co-mixer. The image of a rectifiable arc under a Lipschitz map is also rectifiable. Thus, any Lipschitz map f : Γ→Γ with f(b)=a must be identically equal to a. This shows that Γ fails the conclusion of Lemma <ref>. A metric space (X, d) has bounded turning if there exists a constant C such that any two points a, b∈ X are contained in some compact connected set E⊂ X with diam E ≤ Cd(a, b).Corollary 3.3 in <cit.> shows that a metric arc has bounded turning if and only if it admits a Lipschitz mixer. On the other hand, Example 5.5 in <cit.> shows how to construct a metric arc Γ of bounded turning which satisfies the assumption of Lemma <ref>. We thus obtain a metric arc with a Lipschitz mixer but no Lipschitz co-mixer. If one is willing to accept a non-connected space, then a simpler example of this kind is available. First, note that every subset of ℝ admits a Lipschitz mixer, namely the median map (which we denote by med). Let X=ℝ∖ (-1, 1) with the standard metric. Suppose that f : X→ X is a Lipschitz map with f(1)=-1. By continuity, f(x)≤ -1 for all x≥ 1. Hence disp(f)=∞, which by Lemma <ref> implies that X has no Lipschitz co-mixer. It is not clear if there is a geometric description of the subsets of ℝ that admit a Lipschitz co-mixer. For example, every additive subgroup of ℝ does: let τ(a, b, c) = a+b+c-2 med(a, b, c).
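Although the arguments above are purely measure-theoretic, both co-mixers are easy to test numerically. The following Mathematica sketch is entirely our own illustration (all names and parameters are ad hoc choices): it checks the anti-absorption law for τ(a,b,c) = a+b+c-2 med(a,b,c) on the subgroup ℤ, and then models measurable sets as 0/1 indicator vectors on a uniform grid, where the discretized symmetric-difference co-mixer is an isometry (in particular 1-Lipschitz) in each argument.

med[a_, b_, c_] := Median[{a, b, c}];
tauZ[a_, b_, c_] := a + b + c - 2 med[a, b, c];
{tauZ[5, 5, -3], tauZ[2, 7, 2], tauZ[-1, 4, 4]}   (* expected: -3, 7, -1 *)
(* discretized measure algebra: sets as indicator vectors on a grid of n points *)
SeedRandom[7]; n = 1000;
dist[x_, y_] := Total[Abs[x - y]]/n;        (* approximates lambda(X symdiff Y) *)
tauM[x_, y_, z_] := Mod[x + y + z, 2];       (* X symdiff Y symdiff Z *)
{setA, setB, setC, setD} = RandomInteger[1, {4, n}];
dist[tauM[setA, setA, setB], setB]           (* 0: anti-absorption tau(A, A, B) = B *)
dist[tauM[setA, setC, setD], tauM[setB, setC, setD]] == dist[setA, setB]   (* True: isometry in the first slot *)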
§ TERNARY OPERATIONS IN NORMED SPACES The authors of <cit.> asked "whether every Banach space has a "natural" mixer", thus contrasting their existence theorem <cit.> with the lack of a direct construction. Dranishnikov <cit.> sketched the following construction of a mixer in an arbitrary normed space, attributed to E. V. Shchepin.Given a normed space X, define the incenter mixer σ : X^3→ X by the formulaσ(a, b, c) = (‖b-c‖a + ‖a-c‖b + ‖a-b‖c)/(‖b-c‖ + ‖a-c‖ + ‖a-b‖),extended by σ(a, a, a)=a.When X is a Euclidean space, σ is the center of the inscribed circle in the triangle abc. It is the weighted average of a, b, c where the weight of each vertex is the length of the opposite side. The mixer property of σ is immediate from (<ref>). The Lipschitz continuity is less obvious, especially considering that the circumcenter fails this property. In any normed space X, the incenter mixer (<ref>) is 1-Lipschitz in each variable separately. By the translation invariance of σ, we can place one of the points at 0, so that the mixer is computed for the triple 0, a, x with a fixed nonzero vector a∈ X. Let F(x) := σ(0, a, x) = (‖a‖x + ‖x‖a)/(‖a‖+‖x‖+‖x-a‖).By the homogeneity of F it suffices to consider the case ‖a‖=1. This simplifies (<ref>) to F(x) = (x + ‖x‖a)/p, where p = 1+‖x‖+‖x-a‖.Since p is bounded from below, the map F is locally Lipschitz, hence absolutely continuous on lines. It remains to estimate its directional derivative D_u F for a unit vector u. We havep^2 D_u F(x) = p (u + a D_u‖x‖) - (x+‖x‖a) (D_u‖x‖+D_u‖x-a‖) = pu + (a-x+‖x-a‖a) D_u‖x‖ - (x+‖x‖a)D_u‖x-a‖,hence ‖p^2 D_u F(x)‖ ≤ p + 2‖x-a‖ + 2‖x‖.Since p≥ 2, it follows that p^2 = p + p(‖x-a‖ + ‖x‖) ≥ p + 2‖x-a‖ + 2‖x‖, which proves that ‖D_uF(x)‖ ≤ 1. An alternative construction of a Lipschitz mixer is available in certain sequence spaces: take the component-wise median med(a, b, c) of three vectors a, b, c. In ℓ^∞ this construction produces a mixer with a stronger Lipschitz property than the one in Lemma <ref>: ‖med(a, b, c) - med(a', b', c')‖ ≤ max(‖a-a'‖, ‖b-b'‖, ‖c-c'‖).However, med(a, b, c) does not necessarily lie in the affine span of a, b, c, and therefore the median mixer is not inherited by subspaces. It remains unclear which normed spaces admit a mixer that is jointly nonexpanding in the sense of (<ref>). Given a normed space X, define the Nagel co-mixer τ : X^3→ X by the formula τ(a, b, c) = a+b+c - 2σ(a, b, c), where σ is as in (<ref>). In the context of Euclidean spaces τ(a, b, c) is the Nagel point of the triangle abc, i.e., the point at which the three perimeter bisectors meet <cit.>. It can be expressed as the weighted average of a, b, c where the weight of each vertex is the sum of the two adjacent sides minus the opposite side. Both the co-mixer property and the Lipschitz continuity of τ (with constant 3) follow from (<ref>) at once. However, the Lipschitz constant can be improved. In any normed space X, the Nagel co-mixer (<ref>) is 1-Lipschitz in each variable separately. As in the proof of Lemma <ref>, it suffices to consider, for a fixed vector a with ‖a‖=1, the map G(x):=τ(0, a, x) = a + x - 2 F(x), where F(x)=p^-1 (x + ‖x‖a) with p = 1+‖x‖+‖x-a‖. Because of (<ref>), the directional derivative D_uG is given by p^2 D_u G(x) = (p^2 - 2p)u - 2(a-x+‖x-a‖a) D_u‖x‖ + 2(x+‖x‖a)D_u‖x-a‖.Rearrange this as a linear combination of u, x, and a: p^2 D_u G(x) = (p^2 - 2p)u + 2(D_u‖x‖+D_u‖x-a‖)x + 2(‖x‖D_u‖x-a‖-(1+‖x-a‖)D_u‖x‖)a.If the coefficient of a in (<ref>) is nonnegative, then the triangle inequality yields‖p^2 D_u G(x)‖ ≤ p^2 - 2p + 2(D_u‖x‖+D_u‖x-a‖)‖x‖ + 2‖x‖D_u‖x-a‖ - 2(1+‖x-a‖)D_u‖x‖ ≤ p^2 - 2p + 4‖x‖D_u‖x-a‖ ≤ p^2 - 2p + 4‖x‖ ≤ p^2, since p ≥ 2‖x‖. If the coefficient of a in (<ref>) is negative, then ‖p^2 D_u G(x)‖ ≤ p^2 - 2p + 2(D_u‖x‖)‖x‖ + 2(1+‖x-a‖)D_u‖x‖ ≤ p^2 - 2p + 2‖x‖ + 2(1+‖x-a‖) = p^2.In either case we have ‖D_u G(x)‖ ≤ 1, completing the proof.
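To make the two constructions concrete, here is a small numerical illustration in the Euclidean plane. It is our own sketch, not part of the proofs; the point values are arbitrary, and the pair {σ, τ} anticipates the retraction studied in the next section.

incenter[a_, b_, c_] := (Norm[b - c] a + Norm[a - c] b + Norm[a - b] c)/
   (Norm[b - c] + Norm[a - c] + Norm[a - b]);
nagel[a_, b_, c_] := a + b + c - 2 incenter[a, b, c];
pair[{a_, b_, c_}] := {incenter[a, b, c], nagel[a, b, c]};
{p1, p2, p3} = {{0., 0.}, {4., 0.}, {1., 3.}};
pair[{p1, p2, p3}]                 (* incenter and Nagel point of the triangle *)
Chop[incenter[p1, p1, p2] - p1]    (* mixer: sigma(a, a, b) = a *)
Chop[nagel[p1, p1, p2] - p2]       (* co-mixer: tau(a, a, b) = b *)
pair[{p1, p2, p2}]                 (* a 2-element set is fixed, up to order *)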
Both the incenter mixer and the Nagel co-mixer of a, b, c belong to the convex hull of the set {a, b, c}.Every absolute Lipschitz retract X can be realized as a retract of a Banach space ℓ_∞(I) for some index set I <cit.>. Lemma <ref> implies that X admits a Lipschitz co-mixer. The existence of a Lipschitz mixer on such spaces is well-known <cit.>.The ternary symmetric difference (<ref>) is intertwined with the Nagel co-mixer (<ref>) in the following way. The Lebesgue measure λ is a map from ℳ onto [0, 1] with a right inverse given by φ(t) = [0, t]. For any a, b, c∈ [0, 1] we have λ(φ(a) △ φ(b) △ φ(c)) = a+b+c - 2 med(a, b, c). § RETRACTIONS OF FINITE SUBSET SPACES Any metric space X can be viewed as the first member of an infinite sequence of nested metric spaces (X(n), d_H), n=1, 2, …, where the elements of X(n) are nonempty subsets of X with at most n elements, and d_H is the Hausdorff metric. Given the natural isometric embeddings X(n)→ X(n+1), one may ask whether Lipschitz retractions X(n+1)→ X(n) also exist; this turns out to be a difficult question <cit.>. For example, the answer remains unknown when X is a general normed space; the case of Hilbert spaces was settled in <cit.>. The first nontrivial case of this problem, n=2, has been solved by Akofor <cit.>, who constructed a 731-Lipschitz retraction X(3)→ X(2) for any normed space X. Here we present a simpler construction with a smaller Lipschitz constant. We say that a (co-)mixer is symmetric if its value does not depend on the order of the three variables. Both the incenter mixer and the Nagel co-mixer are symmetric. The following lemma relates symmetric (co-)mixers to retractions.Let (X, d) be a metric space with a symmetric mixer σ and a symmetric co-mixer τ. If both σ and τ are L-Lipschitz with respect to each argument, then the mapρ({a, b, c}) = {σ(a, b, c), τ(a, b, c)} is a 9L-Lipschitz retraction from X(3) onto X(2). A set E⊂ X with fewer than 3 elements can be written as {a, b, c} by listing the same point more than once. The definition of ρ shows that, regardless of which point was repeated, ρ(E) = E.Given any two sets A, B∈ X(3), let δ = d_H(A, B) and pick two maps f : A→ B and g : B→ A such that disp(f)≤δ and disp(g)≤δ. Define h : B→ B so that h(b)=b when b∈ f(A) and h(b)=f(g(b)) otherwise. Note that h(B)=f(A) and disp(h)≤ 2δ. Using the Lipschitz property of σ and τ together with the displacement bounds for f and h, we obtain d_H(ρ(A), ρ(f(A))) ≤ 3Lδ and d_H(ρ(B), ρ(h(B))) ≤ 6Lδ. By the triangle inequality, d_H(ρ(A), ρ(B)) ≤ 9Lδ, as claimed.Combining Lemmas <ref>, <ref> and <ref>, we obtain a sharper version of <cit.>. Every normed space X admits a 9-Lipschitz retraction X(3)→ X(2) such that the image of every set A∈ X(3) is contained in the convex hull of A. | http://arxiv.org/abs/2312.16146v1 | {
"authors": [
"Leonid V. Kovalev"
],
"categories": [
"math.MG",
"51F30 (Primary) 30L05, 54B20, 54C15 (Secondary)"
],
"primary_category": "math.MG",
"published": "20231226181126",
"title": "Anti-absorbing ternary operations on metric spaces"
} |
We study the problem of universality in the anyon model described by the SU(2) Witten-Chern-Simons theory at level k. A classic theorem of Freedman-Larsen-Wang states that for k ≥ 3, k ≠ 4, braiding of the anyons of topological charge 1/2 is universal for topological quantum computing. For the case of one qubit, we prove a stronger result that double-braiding of such anyons alone is already universal. § INTRODUCTIONTopological quantum computing is an approach to building a fault-tolerant quantum computer using certain quasi-particles in two dimensions, called anyons. The physical systems that host anyons are 2-dimensional topological phases of matter. Topological phases are gapped quantum phases that go beyond Landau's theory of symmetry breaking and local order parameters; instead, they obey a new order called topological quantum order <cit.>. Such phases exhibit several remarkable properties including robust ground state degeneracy depending on the topology of the underlying system, long-range entanglement, protected gapless edge modes, fractionalized quasi-particle excitations, and exotic exchange statistics. The robustness of the ground/excited state space provides an ideal place to store quantum information as logical qubits. Braiding of anyons, a process of adiabatically interchanging anyons, induces a unitary transformation on the state space. These unitary transformations remain unchanged under local deformations of the braiding world-lines, and hence serve as logical quantum gates. This method of encoding and manipulating information in global degrees of freedom of anyons is called topological quantum computing <cit.>, and it has the advantage of achieving fault-tolerance at the `hardware' level. An important family of topological phases is described by the Witten-Chern-Simons theory associated with the Lie group SU(2) and a level k ∈ℤ_≥ 0 <cit.>. Denote the theory by SU(2)_k. It is among the earliest-studied anyon models. By a classic result of <cit.>, except for a few values of k, the model is universal for topological quantum computing, i.e., it is equivalent to the standard circuit model. Furthermore, the theory has various potential realizations in fractional quantum Hall (FQH) systems. For example, SU(2)_2 contains the Ising anyon expected to exist in FQH with filling factor ν = 5/2, and SU(2)_3 contains the Fibonacci anyon expected to exist in ν = 12/5 FQH. We elaborate a bit more on the universality of SU(2)_k. It contains an anyon type, which we denote by τ, with topological charge 1/2. By iteratively fusing τ with itself, any other anyon type in this model can be produced. In this sense, τ is the most critical anyon type in SU(2)_k. Denote by V^τ^⊗ 3_τ the space of three τ anyons with total type equal to τ, or equivalently, the space of four type-τ anyons (with total type trivial). This space has dimension two, and hence is a qubit. Braiding of the τ anyons induces a unitary representation of the braid group B_3 on the qubit. The theorem of <cit.> states that, for all k ≥ 3, k ≠ 4,8, this representation has a dense image in U(V^τ^⊗ 3_τ), implying universal quantum computing on one qubit by braiding. In fact, <cit.> shows that for any n ≥ 3, the image of the braid group representation is dense in U(V^τ^⊗ n_τ) [This result also holds for k=8, but n needs to be at least 5.].
These results set the theoretical foundation for universal topological quantum computing. In this paper, we focus on the qubit V^τττ_τ and study the representation from braiding, ρ_k : B_3 → U(V^τττ_τ). Recall that the braid group on n strands B_n has the standard generators σ_1, ⋯, σ_n-1 (see Sec. <ref>). B_n acts on the space of n identical anyons of a given type, where σ_i corresponds to a counterclockwise braiding of the i-th and (i+1)-th anyon. We call σ_i^2 a double-braiding; it corresponds to moving the i-th anyon counterclockwise around the (i+1)-th anyon and returning it to its original position in the end. See Figure <ref>. More generally, any braid in the group generated by the σ_i^2's is also called a double-braiding. Specializing to the case of one qubit V^τττ_τ in SU(2)_k with k ≥ 3, k ≠ 4, 8, while <cit.> states that the image of B_3 under ρ_k is dense in U(V^τττ_τ), we prove that the image of the subgroup of double-braidings alone is dense in U(V^τττ_τ). Our result is of significance in several aspects. Firstly, it is mathematically stronger than the theorem of <cit.> adapted to one qubit, and may reveal some hidden structure in the representations of the braid group. Secondly, it generalizes the work of <cit.> where the Fibonacci anyon was shown to be universal by double-braidings, which corresponds to the case k=3 here. Moreover, using the double-braiding universality of the Fibonacci anyon, the authors in <cit.> provided an elegant, exponentially fast algorithm to produce entangling leakage-free 2-qubit gates, which are necessary for universal quantum computing on multi-qubits. With the result in the current paper, the algorithm of <cit.> can be straightforwardly adapted to other SU(2)_k. Thirdly, from a physical point of view, double-braidings have the advantage that at each step only one anyon needs to move, it needs to move at most to the vicinity of its nearest neighbour, and it returns to its own position immediately after the move. Hence, there is no need to track the positions of the involved anyons. It should be noted that the group of double-braidings is strictly smaller than the pure braid group, which consists of braids where the anyons return to their own positions eventually. It is now a well-established dictionary that 2-dimensional topological phases of matter are characterized by the structure of a unitary modular tensor category, which is a braided category satisfying additional conditions. A double-braiding is also called a twine, a structure introduced in <cit.>. The twine structure can be formalized and defined on non-braided categories. There exist monoidal categories without a braiding structure, but with a twine structure. An example of this is the fermionic Moore-Read fusion category <cit.>. While most FQH states are expected to fit in the framework of modular tensor categories, some do not seem to. Instead, they might be described by twine fusion categories <cit.>. Our work on double-braidings could provide insight into exploring the power of quantum computing in those systems. We conjecture that our result on the universality of double-braidings also holds for the case of multi-qubits, i.e., the space of more than three anyons. For that generalization, the techniques utilized in this paper may not apply. Instead, it is possible to make use of the Lie-theoretical tools on the two-eigenvalue problem in <cit.>. We leave this as a future direction. The rest of the paper is organized as follows.
Section <ref> gives a brief overview of the algebraic theory of anyons and the setup of topological quantum computing, with a more detailed explanation in Appendix <ref>. In Sections <ref>–<ref>, we provide explicit data for SU(2)_k and explicit calculations of the braid group representations. Section <ref> contains the main result. § TOPOLOGICAL QUANTUM COMPUTING WITH ANYONS In this section, we provide a very brief introduction to topological quantum computing (TQC). There are many references with more comprehensive discussions on this subject, such as <cit.>. Mathematically, an anyon model is characterized by the structure of a unitary modular tensor category (MTC). An MTC can be described either in terms of abstract categorical language or by a set of concrete data. We take the second approach and provide a partial set of data for the purpose of fixing the convention. See Appendix <ref> for a more detailed discussion of MTCs. An anyon model has a finite set L = {a, b, c, ⋯} consisting of all the possible anyon types in a topological phase. Each anyon type a has a dual anyon type a̅. The ground state is considered as a special trivial anyon type, usually denoted by 1. The fusion rule describes the possible outcomes when fusing two anyons. Given a, b ∈ L, we formally write, a ⊗ b = ⊕_c ∈ L N_ab^c c, where N_ab^c denotes the number of different channels of fusing a and b to result in the output c. If there is no way to obtain c from the fusion, then N_ab^c = 0. If N_ab^c > 0, we say c is a total type or total charge of a and b, and call the triple (a,b;c) admissible. For simplicity, in the following discussions we will assume N_ab^c is either 0 or 1, i.e., the anyon model is multiplicity-free. For anyon types c, a_1, ⋯, a_n, denote by V_c^a_1a_2⋯ a_n the space of states representing n anyons a_1, ⋯, a_n with total charge c. A basis for such a space can be described as follows. Choose an upward-growing binary tree with one root at the bottom and n leaves at the top. See Figure <ref> for an illustration. Label the root by c and the leaves, from left to right, by a_1, a_2, ⋯, a_n. Now label each internal edge e by an anyon type b_e such that at each fork, the relevant triple of labels is admissible. Then the binary trees, over all possible labelings {b_e} of internal edges, form a basis of V_c^a_1a_2⋯ a_n, called a splitting-tree basis. For each labeled binary tree, one can interpret the state it represents as a splitting process. For example, the state represented by the tree in Figure <ref> is obtained by splitting c into b_n-2 and a_n, followed by splitting b_n-2 into b_n-3 and a_n-1, ⋯, followed by splitting b_1 into b_0 = a_1 and a_2. For n=2, there is one splitting tree as shown in Figure <ref> (Right), which has no internal edges. Hence dim(V_c^ab) = N_ab^c. For n=3, there are two splitting trees, with one internal edge, as shown on both sides of the equation below. The basis corresponding to the tree on the left side of the equation consists of all possible labelings m of the internal edge so that (a,b;m) and (m,c;d) are both admissible. Similarly, the basis for the tree on the right side consists of labelings n of the internal edge so that (b,c;n) and (a,n;d) are both admissible. Denote the matrix change between the two bases by F_d^abc.
More explicitly, the two bases are related by the F-move [F-move diagram: the left tree, with internal edge labeled m, equals ∑_n F^abc_d;nm times the right tree, with internal edge labeled n], where F^abc_d;nm is the (n,m)-entry of F_d^abc, and the sum is over all labelings n as described above. We call F_d^abc an F-matrix, its entries F-symbols or 6j-symbols, and the identity in the above equation an F-move. The F-symbols need to satisfy the Pentagon Equations <ref>. Using F-moves one can relate any two splitting-tree bases of the state spaces. The process of swapping positions of anyons is called a braiding. A braiding induces a unitary transformation on the state space. Consider two anyons a and b with total type c. A counterclockwise braiding of a and b maps a state in V_c^ab to one in V_c^ba. Since both spaces have dimension one, there exists a phase R^ba_c such that the following equality holds: [R-move diagram: the counterclockwise exchange of a and b, applied to the splitting of c into (a,b), equals R^ba_c times the splitting of c into (b,a)]. The above equality is called an R-move, and R^ba_c is called an R-symbol. The R-symbols need to satisfy the Hexagon Equations <ref> <ref>. The set of F- and R-symbols is crucial for calculations in the anyon model. Now, we discuss how to perform quantum computing with anyons. The state space of multi-anyons is the logical space to store information. Typically, one chooses multiple identical anyons, say n type-a anyons with total type c for some n. Denote this space by V_c^a^⊗ n := V_c^aa⋯ a. The type a needs to be non-Abelian so that the dimension of V_c^a^⊗ n approaches infinity as n →∞. In general, V_c^a^⊗ n does not have a natural tensor product structure, and we need to choose a subspace which does have a tensor product structure of multi-qubits or multi-qudits. The computational basis for the qudits can be chosen to be any splitting-tree basis. Braiding of anyons induces a representation of the braid group and acts as quantum gates on the logical space V_c^a^⊗ n. Recall that the n-strand braid group B_n has a presentation as B_n = ⟨σ_1, ⋯, σ_n-1 |σ_iσ_i+1σ_i = σ_i+1σ_iσ_i+1, σ_iσ_j = σ_jσ_i, |i-j|>1 ⟩, where σ_i (resp. σ_i^-1) corresponds to the braid diagram in Figure <ref> (Left) (resp. (Right)). We have a representation, ρ : B_n → U(V_c^a^⊗ n), where σ_i (resp. σ_i^-1) acts on V_c^a^⊗ n by counterclockwise (resp. clockwise) braiding of the i-th with the (i+1)-th anyon. With a chosen splitting-tree basis, a matrix for each braid generator can be computed using F- and R-symbols. The set of quantum gates obtained from braiding is the image of this braid group representation. § UNIVERSALITY IN SU(2)_K ANYONS §.§ SU(2)_k anyons Recall that the SU(2) Witten-Chern-Simons theory at level k ∈ℤ_≥ 0 is denoted by SU(2)_k. The MTC corresponding to SU(2)_k is constructed from finite-dimensional representations of the quantum group U_q(sl_2) for q = e^2π i/(k+2). It also describes the SU(2)_k Wess–Zumino–Witten conformal field theory <cit.>. Below we describe the data of the SU(2)_k MTC, following Sec. 5.4 of <cit.>, though the data also appeared in earlier literature in different forms.
The MTC has k+1 anyon types labeled by half-integers 0, 1/2, 2/2, …, k/2, where 0 denotes the trivial anyon type. The fusion rule is given by, j_1⊗ j_2=⊕_j=|j_1-j_2|^min{j_1+j_2, k-j_1-j_2} j, where j has an increment of 1 in the above sum, implying that for admissible (j_1, j_2; j), j_1 + j_2 + j is always an integer. Throughout, fix q = e^2π i/(k+2), and denote by q^x := e^2π i x/(k+2). For n ∈ℤ_≥ 0, the quantum integer [n]_q and the quantum factorial [n]_q! are defined by, [n]_q := (q^n/2 - q^-n/2)/(q^1/2 - q^-1/2), [n]_q! = ([n-1]_q!) [n]_q, [0]_q! = 1. The R-symbols are given by, R_j^j_1,j_2 = (-1)^j-j_1-j_2 q^(1/2)(j(j+1)-j_1(j_1+1)-j_2(j_2+1)). The F-symbols are given by, F_j;j_12,j_23^j_1,j_2,j_3 = [F_j^j_1,j_2,j_3]_j_12,j_23 = (-1)^j_1+j_2+j_3+j√([2j_12+1]_q [2j_23+1]_q){ j_1 j_2 j_12 j_3 j j_23}_q, where { j_1 j_2 j_12 j_3 j j_23}_q = Δ(j_1,j_2,j_12) Δ(j_12,j_3,j) Δ(j_2,j_3,j_23) Δ(j_1,j_23,j) ×∑_z (-1)^z [z+1]_q! / ([z-j_1-j_2-j_12]_q! [z-j_12-j_3-j]_q! [z-j_2-j_3-j_23]_q! [z-j_1-j_23-j]_q! [j_1+j_2+j_3+j-z]_q! [j_1+j_12+j_3+j_23-z]_q! [j_2+j_12+j+j_23-z]_q!), where the sum is over z, with an increment of 1, for which all the [·]_q! in the sum are defined, and Δ(j_1,j_2,j_3) = √([-j_1+j_2+j_3]_q! [j_1-j_2+j_3]_q! [j_1+j_2-j_3]_q! / [j_1+j_2+j_3+1]_q!). A fact that will not be used in this paper is that a close cousin of the SU(2)_k MTC is the Temperley-Lieb-Jones MTC obtained from skein theory <cit.>. Under a proper translation between the level k and the Kauffman variable A in the skein theory, the SU(2)_k MTC and the Temperley-Lieb-Jones MTC are equivalent as braided fusion categories, but differ by a ribbon twist. §.§ The 1-qubit model For k ≥ 2, we consider the anyon type labeled by τ := 1/2 in the SU(2)_k model. For k = 2, τ is the Ising anyon, while for k = 3, τ is closely related to the Fibonacci anyon [It is the composite of the Fibonacci anyon with a semion.]. There are two standard ways to obtain a qubit using τ anyons. They are the dense encoding and the sparse encoding, corresponding to the spaces V^τττ_τ and V^ττττ_0, respectively. That is, the dense encoding takes, as a qubit, the space of three τ anyons with total type τ, while the sparse encoding takes the space of four τ anyons with total type 0. From the fusion rule, 1/2⊗1/2 = 0 ⊕ 1, 1/2⊗ 1 = 1/2⊕3/2, 0 ⊗ j = j, both V^τττ_τ and V^ττττ_0 are two-dimensional, with a splitting-tree basis given in Figure <ref>. Denote the splitting-tree basis in Figure <ref> (Left) by {|x⟩ : x = 0, 1} and the one in Figure <ref> (Right) by {|x⟩' : x = 0, 1}. The braiding of the τ anyons induces representations of the braid groups with the action of the generator σ_i given by the counter-clockwise swap of the i-th and the (i+1)-th anyon, ρ_k : B_3 → U(V^τττ_τ), ρ_k' : B_4 → U(V^ττττ_0). For the dense encoding, under the basis {|x⟩ : x = 0, 1} of V^τττ_τ, the action of the generators σ_1 and σ_2 is computed in Figures <ref> and <ref>, respectively. Hence, ρ_k(σ_1)|x⟩ = R^ττ_x|x⟩ and ρ_k(σ_2)|x⟩ = ∑_y,z ∈{0,1}F^τττ_τ;yx R^ττ_y (F^τττ_τ)^-1_zy |z⟩. Denote by, R = ( [ R^ττ_0 0; 0 R^ττ_1 ] ), F = F^τττ_τ = ( [ F^τττ_τ;00 F^τττ_τ;01; F^τττ_τ;10 F^τττ_τ;11 ] ). From the data in Section <ref>, R = ( [ -q^-3/4 0; 0 q^1/4 ] ), F = F^τττ_τ = (√(q)/(q+1)) ( [ -1 √(q+1/q+1); √(q+1/q+1) 1 ] ). Note that F is a symmetric, real, involutory matrix. Then we have ρ_k(σ_1) = R = ( [ -q^-3/4 0; 0 q^1/4 ] ), ρ_k(σ_2) = F^-1RF = (q^1/4/(1+q)) ( [ q √(q+1/q+1); √(q+1/q+1) -1/q ] ).
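As a sanity check on these conventions, the following Mathematica sketch, written in the style of the code in Appendix <ref> (all helper names are ours), builds F^τττ_τ directly from the quantum 6j-symbol formula above and verifies, numerically at a sample level k = 5, that it reproduces the 2×2 matrix just displayed, that F is involutory, and that ρ_k(σ_1) and ρ_k(σ_2) satisfy the braid relation σ_1σ_2σ_1 = σ_2σ_1σ_2.

Clear[q];
QInt[n_] := (q^(n/2) - q^(-n/2))/(q^(1/2) - q^(-1/2));   (* quantum integer [n]_q *)
QFact[n_] := Product[QInt[m], {m, 1, n}];                (* quantum factorial [n]_q! *)
TriD[j1_, j2_, j3_] := Sqrt[QFact[-j1 + j2 + j3] QFact[j1 - j2 + j3]*
    QFact[j1 + j2 - j3]/QFact[j1 + j2 + j3 + 1]];        (* the triangle coefficient Delta *)
(* the q-deformed 6j symbol; z runs over the admissible integer range *)
SixJ[j1_, j2_, j12_, j3_, j_, j23_] :=
  TriD[j1, j2, j12] TriD[j12, j3, j] TriD[j2, j3, j23] TriD[j1, j23, j]*
   Sum[(-1)^z QFact[z + 1]/(QFact[z - j1 - j2 - j12] QFact[z - j12 - j3 - j]*
       QFact[z - j2 - j3 - j23] QFact[z - j1 - j23 - j] QFact[j1 + j2 + j3 + j - z]*
       QFact[j1 + j12 + j3 + j23 - z] QFact[j2 + j12 + j + j23 - z]),
    {z, Max[j1 + j2 + j12, j12 + j3 + j, j2 + j3 + j23, j1 + j23 + j],
     Min[j1 + j2 + j3 + j, j1 + j12 + j3 + j23, j2 + j12 + j + j23]}];
(* F^{tau tau tau}_tau with tau = 1/2; the sign (-1)^(j1+j2+j3+j) = (-1)^2 = 1 here *)
Fmat = FullSimplify[Table[Sqrt[QInt[2 m + 1] QInt[2 n + 1]]*
     SixJ[1/2, 1/2, m, 1/2, 1/2, n], {n, 0, 1}, {m, 0, 1}]];
qnum = Exp[2 Pi I/7];   (* k = 5 *)
Chop[N[(Fmat /. q -> qnum) - ({{-1, Sqrt[QInt[3]]}, {Sqrt[QInt[3]], 1}}/QInt[2] /. q -> qnum)]]   (* expected: zero matrix *)
fm = Fmat /. q -> qnum;
Chop[N[fm . fm - IdentityMatrix[2]]]                      (* expected: zero matrix, F is involutory *)
rho1 = DiagonalMatrix[{-qnum^(-3/4), qnum^(1/4)}]; rho2 = Inverse[fm] . rho1 . fm;
Chop[N[rho1 . rho2 . rho1 - rho2 . rho1 . rho2]]          (* expected: zero matrix, the braid relation *)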
For the sparse encoding, we show that the image ρ_k'(B_4) is in fact the same as ρ_k(B_3) if we identify |x⟩' ∈ V^ττττ_0 with |x⟩∈ V^τττ_τ. Under this identification, it is clear that ρ_k'(σ_1) = ρ_k'(σ_3) = ρ_k(σ_1), while ρ_k'(σ_2) can be computed, similarly to ρ_k(σ_2) in Figure <ref>, as, ρ_k'(σ_2)|x⟩' = (F^xττ_0)^-1_τ x (∑_y,z ∈{0,1}F^τττ_τ;yx R^ττ_y (F^τττ_τ)^-1_zy) F^zττ_0;zτ |z⟩'. From the F-symbols of SU(2)_k in Section <ref>, it can be checked that F^j_1j_2j_3_j; j_23j_12 = 1 whenever j = 0 and the involved labels are admissible. Hence we have ρ_k'(σ_2) = ρ_k(σ_2). This shows the equality of ρ_k'(B_4) and ρ_k(B_3). Therefore, from now on, we will focus on the dense encoding only. A logical qubit is given by the space V^τττ_τ whose computational basis is {|0⟩, |1⟩} as shown in Figure <ref> (Left). The set of 1-qubit logical gates obtained from anyon braidings corresponds to elements in the image ρ_k(B_3). Since quantum gates are well defined only up to global U(1) phases, the gates in ρ_k(B_3) can be multiplied by any phase, or they should be considered as elements of the projective unitary group PU(V^τττ_τ). Let V be a Hilbert space, and 𝒢 be a subset of the unitary group U(V). 𝒢 is said to be universal on V if 𝒢∪ U(1) generates a dense subgroup of U(V). To study the universality of the braiding gates, it is convenient to normalize the generators of B_3 so that the image of ρ_k lies in SU(V^τττ_τ). Explicitly, multiplying the generators by -i q^1/4, we obtain the normalized representation ρ̃_k : B_3 → SU(V^τττ_τ), ρ̃_k(σ_1) = R̃ = ( [ i q^-1/2 0; 0 -i q^1/2 ] ), ρ̃_k(σ_2) = F^-1R̃ F = (i√(q)/(q+1)) ( [ -q -√(q+1/q+1); -√(q+1/q+1) 1/q ] ). As an example, when k = 2, ρ̃_2(σ_1) and ρ̃_2(σ_2) are given, respectively, by, e^π i/4 ( [ 1 0; 0 -i ] ) and (1/√(2)) ( [ 1 -i; -i 1 ] ), which, together with U(1) phases, generate the 1-qubit Clifford group. The multi-qubit models can be obtained by increasing the number of anyons utilized. We will not discuss that direction. The universality of braiding the τ anyons is a classic problem settled in <cit.>. We rephrase Theorem 4.1 of <cit.> in the setup of one qubit. For any integer k ≥ 3, k ≠ 4, 8, let V^τττ_τ be the 1-qubit space in the SU(2)_k model. Then the set of braiding gates corresponding to the image of the representation ρ̃_k defined in Equations <ref> <ref> is universal on V^τττ_τ. §.§ Universality of double-braiding in SU(2)_k In this section, we prove a stronger result than Theorem <ref>. That is, for those values of k in Theorem <ref>, we show that the set of double-braiding gates on V^τττ_τ is universal in the SU(2)_k model. By a double-braiding is meant a braid generated by even powers of the standard generators σ_i's of the braid group. We will rely on two critical results. The content of Lemma <ref> below first appeared in the work of Kitaev <cit.>, where he used the lemma to prove the universality of certain gate sets. A proof of the lemma can be found in <cit.>. Let A and B be two non-commuting elements of SU(2), both of which are of infinite order. Then A and B generate a dense subgroup of SU(2).
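Although not needed for the proof, Lemma <ref> is easy to probe numerically. The following Mathematica sketch is entirely our own illustration (the two rotations, the target gate, and all parameters are arbitrary choices): it takes two non-commuting rotations by angles that are irrational multiples of π and records how well random words in them approximate a target element of SU(2).

SeedRandom[1];
U[th_, v_] := MatrixExp[-I th (v[[1]] PauliMatrix[1] + v[[2]] PauliMatrix[2] +
      v[[3]] PauliMatrix[3])/2];
a0 = U[Sqrt[2.], {1., 0., 0.}];        (* rotation by an irrational angle *)
b0 = U[Sqrt[3.], {0.6, 0.8, 0.}];      (* a second one, about a different axis *)
gens = {a0, b0, Inverse[a0], Inverse[b0]};
target = U[0.7, {0., 0., 1.}];
word[n_] := Dot @@ RandomChoice[gens, n];      (* a random word of length n *)
best[n_, trials_] := Min[Table[Norm[word[n] - target, "Frobenius"], {trials}]];
{best[4, 2000], best[8, 2000], best[12, 2000]}  (* the error typically shrinks as the word length grows *)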
The second critical result is the following. Suppose we have at most four distinct rational multiples of π lying strictly between 0 and π/2 for which some rational linear combination of their cosines is rational but no proper subset has this property. Then the appropriate linear combination is proportional to one from the following list: cosπ/3 = 1/2, -cosϕ+cos(π/3-ϕ)+cos(π/3+ϕ)=0 (0<ϕ<π/6), cosπ/5-cos2π/5=1/2, cosπ/7-cos2π/7+cos3π/7=1/2, cosπ/5-cosπ/15+cos4π/15=1/2, -cos2π/5+cos2π/15-cos7π/15=1/2, cosπ/7+cos3π/7-cosπ/21+cos8π/21=1/2, cosπ/7-cos2π/7+cos2π/21-cos5π/21=1/2, -cos2π/7+cos3π/7+cos4π/21+cos10π/21=1/2, -cosπ/15+cos2π/15+cos4π/15-cos7π/15=1/2. The following is the main result of the paper. For any integer k ≥ 3, k ≠ 4, 8, let τ be the anyon of type 1/2 in the anyon model SU(2)_k, and ρ_k : B_3 → U(V^τττ_τ) be the representation of B_3 on the dense-encoding 1-qubit V^τττ_τ. Then the images of σ_1^2 and σ_2^2 under ρ_k, together with phases, generate a dense subgroup of U(V^τττ_τ). That is, the double-braiding gates alone are universal. It suffices to show that, for the normalized ρ̃_k : B_3 → SU(V^τττ_τ) defined in Equations <ref> <ref>, ρ̃_k(σ_1^2) and ρ̃_k(σ_2^2) generate a dense subgroup of SU(V^τττ_τ). Let A := ρ̃_k(σ_1^2σ_2^4) = R̃^2 F^-1R̃^4 F, B := ρ̃_k(σ_1^2σ_2^6) = R̃^2 F^-1R̃^6 F. From the expressions of R̃ (Equation <ref>) and F (Equation <ref>), the matrices of A, B, and W := ABA^-1B^-1 can be calculated as (see Appendix <ref> for a Mathematica implementation), A = ( [ -(q^4+q^2-q+1)/(q^3+q^2) -√(q+1/q+1)(q^3-q^2+q-1)/(q^2(q+1)); -√(q+1/q+1)(q^3-q^2+q-1)/(q+1) -(q^5+q^2+q+1)/(q(q+1)^2) ] ), B = ( [ (q^7+q^6+q^5+1)/(q^3(q+1)^2) √(q+1/q+1)(q^5-q^4+q^3-q^2+q-1)/(q^3(q+1)); √(q+1/q+1)(q^5-q^4+q^3-q^2+q-1)/(q(q+1)) (q^7+q^2+q+1)/(q^2(q+1)^2) ] ), W_11 = (q^13-3q^12+6q^11-11q^10+16q^9-19q^8+22q^7-21q^6+19q^5-14q^4+10q^3-6q^2+3q-1)/(q^6(q+1)), W_12 = -(q-1)^2 √(q+1/q+1)(q^10-q^9+3q^8-4q^7+5q^6-6q^5+6q^4-5q^3+4q^2-2q+1)/(q^7(q+1)), W_21 = (q-1)^2 √(q+1/q+1)(q^10-2q^9+4q^8-5q^7+6q^6-6q^5+5q^4-4q^3+3q^2-q+1)/(q^4(q+1)), W_22 = (-q^13+3q^12-6q^11+10q^10-14q^9+19q^8-21q^7+22q^6-19q^5+16q^4-11q^3+6q^2-3q+1)/(q^7+q^6). We will show that, for the values of k in the statement of the theorem, A and B are both of infinite order and they do not commute. Then by Lemma <ref>, A and B generate a dense subgroup of SU(2), implying the validity of the theorem. The rest of the proof is devoted to verifying these two assumptions on A and B. Proving A has infinite order. Denote the eigenvalues of A by e^±θ_k i, 0 ≤θ_k ≤π. It suffices to show θ_k is not a rational multiple of π. Recall that q = e^2π i/(k+2). We have, 2cosθ_k = tr(A) = -((q-1)q+1)(q^2+1)/q^2 = -2 - (q^2 + 1/q^2) + (q + 1/q) = -2 - 2cos 4π/(k+2) + 2cos 2π/(k+2). That is, cos 4π/(k+2) - cos 2π/(k+2) + cosθ_k = -1. We wish to show the above identity is not equivalent to any of the identities in Theorem <ref>, and hence θ_k cannot be a rational multiple of π. But we need to first verify the assumptions in that theorem. We make four statements, which can be verified by checking small values of k and applying Theorem <ref>, noting that when k > 6, 2π/(k+2) and 4π/(k+2) lie strictly between 0 and π/2. Assume k ≥ 3. A) The only k for which cos 2π/(k+2) is rational is k=4: cos 2π/6 = 1/2. B) The only k's for which cos 4π/(k+2) is rational are k=4, 6, 10: cos 4π/6 = -1/2, cos 4π/8 = 0, cos 4π/12 = 1/2. C) The only k's for which neither cos 2π/(k+2) nor cos 4π/(k+2) is rational but certain non-trivial rational combinations of them are rational are k = 3, 8: -cos 2π/5 - cos 4π/5 = 1/2, cos 2π/10 - cos 4π/10 = 1/2. D) Furthermore, the only k's for which cosθ_k is rational are k = 4, 8: cosθ_4 = 0, cosθ_8 = -1/2.
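Before proceeding, we note that the displayed trace identity and the rational values in Statements A) and B) are easy to spot-check numerically. The following Mathematica sketch is our own (the helper gate[k, w] builds R̃^2 F^-1 R̃^w F from the data above, so gate[k, 4] is A):

gate[k_, w_] := Module[{qq = Exp[2 Pi I/(k + 2)], s, Ft, Rt},
   s = Sqrt[qq + 1/qq + 1];
   Ft = {{-1, s}, {s, 1}} Sqrt[qq]/(qq + 1);
   Rt = I DiagonalMatrix[{qq^(-1/2), -qq^(1/2)}];
   MatrixPower[Rt, 2] . Inverse[Ft] . MatrixPower[Rt, w] . Ft];
Table[Chop[Tr[gate[k, 4]] - (-2 - 2 Cos[4 Pi/(k + 2)] + 2 Cos[2 Pi/(k + 2)])],
  {k, 3, 20}]                                     (* expected: a list of zeros *)
N[{Cos[2 Pi/6], Cos[4 Pi/6], Cos[4 Pi/8], Cos[4 Pi/12]}]   (* 1/2, -1/2, 0, 1/2 as in A) and B) *)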
Statement D) implies that for k = 4, 8, θ_k is a rational multiple of π. For all other k's, we claim that θ_k is not a rational multiple of π; note already that θ_k ≠ 0, π/2, π, since cosθ_k is irrational. The above statements also imply that for k ≥ 7, k ≠ 8, 10, none of cos 4π/(k+2), cos 2π/(k+2), or cosθ_k is rational. Furthermore, for those values of k, no two elements from {cos 4π/(k+2), cos 2π/(k+2), cosθ_k} can have a non-trivial rational combination which is rational. The claim for the pair {cos 2π/(k+2), cos 4π/(k+2)} is clear from Statement C). For the other two pairs, say {cos 4π/(k+2), cosθ_k}, if the claim fails, then one can substitute cosθ_k by a rational expression in cos 4π/(k+2) in Equation <ref>, yielding a non-trivial rational combination of cos 4π/(k+2) and cos 2π/(k+2) which is rational. That is a contradiction. Then for k ≥ 7, k ≠ 8, 10, if θ_k is a rational multiple of π strictly between 0 and π/2, then Equation <ref> is an identity concerning a rational combination of the cosines of three distinct angles satisfying the conditions in Theorem <ref>. However, this identity is not equivalent to any of those in that theorem, a contradiction. If θ_k is a rational multiple of π strictly between π/2 and π, then Equation <ref> can be written as, cos 4π/(k+2) - cos 2π/(k+2) - cos(π - θ_k) = -1. Applying the same argument to the new equation leads to a similar contradiction. This shows that for k ≥ 7, k ≠ 8, 10, θ_k is not a rational multiple of π. The remaining cases to check are k = 3, 5, 6, 10. The corresponding identity of Equation <ref> for these values of k simplifies as follows: cosθ_3 = (√(5)-2)/2, cosθ_5 = cos 3π/7 + cos 2π/7 - 1, cosθ_6 = (√(2)-2)/2, cosθ_10 = (√(3)-3)/2. It can be checked directly by a computer program, or by applying Theorem <ref>, that the above θ_k's are not rational multiples of π. This completes the proof that A has infinite order for k ≥ 3, k ≠ 4, 8. Proving B has infinite order. Denote the eigenvalues of B again by e^±θ_k i, 0 ≤θ_k ≤π. We have, 2cosθ_k = tr(B) = (q^2+1)((q-1)q(q^2+1)+1)/q^3 = -2 + (q^3 + 1/q^3) - (q^2 + 1/q^2) + 2(q + 1/q) = -2 + 2cos 6π/(k+2) - 2cos 4π/(k+2) + 4cos 2π/(k+2). That is, 2cos 2π/(k+2) - cos 4π/(k+2) + cos 6π/(k+2) - cosθ_k = 1. The rest of the proof is completely similar to the case of the matrix A, by repeatedly applying Theorem <ref>; the only new ingredient is the angle 6π/(k+2), whose cosine is rational exactly for k = 4, 7, 10, 16. We leave the details to curious readers.
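The exceptional values above, as well as the trace identity for B, can be spot-checked in the same way. The following sketch is again our own; the helper gate[k, w] is repeated from the previous sketch so that the block runs standalone.

gate[k_, w_] := Module[{qq = Exp[2 Pi I/(k + 2)], s, Ft, Rt},
   s = Sqrt[qq + 1/qq + 1];
   Ft = {{-1, s}, {s, 1}} Sqrt[qq]/(qq + 1);
   Rt = I DiagonalMatrix[{qq^(-1/2), -qq^(1/2)}];
   MatrixPower[Rt, 2] . Inverse[Ft] . MatrixPower[Rt, w] . Ft];
Chop[{Tr[gate[3, 4]]/2 - (Sqrt[5.] - 2)/2,
  Tr[gate[5, 4]]/2 - (Cos[3 Pi/7] + Cos[2 Pi/7] - 1),
  Tr[gate[6, 4]]/2 - (Sqrt[2.] - 2)/2,
  Tr[gate[10, 4]]/2 - (Sqrt[3.] - 3)/2}]      (* cos(theta_k) for k = 3, 5, 6, 10: all zeros *)
Table[Chop[2 Cos[2 Pi/(k + 2)] - Cos[4 Pi/(k + 2)] + Cos[6 Pi/(k + 2)]
    - Tr[gate[k, 6]]/2 - 1], {k, 3, 20}]       (* the identity for B: all zeros *)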
Proving W ≠ I. Note that W ≠ I if and only if Tr(W) ≠ 2. Assume Tr(W) = 2. Then, 2 = -((q-1)q+1)(q^4-2q^3-2q+1)/q^3 = 4 - (q^3 + 1/q^3) + 3(q^2 + 1/q^2) - 3(q + 1/q) = 4 - 2cos 6π/(k+2) + 6cos 4π/(k+2) - 6cos 2π/(k+2). That is, cos 6π/(k+2) - 3cos 4π/(k+2) + 3cos 2π/(k+2) = 1. That the above identity fails can be checked directly for the small values k ≤ 10, and by Theorem <ref> for k ≥ 11.
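The failure of this identity, or equivalently the fact that A and B never commute, can also be observed directly; the sketch below (ours, with the same standalone helper as before) tabulates Tr(W) and the left side of the identity.

gate[k_, w_] := Module[{qq = Exp[2 Pi I/(k + 2)], s, Ft, Rt},
   s = Sqrt[qq + 1/qq + 1];
   Ft = {{-1, s}, {s, 1}} Sqrt[qq]/(qq + 1);
   Rt = I DiagonalMatrix[{qq^(-1/2), -qq^(1/2)}];
   MatrixPower[Rt, 2] . Inverse[Ft] . MatrixPower[Rt, w] . Ft];
comm[k_] := Module[{a = gate[k, 4], b = gate[k, 6]},
   a . b . Inverse[a] . Inverse[b]];
Table[{k, Chop[N[Tr[comm[k]]]]}, {k, 3, 12}]     (* no trace equals 2, so W != I *)
Table[N[Cos[6 Pi/(k + 2)] - 3 Cos[4 Pi/(k + 2)] + 3 Cos[2 Pi/(k + 2)] - 1],
  {k, 3, 12}]                                    (* the left side never vanishes *)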
§ ANYON MODELS Mathematically, an anyon model is characterized by the structure of a unitary modular tensor category (MTC). To avoid abstract categorical language, we describe an MTC with a set of concrete data. Label set. Associated with each anyon model is a finite set L = {a, b, c, ⋯} consisting of all the possible anyon types in a topological phase. The ground state is considered as a special trivial anyon type, and is usually denoted by 1 ∈ L. For each anyon type x ∈ L, there exists x̅∈ L corresponding to the anti-particle (i.e., the dual anyon type) of x. We require that 1̅ = 1 and x̅̅̅ = x. Fusion rule. For a, b ∈ L, fusing a and b produces different possible anyon types. Formally, this is written as, a ⊗ b = ⊕_c ∈ L N_ab^c c, where N_ab^c denotes the number of different channels of fusing a and b to result in the output c. If there is no way to obtain c from the fusion, then N_ab^c = 0. One can also view the equality in the above equation from an alternative perspective. Namely, the composite type of a and b is a superposition of all possible anyon types, with each type c appearing in N_ab^c copies. If N_ab^c > 0, we say c is a total type or total charge of a and b, and call the triple (a,b;c) admissible. The collection of the integers {N_ab^c | a,b,c ∈ L} is called the fusion rule. The fusion rule should satisfy the following requirements. * The fusion rule is commutative, i.e., a ⊗ b = b ⊗ a, implying N_ab^c = N_ba^c, ∀ a,b,c. * The dual of a ⊗ b as a composite equals b̅⊗a̅, implying N_ab^c = N_b̅a̅^c̅, ∀ a,b,c. * 1⊗ a = a, implying N_1 a^b = δ_a,b, ∀ a,b. * 1 is a total type of a and b if and only if a = b̅, implying N_ab^1 = δ_a, b̅, ∀ a,b. * The fusion rule is associative, i.e., (a ⊗ b) ⊗ c = a ⊗ (b ⊗ c), implying ∑_p ∈ L N_ab^p N_pc^d = ∑_q ∈ L N_aq^d N_bc^q, ∀ a,b,c,d. For simplicity, in the following discussions we will assume N_ab^c is either 0 or 1, i.e., the anyon model is multiplicity-free. This already covers a large family of anyon models, including the ones considered in the current paper. State space. For anyon types c, a_1, ⋯, a_n, denote by V_c^a_1a_2⋯ a_n the space of states representing n anyons a_1, ⋯, a_n with total charge c. The dimension and bases of these spaces are described inductively as follows. For n = 1, V_c^a_1 = 0 if c ≠ a_1, and V_c^c is 1-dimensional with a canonical basis denoted by the left diagram of Figure <ref>. One can think of the diagram as representing the state obtained by `doing nothing' to an existent anyon c. For n = 2, V_c^a_1a_2 = 0 if N_a_1a_2^c = 0, and V_c^a_1a_2 is 1-dimensional otherwise, with a non-canonical basis denoted by the right diagram in Figure <ref>. One can then think of the diagram as representing the state obtained from the process of splitting c into the pair a and b. This choice of basis is not canonical, as one can multiply it by an arbitrary phase. We will use the diagrams in Figure <ref> as building blocks to describe bases of multi-anyon spaces. For n ≥ 3, take an upward-growing binary tree with one root at the bottom and n leaves at the top. See Figure <ref> for an illustration. It is to be understood that the tree is constructed using the two diagrams in Figure <ref>. Label the root by c and the leaves, from left to right, by a_1, a_2, ⋯, a_n. Now label each internal edge e by an anyon type b_e such that at each fork, the relevant labels are admissible. Then the binary trees, over all possible labelings {b_e} of internal edges, form a basis of V_c^a_1a_2⋯ a_n. For each labeled binary tree, one can similarly interpret the state it represents as a splitting process. For example, the state represented by the tree in Figure <ref> is obtained by splitting c into b_n-2 and a_n, followed by splitting b_n-2 into b_n-3 and a_n-1, ⋯, followed by splitting b_1 into b_0 = a_1 and a_2. Such a basis is called a splitting-tree basis. For the case of n=3, there are exactly two such binary trees, as shown on both sides of Equation <ref>, each of which provides a basis of V_d^abc. Each tree has one internal edge. The basis corresponding to the tree on the left side of the equation consists of all possible labelings m of the internal edge so that (a,b;m) and (m,c;d) are both admissible. Similarly, the basis for the tree on the right side consists of labelings n of the internal edge so that (b,c;n) and (a,n;d) are both admissible. Denote the matrix change between the two bases by F_d^abc. More explicitly, the two bases are related by the F-move [F-move diagram, as in Equation <ref>: the left tree with internal label m equals ∑_n F^abc_d;nm times the right tree with internal label n], where F^abc_d;nm is the (n,m)-entry of F_d^abc, and the sum is over all labelings n as described above.
Note that here the anyon types n and m are used as the indices of the entries of F_d^abc. We call F_d^abc an F-matrix, its entries F-symbols or 6j-symbols, and the identity in Equation <ref> an F-move. Using the F-move or its inverse, we can relate any two splitting-tree bases of V_c^a_1a_2⋯a_n. For V_e^abcd, there are exactly five splitting-tree bases, and Figure <ref> shows the F-moves connecting them. In particular, starting from the basis labeled by 1, there are two ways of performing F-moves to obtain the basis labeled by 3, namely, either via the path 1→2→3 or via the path 1→5→4→3. Since both ways induce a basis change between 1 and 3, this imposes constraints on the F-symbols, namely,

F^mcd_e;zn F^abz_e;ym = ∑_x ∈ L F^abc_n;xm F^axd_e;yn F^bcd_y;zx, ∀ a,b,c,d,e,m,n,y,z.

Equation <ref> is known as the Pentagon Equations. It is a non-trivial fact that the Pentagon Equations guarantee that the change between splitting-tree bases via F-moves is consistent for an arbitrary state space.

Braiding. The process of swapping the positions of anyons is called a braiding. Since anyons live in 2-dimensional space, a counterclockwise braiding has a different world-line from that of a clockwise braiding. The world-line of a sequence of braidings of multiple anyons is a braid diagram, hence the name of the process. A braiding induces a unitary transformation on the state space. Consider two anyons a and b with total type c. A counterclockwise braiding of a and b maps a state in V_c^ab to one in V_c^ba. Since both spaces have dimension one, there exists a phase R^ba_c such that the following equality holds:

[R-move diagram: acting with the braid generator s_1^-1 on the splitting tree of c into (a,b) equals R^ba_c times the splitting tree of c into (b,a).]

The above equality is called an R-move, and the collection {R^ab_c} is called the R-symbols. Since a counterclockwise braiding followed by a clockwise braiding is equivalent to the identity process, we have:

[Inverse R-move diagram: acting with the braid generator s_1 on the splitting tree of c into (a,b) equals (R^ab_c)^-1 times the splitting tree of c into (b,a).]

Consider the space V_d^abc, and braid a with b and c. Each node in Figure <ref> represents a basis of V_d^bca, and performing an F-move or R-move changes one basis into another. To change from basis 1 to basis 3, one can follow either the path 1→2→3 or the path 1→6→5→4→3. Consequently, we obtain the Hexagon Equation,

R^ba_m F^bac_d;nm R^ca_n = ∑_x ∈ L F^abc_d;xm R^xa_d F^bca_d;nx.

By replacing the counterclockwise braidings in Figure <ref> with clockwise braidings, we obtain another Hexagon Equation,

(R^ab_m)^-1 F^bac_d;nm (R^ac_n)^-1 = ∑_x ∈ L F^abc_d;xm (R^ax_d)^-1 F^bca_d;nx.

Topological spin. Each anyon type a has an (intrinsic) topological spin θ_a, which is always a root of unity. The type a is said to be bosonic if θ_a = 1, fermionic if θ_a = -1, and semionic if θ_a = i. The topological spins are required to satisfy the following conditions.

* The trivial anyon is bosonic, θ_1 = 1.
* An anyon and its dual have equal topological spin, θ_a = θ_a̅, ∀ a ∈ L.
* Whenever c is a total type of a and b, we have θ_c θ_a^-1 θ_b^-1 = R^ab_c R^ba_c.
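As a quick illustration of the last constraint, one can plug in the standard Fibonacci data θ_τ = e^{4πi/5}, R^{ττ}_1 = e^{-4πi/5}, R^{ττ}_τ = e^{3πi/5} (quoted from the literature on that model, independently of the model studied in this paper):

(* check theta_c theta_a^-1 theta_b^-1 == R^{ab}_c R^{ba}_c for a = b = tau, c = 1, tau *)
thetaTau = Exp[4 Pi I/5];
R1  = Exp[-4 Pi I/5];   (* R^{tt}_1 *)
Rtt = Exp[3 Pi I/5];    (* R^{tt}_t *)
Simplify[{1/thetaTau^2 == R1^2, thetaTau/thetaTau^2 == Rtt^2}]
(* -> {True, True} *)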
Quantum dimension. For each anyon type a, define an |L| × |L| matrix N_a whose (b,c)-entry is N_ab^c = N_ba^c. Hence the entries of N_a are non-negative integers. By the Perron-Frobenius theorem, N_a has an eigenvalue d(a), called the Frobenius-Perron dimension of a, which is greater than or equal to, in absolute value, any other eigenvalue. In the anyon model, we also call d(a) the quantum dimension of a. To get a sense of what d(a) measures, consider the dimension of the space of n type-a anyons with total type 1. We use the splitting-tree basis in Figure <ref> with a = a_1 = ⋯ = a_n and c = 1 to compute this dimension for large n:

∑_b_1, ⋯, b_n-2 N_aa^b_1 N_b_1a^b_2 ⋯ N_b_n-3a^b_n-2 N_b_n-2a^1 = ∑_b_1, ⋯, b_n-3 N_aa^b_1 N_b_1a^b_2 ⋯ N_b_n-3a^a̅ = ((N_a)^n-2)_aa̅ ∼ d(a)^n-2 as n → ∞.

Thus, d(a) measures the asymptotic size of the space of n type-a anyons. Clearly, d(a) ≥ 1. An anyon a is called Abelian if d(a) = 1, and non-Abelian otherwise.

S-matrix. Define the |L| × |L| modular S-matrix with entries

S_ab := θ_a^-1 θ_b^-1 ∑_c ∈ L N_a̅b^c θ_c d(c).

The S-matrix is required to be invertible.

To summarize, an anyon model or a unitary MTC is described by a label set, a fusion rule, F-symbols, R-symbols, and topological spins, from which one can derive the quantum dimensions and the S-matrix. These data should satisfy the various compatibility conditions listed in this section.

§ MATHEMATICA CODE

Below we list the Mathematica code to compute some of the matrices used in Sections <ref> and <ref>, including F, R̃, A, B, and W.

Clear[q];
(* quantum integer [n]_q *)
QuantumInteger[n_] := (q^(n/2) - q^(-n/2))/(q^(1/2) - q^(-1/2));
(* the 2x2 F-matrix *)
F = FullSimplify[{{-1, Sqrt[QuantumInteger[3]]}, {Sqrt[QuantumInteger[3]], 1}}/QuantumInteger[2]];
(* the braiding matrix R-tilde *)
Rtilde = DiagonalMatrix[{q^(-1/2), -q^(1/2)}]*I;
(* the double braids A, B and their commutator W *)
A = Simplify[MatrixPower[Rtilde, 2] . F . MatrixPower[Rtilde, 4] . F];
B = Simplify[MatrixPower[Rtilde, 2] . F . MatrixPower[Rtilde, 6] . F];
W = Simplify[A . B . Inverse[A] . Inverse[B]];

Acknowledgment. S.X.C is partly supported by NSF grant CCF-2006667, Quantum Science Center (led by ORNL), and ARO MURI.
^1Pitaevskii BEC Center, INO-CNR and Dipartimento di Fisica, Università di Trento, I-38123 Trento, Italy
^2JEIP, USR 3573 CNRS, Collège de France, PSL Research University, 11 Place Marcelin Berthelot, F-75321 Paris, France

We theoretically study how the peculiar properties of the vacuum state of an ultra-strongly coupled system can affect basic light-matter interaction processes. In this unconventional electromagnetic environment, an additional emitter no longer couples to the bare cavity photons, but rather to the polariton modes emerging from the ultra-strong coupling, and the effective light-matter interaction strength is sensitive to the properties of the distorted vacuum state. Different interpretations of our predictions in terms of modified quantum fluctuations in the vacuum state and of radiative reaction in classical electromagnetism are critically discussed. Whereas our discussion is focused on the experimentally most relevant case of intersubband polaritons in semiconductor devices, our framework is fully general and applies to generic material systems.

Light-matter interactions in the vacuum of ultra-strongly coupled systems
Daniele De Bernardis^1,2, Gian Marcello Andolina^2, and Iacopo Carusotto^1
January 14, 2024
==============================================================================

The non-empty nature of the quantum vacuum is among the most fascinating effects emerging from quantum mechanics and quantum field theories <cit.>. Fundamental effects of atomic physics such as the Lamb shift <cit.>, spontaneous emission <cit.>, and vacuum-field Rabi oscillations <cit.> can be traced back to the quantum fluctuations of the electromagnetic field in the vacuum state. However, the quantum vacuum reveals itself also at a more macroscopic scale, for instance through the Casimir forces <cit.>, supporting the idea that this intriguing feature of quantum physics can be exploited for nano-manipulation and nano-mechanical devices <cit.>.

In recent years, the physics of the quantum vacuum has attracted growing interest also from the point of view of condensed matter physics, as innovative ways to manipulate the basic interaction mechanisms and induce new states of quantum matter have been claimed <cit.>. A crucial ingredient here is the capability to reach the ultrastrong coupling (USC) regime of light-matter interactions <cit.>, where the extremely large value of the coupling strength of polarizable emitters to the electromagnetic field leads to a significant distortion of the properties of the electromagnetic vacuum and its quantum fluctuations.

In this Letter, we give a new twist to this research by investigating how the peculiar properties of the USC vacuum affect basic light-matter interaction processes involving another emitter used as a probe. Experimentally observable consequences of this physics are highlighted, such as marked modifications of the vacuum-field Rabi oscillations and of the spontaneous emission rate.
While these features are naturally understood as experimentally accessible evidence of the distorted quantum vacuum state, alternative interpretations based on classical electromagnetism, fluctuation-dissipation theorems, and electrostatic forces are also proposed and critically discussed.

Modulo straightforward modifications, our framework applies to generic systems where the USC can be achieved <cit.>, from cyclotron excitations in metallic resonators <cit.>, to Josephson-junction-based devices coupled to superconducting microwave resonators <cit.>, to excitons in 2D materials <cit.>. For the sake of concreteness, in this Letter we focus our attention on a semiconductor-based platform based on intersubband (ISB) transitions in quantum wells (QWs), where USC was first predicted <cit.> and observed <cit.>. Such a platform still remains among the most promising ones for the study of quantum vacuum effects <cit.>.

We specifically consider the geometry sketched in Fig. <ref>(a), based on a planar metallic cavity mode ultrastrongly coupled to an ISB transition in a heavily doped QW, in the following called the dresser. The dressed vacuum of the USC regime is then used to non-perturbatively influence the coupling to the electromagnetic field of another QW, called the emitter. This latter QW is taken to be much less doped, so that the coupling to light of its own ISB transition is far below the ultra-strong coupling regime. Depending on whether the emitter is strongly or weakly coupled to light, the distortion of the vacuum state in the USC regime is visible as a strong reinforcement (suppression) of the vacuum-field Rabi oscillation frequency or of the spontaneous emission rate when the emitter is resonant with the lower (upper) polariton branch.

The model — More in detail, we consider a planar electromagnetic cavity of surface S and height L_c, hosting two polarizable QW slabs well separated in space along the direction z perpendicular to the cavity plane. On the cavity side, we focus our attention on the so-called TM_0 modes <cit.>: these modes are polarized along the z-axis and are at the heart of semiconductor-based cavity QED setups based on quantum well ISB transitions <cit.>. The two QWs (dresser and emitter) are coupled to the cavity mode by the same type of ISB dipole transitions <cit.>. The different QW sizes and doping densities result in different values of the resonant frequency and of the coupling strength to the cavity mode. Throughout this work, we place ourselves in the so-called electric dipole picture, dubbed d·E in the literature <cit.>. In this picture, the total Hamiltonian can be written in the following form <cit.>:

H ≈ H_c-d + ħω_e ∑_k b_k^† b_k - i(ħΩ_e/2) ∑_k √(ω_k/ω_e) (a_k - a_-k^†)(b_-k + b_k^†),

where the emitter-cavity interaction is taken at lowest order in its coupling strength and

H_c-d = ħω_d ∑_k d_k^† d_k + ∑_k ħω_k a_k^† a_k - i(ħΩ_d/2) ∑_k √(ω_k/ω_d) (a_k - a_-k^†)(d_-k + d_k^†) + (ħΩ_d^2/4ω_d) ∑_k (d_k + d_-k^†)(d_k + d_-k^†)

describes the (arbitrarily large) coupling of the dresser QW to the cavity. Here, a_k is the annihilation operator of the TM_0 cavity photon mode at in-plane wavevector k, with dispersion relation ω_k. The b_k and d_k operators are, instead, the annihilation operators of collective ISB excitations of wavevector k in the emitter and dresser QWs, with k-independent frequencies ω_e and ω_d, respectively.
Restricting ourselves to a weak excitation regime, the b_k and d_k operators can be safely approximated as bosonic <cit.>. The strength of the light-matter coupling of each QW is quantified by the dresser and emitter Rabi frequencies Ω_d,e, which are determined by the corresponding 2D electron densities n_d,e (and thus by the doping density) via the relation Ω_d,e^2 = f_d,e e^2 n_d,e/(ϵ_0 m L_c). Here e is the electron charge and m is the effective electron mass. f_d,e is the dimensionless oscillator strength parameter determined by the overlap of the electronic wavefunctions in the QW <cit.>, exactly equal to 1 in the case of parabolic wells <cit.>. Assuming all other parameters to be constant, the scaling of Ω_d,e with the doping level n_d,e allows one to experimentally control the strength of the light-matter coupling in the dresser and the emitter. The ISB resonance frequencies ω_d,e are then tuned by the geometry and the depth of the confinement potential in the QWs.

Emitter-polariton interactions — As already mentioned, in this work we focus on a regime where the emitter is much less doped than the dresser, n_e ≪ n_d, so that the coupling of the emitter to light Ω_e is much smaller than all other frequencies and can be taken at lowest order, while the highly doped dresser is in the USC regime, Ω_e ≪ {ω_e, ω_d, ω_k} ≃ Ω_d. In this regime, the emitter no longer probes the bare cavity photon modes but rather the cavity-dresser polariton modes resulting from the hybridization of the cavity photons and the dresser ISB excitations due to H_c-d in (<ref>).

When the emitter is in the strong emitter-polariton coupling regime, with a Rabi frequency exceeding the polariton linewidths, Ω_e ≫ γ_lp,up, the full polariton spectra arising from the mixing of all three modes shown in Fig. <ref>(b) display a selective anticrossing of the emitter (green) mode with the lower (left panel) or the upper (right panel) cavity-dresser polariton, depending on the value of the emitter frequency: in the figure, for each polariton branch, the color indicates the dominant cavity (blue), dresser (red), or emitter (green) character, and black represents maximal mixing. As a most remarkable feature, for the same value of Ω_e, we notice that the anticrossing with the lower polariton is much wider than the one with the upper polariton.

In formal terms, we can rewrite the total Hamiltonian of Eq. (<ref>) in the polariton basis as

H ≈ ħω_e ∑_k b_k^† b_k + ∑_k ħω_up,k p_up,k^† p_up,k + ∑_k ħω_lp,k p_lp,k^† p_lp,k + i∑_k (ħΩ_up,k/2 p_up,k b_k^† + ħΩ_lp,k/2 p_lp,k b_k^†) + h.c.,

where p_up,k, p_lp,k are the annihilation operators of a cavity-dresser polariton in the upper or lower polariton branch of (<ref>), with eigenfrequencies

ω_up/lp,k^2 = (ω_k^2 + ω̅_d^2)/2 ± √((ω̅_d^2 - ω_k^2)^2/4 + Ω_d^2 ω_k^2),

where ω̅_d = √(ω_d^2 + Ω_d^2) includes the depolarization shift of the dresser frequency <cit.>.
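As a minimal numerical sketch of these two branches (our addition; a linear TM_0 dispersion ω_k = ck and units ω_d = 1 are assumed purely for illustration):

(* cavity-dresser polariton branches, in units where the dresser frequency wd = 1 *)
wd = 1.; Od = 1.;                  (* dresser in the USC regime, Od ~ wd *)
wbar2 = wd^2 + Od^2;               (* depolarization-shifted frequency squared *)
wup[wk_] := Sqrt[(wk^2 + wbar2)/2 + Sqrt[(wbar2 - wk^2)^2/4 + Od^2 wk^2]];
wlp[wk_] := Sqrt[(wk^2 + wbar2)/2 - Sqrt[(wbar2 - wk^2)^2/4 + Od^2 wk^2]];
Plot[{wup[wk], wlp[wk], wk}, {wk, 0, 3}]
(* a polariton gap opens between the two branches *)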
The effective polariton-vacuum Rabi frequencies quantifying the coupling strength between the emitter and the lower/upper cavity-dresser polaritons read (Secs. <ref> and <ref> of the SM)

Ω_up,k/Ω_e = √(ω_k^2/(ω_e ω_up,k)) sin θ_k, Ω_lp,k/Ω_e = √(ω_k^2/(ω_e ω_lp,k)) cos θ_k,

with

cos^2 θ_k = (ω_up,k^2 - ω_k^2)/(ω_up,k^2 - ω_lp,k^2), sin^2 θ_k = (ω_k^2 - ω_lp,k^2)/(ω_up,k^2 - ω_lp,k^2).

Interestingly, all the information regarding the hybridization between the cavity photon and the dresser excitations due to the USC is contained in the hybridization angle θ_k, which summarizes into a single parameter the Hopfield coefficients expressing the polariton operators p_lp,k and p_up,k in terms of the cavity photon a_k and dresser d_k operators and their hermitian conjugates <cit.>.

The reformulation in terms of the polariton Hamiltonian (<ref>) provides a physical understanding of the peculiar features of the emitter-polariton coupling that we observed in Fig. <ref>(b). In Fig. <ref>(a) we show a color plot of the polariton-vacuum Rabi frequencies Ω_lp,k and Ω_up,k as a function of the wavenumber k and of the dresser Rabi frequency Ω_d, when the emitter is resonant with some state on the lower ω_e = ω_lp,k (left panel) or the upper ω_e = ω_up,k (right panel) polariton branch. In the full polariton spectrum of Fig. <ref>(b), Ω_lp/up,k quantifies the magnitude of the Rabi splitting. For a weak dresser Rabi frequency Ω_d ≪ ω_d, the resonant lower and upper polariton-vacuum Rabi frequencies display a similar behavior, with Ω_lp(up),k ≃ Ω_e when the polariton has a fully photonic nature and Ω_up(lp),k ≃ 0 when the polariton has a fully excitonic nature. In general, while the photonic weight is redistributed between the upper and lower polaritons, the total weight is conserved, Ω_lp,k^2 + Ω_up,k^2 ≈ Ω_e^2, as a consequence of the weakly dressed regime where ω_up/lp,k ≃ ω_k.

The physics drastically changes when the dresser enters the USC regime for Ω_d ≃ ω_d. Here ω_up/lp,k ≠ ω_k, and the square-root prefactors in (<ref>) start to matter: the coupling to the lower polariton Ω_lp,k is reinforced and remains significant up to higher wavenumbers, while the coupling to the upper polariton Ω_up,k is significantly reduced. As an immediate consequence, the conservation of the photonic weight is strongly violated, Ω_lp,k^2 + Ω_up,k^2 ≠ Ω_e^2. This quite remarkable behavior is due to the mixing of creation and annihilation operators in the Bogoliubov transformation to polariton operators <cit.>, so that the normal and anomalous terms constructively (destructively) interfere in determining the coupling of the lower (upper) polariton to the emitter. This marked asymmetry of the Rabi splitting of the lower and upper polariton branches is straightforwardly observed in a cavity transmission/reflection spectroscopy experiment.
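To make the asymmetry quantitative, one can evaluate Eq. (<ref>) on resonance with each branch, reusing the wup, wlp sketch above (illustrative choice Ω_d = ω_d, ω_k = ω_d):

(* hybridization angle and resonant emitter-polariton couplings, in units of Oe *)
cosT2[wk_] := (wup[wk]^2 - wk^2)/(wup[wk]^2 - wlp[wk]^2);
OlpOverOe[wk_, we_] := Sqrt[wk^2/(we wlp[wk])] Sqrt[cosT2[wk]];
OupOverOe[wk_, we_] := Sqrt[wk^2/(we wup[wk])] Sqrt[1 - cosT2[wk]];
{OlpOverOe[1., wlp[1.]], OupOverOe[1., wup[1.]]}
(* ~ {1.38, 0.33}: enhanced lower-polariton and suppressed upper-polariton coupling *)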
Fig. <ref> displays examples of simulated transmission spectra well within the strong emitter-polariton coupling regime Ω_lp/up,k ≫ γ_lp/up, where all polariton branches are well separated.

Quantum fluctuations in the USC vacuum — In order to obtain a deeper understanding of the relation between the modified emitter light-matter coupling strength and the properties of the USC dressed vacuum, we evaluate the fluctuations of the cavity electric displacement field D_k = i√(ϵ_0 ħω_k/(2SL_c)) (a_k - a_-k^†). Within our dipole representation, this field, rather than the electric field, represents in fact the correct electromagnetic degree of freedom to describe light-matter interactions <cit.>. By using the hybridization angle θ_k defined above, we can express this quantity as (Sec. <ref> of the SM)

⟨vac|D_k^2|vac⟩/(ϵ_0 ℰ_k)^2 = (ω_k/ω_up,k) sin^2 θ_k + (ω_k/ω_lp,k) cos^2 θ_k = (ω_e/ω_k)(Ω_lp,k^2/Ω_e^2 + Ω_up,k^2/Ω_e^2),

where ℰ_k^2 = ħω_k/(2ϵ_0 S L_c) are the quantum fluctuations of the electric (or, in this case equivalently, of the displacement) field in a bare cavity. The peculiarity of the USC is then visible in Fig. <ref>(b), where we display a color plot of the total electric displacement fluctuations in the different k modes as a function of the strength of the cavity-dresser coupling Ω_d. On the one hand, for weak or moderate Ω_d, the prefactors ω_k/ω_{up,lp},k in the first line of Eq. (<ref>) are close to one and thus play a minor role; thanks to the trigonometric identity sin^2 θ_k + cos^2 θ_k = 1, associated with the conservation of the photonic weight, the two contributions then sum up to the standard bare vacuum fluctuations. On the other hand, the total fluctuations get substantially increased in the USC regime, in connection with the increased value of the lower polariton-vacuum Rabi frequencies. More specifically, the second line of (<ref>) shows that the contributions of the lower and upper polariton frequencies to the vacuum fluctuations have an amplitude proportional to the polariton-emitter Rabi frequencies Ω_lp/up,k, a result that extends the traditional concept of vacuum-field Rabi splitting <cit.> to the USC vacuum case.

Spontaneous emission — While the Rabi splitting in the strong emitter-polariton coupling regime provides the most direct experimental signature of the modified Rabi couplings (<ref>), a related effect is visible also in the spontaneous emission rate in the weak emitter-polariton coupling regime γ_lp/up ≫ Ω_lp/up,k, where the Rabi oscillations are replaced by an irreversible emission process. In our system, the spontaneous emission is mediated by the resonant polariton mode and its rate reads <cit.> Γ_lp/up,k ≈ Ω_lp/up,k^2/γ_lp,up: in analogy with the corresponding modification of the Rabi splitting in the strong coupling regime, the effect of the USC vacuum state in the weak emitter-polariton coupling regime is to reinforce (suppress) the spontaneous emission into the lower (upper) polariton by an amount related to the modified quantum fluctuations of the electric displacement field (<ref>). In physical terms, this prediction can be understood as a novel form of the Purcell effect <cit.>, where one acts on the overall intensity of the field fluctuations in the USC vacuum rather than on just their frequency redistribution.
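Continuing the same numerical sketch, the normalized displacement fluctuations of Eq. (<ref>) can be evaluated directly:

(* total displacement-field fluctuations, normalized to the bare-cavity value *)
dFluct[wk_] := (wk/wup[wk]) (1 - cosT2[wk]) + (wk/wlp[wk]) cosT2[wk];
{dFluct[1.], dFluct[0.3]}
(* > 1 in the USC regime: the same enhancement that feeds the lower-polariton
   Rabi splitting and the Purcell rate *)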
Interestingly, our result is in line with the theory of spontaneous emission in dielectric media, and has the key advantage of a clear disentanglement of local field effects <cit.>.

Discussion — Our previous derivations were carried out within a quantum language, so the modifications of the vacuum-field Rabi splitting and of the spontaneous emission rate were naturally related to those of the zero-point fluctuations (<ref>). In connection with the intense debate that took place on the physical origin of spontaneous emission in terms of quantum fluctuation and/or radiative reaction effects <cit.>, it is interesting to assess whether our predictions can be equivalently expressed in classical terms of radiative reaction by the dressed electromagnetic field in the USC device. As a first step in this direction, it is straightforward to notice that our theory can be directly reformulated in terms of equations of motion for the cavity a_k and the emitter b_k and dresser d_k polarization fields, stemming from the Hamiltonian (<ref>) and re-interpreted as classical variables. This set of equations can be summarized in a dispersion relation, which can equivalently be derived directly from Maxwell's equations (Secs. <ref>-<ref> of the SM), of the form ω^2 ϵ_r(ω) = c^2 k^2, with the effective dielectric permittivity

ϵ_r(ω) = [1 - Ω_d^2/(ω̅_d^2 - ω^2 - iκ_d ω) - Ω_e^2/(ω̅_e^2 - ω^2 - iκ_e ω)]^-1,

that is an analog of the Clausius-Mossotti formula in a multiple-slab geometry <cit.> (here κ_d, κ_e represent the dresser and emitter radiative losses, as already introduced in Fig. <ref>). Interestingly, the deviation of this expression from a naive simple sum of the individual slabs' permittivities, ϵ_r(ω) ≈ 1 + Ω_d^2/(ω̅_d^2 - ω^2 - iκ_d ω) + Ω_e^2/(ω̅_e^2 - ω^2 - iκ_e ω), provides a signature of the key role played by the electrostatic interaction between the emitter and the dresser noticed in <cit.>, whose strength is indeed proportional to the product of the electron densities n_d n_e ∼ Ω_d^2 Ω_e^2. Once again, ω̅_d,e^2 = ω_d,e^2 + Ω_d,e^2 are the depolarization-shifted values of the dresser and emitter frequencies <cit.>. This formulation in terms of classical equations of motion is the strong emitter-polariton coupling generalization of the radiative reaction equations <cit.> to the case of a resonantly peaked density of states of the electromagnetic modes.

The analogy to the radiative reaction equations is even closer in the weak emitter-polariton coupling regime: close to resonance with the lower (upper) polariton mode, integrating out the polariton field leads to a radiative damping term of the usual form (Sec. <ref> of the SM),

ḃ_k = -iω_e b_k - Ω_lp(up),k^2/(2γ_lp(up)) b_k.

Physically, this close correspondence between the radiative reaction and the quantum fluctuations is a direct manifestation of the fluctuation-dissipation theorem relating the quantum fluctuations of the dressed polariton field in its vacuum state to the susceptibility of the vacuum to the polarization of the emitter <cit.>. All together, these arguments show how the whole vacuum phenomenology described so far can be equivalently re-interpreted in terms of electrodynamics in dense dielectric media.
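A short symbolic check of this structure in the lossless limit (with our shorthand aa = ω_d^2 - ω^2, bb = ω_e^2 - ω^2): the inverse, Clausius-Mossotti-like form above coincides exactly with the explicit two-slab permittivity derived in Sec. <ref> of the SM, whose cross term ∝ Ω_d^2 Ω_e^2 encodes the dresser-emitter electrostatic coupling.

(* lossless check: 1/(1 - chi_d - chi_e) vs the explicit two-slab permittivity *)
ClearAll[aa, bb, OD, OE];
full  = 1/(1 - OD^2/(aa + OD^2) - OE^2/(bb + OE^2));
slabs = 1 + (OD^2 bb + OE^2 aa + 2 OD^2 OE^2)/(aa bb - OD^2 OE^2);
Simplify[full == slabs]
(* -> True *)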
In particular, we have focused our attention on quantities of direct experimental access, such as the vacuum-field Rabi splitting of an additional emitter strongly coupled to USC polaritons, and its spontaneous emission in the weak emitter-polariton coupling limit. In both cases, a signature of the distorted vacuum state of the USC regime is visible as a marked asymmetry between the polariton branches. Even though our discussion is focused on a specific material system of major experimental interest, the predicted effects generally apply to any optical system in the USC regime. As such, our conclusions are of interest for a broad community of researchers, from circuit-QED devices to semiconductor optoelectronics and terahertz optics, and validate the picture that engineering the QED vacuum is indeed a powerful tool to control optical processes and, on the longer run, possibly also to manipulate the properties of materials <cit.>.

From a conceptual standpoint, we have shown that our predictions can be equivalently understood in terms of classical radiative reaction in dielectric materials or in terms of quantum fluctuations in the distorted USC vacuum state, the two pictures being connected by the fluctuation-dissipation theorem. On the one hand, the connection to classical radiative reaction provides a new point of view on vacuum effects as a tool to control materials, a concept that is attracting a growing interest but is typically investigated within a quantum language <cit.>. On the other hand, the quantum picture provides a transparent framework to go beyond a Fermi-golden-rule analysis of spontaneous emission in dielectric materials and describe the interplay of ultra-strong coupling with more sophisticated light-matter interaction phenomena, such as parametric downconversion, which directly involve the amplification of quantum fluctuations.

We are grateful to Raffaele Colombelli, Francesco Pisani, Yanko Todorov, Marco Schirò, Simone De Liberato, Peter Rabl and Giuseppe La Rocca for useful discussions. We acknowledge funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 101002955 – CONQUER), from the Provincia Autonoma di Trento, the Q@TN initiative, and the PNRR MUR project PE0000023-NQSTI.

Supplementary material for:
Modification of light-matter interactions in the vacuum of ultrastrongly coupled systems
Daniele De Bernardis^1,2, Gian Marcello Andolina^2, and Iacopo Carusotto^1
^1Pitaevskii BEC Center, INO-CNR and Dipartimento di Fisica, Università di Trento, I-38123 Trento, Italy
^2JEIP, USR 3573 CNRS, Collège de France, PSL Research University, 11 Place Marcelin Berthelot, F-75321 Paris, France
January 14, 2024
=====================================================================================================================

§ INTERSUBBAND POLARITONS IN THE DIPOLE REPRESENTATION

We review here the details of the derivation of the intersubband (ISB) polariton Hamiltonian, valid in the ultrastrong coupling (USC) regime, based on Ref. <cit.>.
Initially we focus only on a single quantum well (QW) and in the next section we generalize it to a multi-well configuration.The polarization density can be written explicitly in terms of raising/lowering operators of the respective intersubband transition <cit.>P(r) = e/S∑_μ >ν, kζ_μν(z) e^i k·r_∥( B_μνk + B_μν -k^†) u_z,Here e is the electron charge, S is the surface area of the slab (assumed to be equal to the cavity) and u_z is a unit vector along the z-axis. The in-plane position is r_∥ = (x,y,0) and k = (k_x, k_y, 0) is the in-plane wavector. The dipole density is given by ζ_μν(z) = z ψ_μ^*(z)ψ_ν(z).Here ψ_μ(z) is the wavefunction of the μ-th level of the QW <cit.>, defined in the interval [z_ qw-L_ qw/2, z_ qw+L_ qw/2 ], where L_ qw is the size of the quantum well along the z-axis and z_ qw is its central position. The ISB transition operators are given byB_μνk = ∑_qΨ̃^†_ν, q-kΨ̃_μ,q,where Ψ̃_μ, k is the Fermionic operator that annihilates an electron with in-plane momentum k in the μ-th subband. It is convenient to rewrite this operator asB_μνk = √(N_μν) b_μνk.where N_μν = ⟨N̂_ν|-⟩⟨N̂_μ|$⟩ is the electron population imbalance between two subbands.Considering only low energy transitions which take place just from the ground state (Fermi sea) to an upper band and assumingN_μν ≫1(heavy doping), we have that [b_μνk, b_μ ' ν ' k'^†] ≃δ_μμ 'δ_νν 'δ_k k'and the transition operators behave as Bosonic creation/annihilation operators for the collective excitation.Considering only transitions from the lowest subband, in the low excitation limit, we have thatN_μν=0 ≈N, whereNis the total number of electrons. From now on we always consider only transitions from the lowest subband, suppressing the indexν. The QW Hamiltonian is then just given by a collection of harmonic oscillatorsH_ qw = ħω_ qw∑_k b_k^† b_k. The light-matter Hamiltonian for the QW coupled to the cavity field can be derived by considering the total energy of the system, which is given by the matter's energy plus the electromagnetic energy <cit.> H= H_ qw + H_ em = H_ qw + ∫ d^3 r [ E^2(r) /2 ϵ_0 + ϵ_0 c^2/2B^2(r)].The coupling between matter and the electromagnetic field is provided by the fact that the charged matter generates and changes the electric and magnetic field.In the so-called dipole gauge this is realized by the following minimal coupling substitution in the cavity electric field energy density <cit.> ϵ_0 E(r) = D(r) - P(r).Notice that here we take only the displacement field to be transverse <cit.>, such that∇⃗·D(r) = 0. In this way we correctly recover Maxwell equations with a polarizable medium, where∇⃗·E(r) = - ∇⃗·P(r)/ϵ_0 = ρ_b(r)/ϵ_0, whereρ_b(r)is the so-called bound charge density <cit.>.The total light-matter Hamiltonian is then given by <cit.>H = H_ qw + ∫ d^3 r [ ( D(r) - P(r) )^2/2 ϵ_0 - ϵ_0 c^2/2A(r)·∇^2 A(r)]. The electric displacement in the cavity made of two perfect parallel mirrors can be written as <cit.> D(r) = i ∑_n,k, σ√(ϵ_0 ħω_n,k/2)[ w_n,k^σ(r_∥, z) a_n,k -(w_n,k^σ(r_∥, z) )^* a_n, -k^†]. 
Here the cavity mode functions are solutions of the Poisson equation and can be written as <cit.> w_n,k^σ(r_∥, z) = e^i k·r_∥/√(S)√(2/L_c(1+δ_n,0))[ iε^(x)_n,k, σsin (k_n z); iε^(y)_n,k, σsin (k_n z);ε^(z)_n,k, σcos (k_n z);] ,whereS=L_xL_yis the surface area of the rectangular cavityL_cis the cavity height, andk_n=π/L_c n, withn=0,1,2…is the wavevector along the cavity axis, where metallic boundary conditions are assumed.The indexσ=1,2is the polarization index.Considering only ISB transitions from the lowest subband, the interaction light-matter Hamiltonian for a single QW placed atz=z_qwisH_I= ∫ d^3 r 1/ϵ_0[ - D·P + P^2/2] = -iħω_P /2∑_n,k, σ, με^(z)_n,k, σ√(f^n_μ 0ω_n,k/ω_μ 0) (a_n, k -a_n, -k^†) ( b_μ -k + b_μk^†)+ ħω_P^2 /4∑_k∑_μ, μ'I_μμ '/√(ω_μ 0ω_μ' 0)(b_μk + b_μ -k^†) (b_μ' k + b_μ' -k^†)Here the main coupling parameter is the plasma frequency ω_P^2 = N e^2/ϵ_0 m SL_c, andmis the electron effective mass. The generalized oscillator strength is defined asf^n_μ 0 = 2mω_μ 0/ħ2/1+δ_n0[ ∫_-∞^∞dz ζ_μ 0(z)cos( k_n z ) ]^2.Forn=0it coincides with the usual oscillator strength of theμ-th dipole transitionf_μ0= 2m ω_μ0 z_μ0^2/ħ, where z_μ 0 = ⟨μ | z | 0|=⟩∫_-∞^∞dz ζ_μ 0(z)is the ISB dipole matrix element. At highern>0, it also contains all the multipoles matrix elements, as it has to be by interacting with a non-homogeneous electric field. The higher then-th mode is the more multipolar will be the ISB excitation.In the last term of Eq. (<ref>) we introduced theP^2-interaction tensor strengthI_μμ ' = 4m^2 L_c/ħ^2ω_μ 0ω_μ' 0∫_-∞^∞dz ζ_μ 0(z)ζ_μ' 0(z)= 4m^2 L_c/ħ^2ω_μ 0ω_μ' 0∫_-∞^∞dz dz' δ(z-z')ζ_μ 0(z)ζ_μ' 0(z')= 4m^2 L_c/ħ^2ω_μ 0ω_μ' 0∑_n=0∫_-∞^∞dz ζ_μ 0(z) χ_n(z)×∫_-∞^∞dz ζ_μ ' 0(z) χ_n(z),where in the last equality we introduced a complete basis such that∑_n=0χ_n(z)χ_n(z') = δ(z-z').Truncating the ISB transitions toμ=1only, we suppress also the indexμhavingb_μ, k↦b_kand definingω_qw = ω_10andf^n_qw = f^n_1 0. The interaction Hamiltonian becomesH_I ≈ -iħω_P /2∑_n, k, σε^(z)_n,k, σ√(f^n_ qwω_n,k/ω_ qw) (a_n, k -a_n, -k^†) ( b_ -k + b_k^†) + ħω_P^2 /4ω_ qw∑_k, nf_ qw^n(b_k + b_ -k^†) (b_k + b_ -k^†)where we have usedχ_n(z) = √(2/(L_c(1+δ_n,0)))cos(k_n z)<cit.>.The ISB transition couple mostly with the so-called TM_0mode <cit.>, which corresponds to then=0mode, entirely polarized along the cavity axis (z-axis), withε^(z)_0,k, σ=δ_z,σ. The light-matter Hamiltonian can be further simplified toH ≈ħω_ qw∑_k b_k^† b_k + ∑_kħω_k a_k^† a_k -iħω_P /2∑_k√( f_ qwω_k/ω_ qw) (a_k -a_ -k^†) ( b_ -k + b_k^†)+ ħω_P^2 /4ω_ qw∑_k f_ qw(b_k + b_ -k^†) (b_k + b_ -k^†),where we have suppressed thenindex everywhere. After all these passages we are left with an effective light-matter Hamiltonian describing the interaction with only the TM_0cavity mode <cit.>, which can be rewritten asH_ TM_0 = H_ qw + L_c∫ d^2 r [ ( D(r_∥) - P(r_∥) )^2/2 ϵ_0 - ϵ_0 c^2/2A(r_∥) ∇^2 A(r_∥)].Here the displacement and the vector potential fields are scalar variables, meant to describe thez-component of the TM_0modes, which are independent of the position along thez-axis (perpendicular to the cavity plane) and thus depends only from the in-plane positionr_∥D(r_∥ ) = i ∑_k√(ϵ_0 ħω_k/2SL_c) e^i k·r_∥(a_k -a_-k^†)A(r_∥ ) = ∑_k√(ϵ_0 ħ/2SL_cω_k) e^i k·r_∥(a_k +a_-k^†),with[a_k, a_k '^†] = δ_k,k '. 
The effective polarization density coupled to the TM_0modes is also a scalar quantity, and is given byP(r_∥) = ez_10√(N)/SL_c∑_ke^i k·r_∥( b_k + b_-k^†),It is worth noticing that nor the electromagnetic field densities nor the polarization densities of this TM_0-effective-description depends from thez-coordinate and so the overall physics does not depend from the position of the QW inside the cavity or its distance with respect to one or the other metallic plate. This is in principle in contradiction with basics electromagnetism, where any charge configuration between metallic plates would generates image charges which will affect its behaviour as a function of the distance from the plate <cit.>. This contradiction emerges as a consequence of the truncation to the TM_0mode only, for which we have discarded all the information regarding the presence of image charges and eventual Coulombic corrections. However we argue that this corrections are most often negligible for our specific aims. § POLARITON HAMILTONIAN FOR TWO STACKED WELLSSince the polarization of the matter inside the cavity is given by the sum over all its individual components, i.e. as a sum over all different quantum wellsP(r) =∑_i=1^N_ qwP_i(r),and each polarization do not overlap with the others∫d^3rP_i(r)·P_j(r) = 0ifi≠j, we can easily generalize the Hamiltonian in Eq. (<ref>) to a multi-wells setup. Notice that this condition of non-overlapping polarization densities is exactly the condition of localized dipoles used in the standard applications of the dipole picture <cit.>. As a consequence the direct Coulomb interaction between different QWs disappears in favour of a fully local mediated interaction through the dynamical cavity field.In the simplest case of two QWs, the dresser and emitter, as in the main text, the resulting Hamiltonian isH = H_ d + H_e + L_c∫ d^2 r [ D^2/2 ϵ_0 - ϵ_0 c^2/2A∇^2 A ] + L_c∫ d^2 r 1/ϵ_0[ - D P_ d + P_ d^2/2] + L_c∫ d^2 r 1/ϵ_0[ - D P_e + P_e^2/2].Since the two QWs have different electron numberN_d,N_e, they also have different plasma frequencies. These can be cast into the two Rabi frequenciesΩ_ d^2 = f_ dN_ d e^2/ϵ_0 m SL_c, Ω_e^2 = f_e N_e e^2/ϵ_0 m SL_c.Notice that here we are using all the definitions of Sec. <ref> replacing all the subscriptsqw↦d, e.Introducing the creation-annihilation operators like in Sec. <ref>, the total cavity QED Hamiltonian for the dresser and emitter QWs is thus given byH =ω_ d∑_k d_k^†d_k + ω_e∑_k b_k^†b_k + ∑_kω_k a^†_ka_k-iħΩ_ d/2∑_k√(ω_k/ω_ d)( a_k - a_-k^†)( d_-k + d_k^†) -iħΩ_e/2∑_k√(ω_k/ω_e)( a_k - a_-k^†)( b_-k + b_k^†) + ħω_ d^2/4ω_ d∑_k( d_-k + d_k^†)( d_-k + d_k^†) + ħω_e^2/4ω_e∑_k( b_-k + b_k^†)( b_-k + b_k^†). Assuming that the emitter polarization is always small, such that it never goes in the ultrastrong coupling regime we can neglect the∼P_e^2-term of the emitter, which is the last term in Eq. (<ref>). In this way we recover the cavity-dresser-emitter Hamiltonian in the main text.§ POLARITONS FROM CANONICAL TRANSFORMATIONSHere we consider the cavity-dresser Hamiltonian only defined in Eq. 
(<ref>). In this section we will suppress the vector notation for the in-plane wavevector, k ↦ k, in order to simplify the notation.

We then reintroduce real canonical coordinates by defining the quadrature operators

D_k = -i√(ω_k/2)(a_k - a_-k^†), A_k = 1/√(2ω_k)(a_k + a_-k^†), Π_k = -i√(ω_d/2)(d_k - d_-k^†), X_k = 1/√(2ω_d)(d_k + d_-k^†).

The cavity-dresser Hamiltonian becomes

H_c-d = ∑_k [Π_k^2/2 + (ω_d^2 + Ω_d^2)/2 X_k^2] + ∑_k [D_k^2/2 + ω_k^2/2 A_k^2] + ∑_k Ω_d D_k X_k.

We make a canonical transformation (just a re-labelling),

D̃_k = -ω_k A_k, Ã_k = D_k/ω_k.

We then have

H_c-d = ∑_k [Π_k^2/2 + (ω_d^2 + Ω_d^2)/2 X_k^2] + ∑_k [D̃_k^2/2 + ω_k^2/2 Ã_k^2] + ∑_k Ω_d ω_k Ã_k X_k.

The Hamiltonian in this form can be cast into a matrix by considering H_c-d = (1/2) v^T M v, where v^T = (X_k, Ã_k, Π_k, D̃_k) and

M = [ω_d^2 + Ω_d^2, Ω_d ω_k, 0, 0; Ω_d ω_k, ω_k^2, 0, 0; 0, 0, 1, 0; 0, 0, 0, 1].

Diagonalising M is now equivalent to diagonalising the Hamiltonian. We can achieve this by considering the unitary transformation

U = [R, 0_2×2; 0_2×2, R],

where R is a 2×2 rotation,

R = [cos θ, sin θ; -sin θ, cos θ].

In order to further simplify the expressions, we make the dependence on the wavenumber k implicit. Moreover, we define

Ω_X = ω_d^2 + Ω_d^2, G = Ω_d ω_k, Ω_A = ω_k^2,

so that

M = [Ω_X, G, 0, 0; G, Ω_A, 0, 0; 0, 0, 1, 0; 0, 0, 0, 1].

We then find

cos θ = √((1 + (Ω_X - Ω_A)/√((Ω_X - Ω_A)^2 + 4G^2))/2) = √((ω_up^2 - ω_k^2)/(ω_up^2 - ω_lp^2)), sin θ = √((1 - (Ω_X - Ω_A)/√((Ω_X - Ω_A)^2 + 4G^2))/2) = √((ω_k^2 - ω_lp^2)/(ω_up^2 - ω_lp^2)),

with the diagonal matrix

M' = [ω_up^2, 0, 0, 0; 0, ω_lp^2, 0, 0; 0, 0, 1, 0; 0, 0, 0, 1],

where

ω_up/lp^2 = (Ω_X + Ω_A)/2 ± (1/2)√((Ω_X - Ω_A)^2 + 4G^2).

Working in the canonical representation has the advantage that, together with the eigenfrequencies, we have full access to the hybridization of the degrees of freedom due to the USC regime. Differently from previous works <cit.>, we can condense the full knowledge of the four Hopfield coefficients <cit.> into a single mixing angle θ_k. In Fig. <ref> we show the behaviour of the canonical Hopfield coefficients as a function of the in-plane wavenumber k and the dresser Rabi frequency Ω_d. When the dresser is only weakly doped and Ω_d ≪ ω_d, we observe a very small hybridization (cos ∼ 1 and sin ∼ 0, or vice versa), which becomes substantial only when the TM_0 mode is resonant with the dresser frequency at ck = ω_d (where cos ∼ sin ∼ 1/√2). On the contrary, when the dresser is in the USC regime Ω_d ≃ ω_d, the light-matter hybridization becomes important over a large range of wavenumbers.

The USC Hamiltonian in the new diagonal variables describes two uncoupled harmonic oscillators, representing the upper/lower polaritons,

H_c-d = Λ_up^2/2 + ω_up^2/2 ξ_up^2 + Λ_lp^2/2 + ω_lp^2/2 ξ_lp^2,

where

(ξ_up; ξ_lp; Λ_up; Λ_lp) = U (X; Ã; Π; D̃).

Evidently the eigenstates of Ham. (<ref>) are given by the polaritonic operators

p_up/lp = Λ_up/lp/√(2ω_up/lp) - i√(ω_up/lp/2) ξ_up/lp.
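A quick numerical check (sample values, our addition) that the rotation R(θ), with the cos θ quoted above, indeed diagonalizes the upper-left 2×2 block of M:

(* rotation-angle check on the 2x2 block of M *)
ClearAll[OX, OA, G2, th, rot];
OX = 2.; OA = 1.; G2 = 0.8;     (* sample values of Omega_X, Omega_A, G *)
th = ArcCos[Sqrt[(1 + (OX - OA)/Sqrt[(OX - OA)^2 + 4 G2^2])/2]];
rot = {{Cos[th], Sin[th]}, {-Sin[th], Cos[th]}};
Chop[rot . {{OX, G2}, {G2, OA}} . Transpose[rot]]
(* -> diagonal matrix {{wup^2, 0}, {0, wlp^2}} *)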
The new polaritonic variables represent the correct degrees of freedom to describe the cavity-dresser system, and consequently all physical quantities must be rewritten in this basis. Since the coupling between the cavity and the emitter is given by

H_int,e-c = -1/ϵ_0 ∫ d^3r D(r)·P_e(r),

for our aims we mainly need to transform the cavity electric displacement field, which can be rewritten, following Sec. <ref> of the SM and using the transformation in Eq. (<ref>), as

D(r) = i√(ϵ_0ħ/(2SL_c)) ∑_k ω_k e^ik·r_∥ (sin θ_k/√(ω_up,k) p_up,k + cos θ_k/√(ω_lp,k) p_lp,k) + h.c..

Using the expression for the emitter polarization given in the main text, and assuming that the emitter is weakly doped, we can perform the rotating-wave approximation (RWA) in the emitter-cavity interaction in Eq. (<ref>). It is worth noticing that the RWA cannot be implemented in Eq. (<ref>) by simply discarding the terms ∼(a_k b_k), (a_-k^† b_-k^†); rather, it requires switching to the polaritonic picture and considering the electric displacement field given by Eq. (<ref>). This is very similar to what happens in the open driven/dissipative description of the USC regime, where one has to switch to the polaritonic picture in order to identify the positive/negative frequency operators that, coupling to the external bath, form the correct jump operators of the system <cit.>.

For the sake of completeness, it is also worth calculating the dresser polarization in the polariton basis, which will be useful in the next sections. It reads

P_d(r) = √(ϵ_0ħ/(2SL_c) Ω_d^2/ω_d) ∑_k e^ik·r_∥ (√(ω_up,k/ω_d) cos θ_k p_up,k + √(ω_lp,k/ω_d) sin θ_k p_lp,k) + h.c.

§ HYBRIDIZATION ANGLE AND VACUUM OBSERVABLES

The interest in the hybridization angle θ_k is not limited to understanding the cavity-dresser components of the polariton excitations: it is also linked to understanding how the vacuum of quantum electrodynamics is modified by the presence of matter. Indeed, by using again the canonical formalism in Sec. <ref> of the SM, we can compute the expectation value of any observable over the USC polaritonic vacuum |vac⟩, defined by p_up/lp,k |vac⟩ = 0. Using Eqs. (<ref>)-(<ref>) we can calculate the vacuum fluctuations of the cavity electric field, considering that E_TM_0(r) = (D(r) - P_d(r))/ϵ_0. We thus have that

⟨vac|E_TM_0,k^2|vac⟩ = (1/ϵ_0^2)[⟨vac|D_k^2|vac⟩ + ⟨vac|P_d,k^2|vac⟩] = ℰ_k^2 [ω_k/ω_up,k sin^2 θ_k + ω_k/ω_lp,k cos^2 θ_k + Ω_d^2/ω_d^2 (ω_lp,k/ω_k sin^2 θ_k + ω_up,k/ω_k cos^2 θ_k)],

where ℰ_k^2 = ħω_k/(2ϵ_0 S L_c). We immediately notice from the last line of this formula that the electric field fluctuations take a large contribution from the fluctuations of the dresser polarization, which are given by the intrinsic fluctuations of matter. Moreover, after a few algebraic steps, we have that

⟨vac|P_d,k^2|vac⟩ = Ω_d^2/(ω_d ω_k) ⟨vac|D_k^2|vac⟩,

from which we arrive at

⟨vac|E_TM_0,k^2|vac⟩ = (1/ϵ_0^2)[1 + Ω_d^2/(ω_d ω_k)] ⟨vac|D_k^2|vac⟩.

It is worth noticing that, being a gauge non-invariant quantity, the electric displacement is sometimes considered of obscure and confusing physical significance <cit.>. However, we can see here that, in the dipole picture, the electric displacement is directly related to the TM_0 electric field fluctuations and thus provides a good proxy to explore the USC modifications of the electric field fluctuations.
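Reusing the numerical sketch of the main text (wup, wlp, cosT2, dFluct, with ω_d = Ω_d = 1), the last two relations can be verified directly; here pFluct is the dresser-polarization bracket above, in units of (Ω_d^2/ω_d^2)(ϵ_0ℰ_k)^2:

(* check <P_d^2> = (Od^2/(wd wk)) <D^2> and the resulting E-field fluctuations *)
pFluct[wk_] := (wlp[wk]/wk) (1 - cosT2[wk]) + (wup[wk]/wk) cosT2[wk];
Chop[pFluct[0.7] - (wd/0.7) dFluct[0.7]]    (* -> 0: the <P_d^2> relation holds *)
{dFluct[0.7] + (Od^2/wd^2) pFluct[0.7],
 (1 + Od^2/(0.7 wd)) dFluct[0.7]}           (* the two E-fluctuation forms agree *)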
Another interesting example that we report for completeness is the cavity virtual photon populationN_ ph, k = ⟨ vac |a_k^† a_k|vac|=⟩sin^2 θ_k/4ω_kω_ up, k^2 + ω_k^2/ω_ up, k + cos^2 θ_k/4ω_kω_ lp, k^2 + ω_k^2/ω_ lp, k - 1/2,and the bare-dresser virtual excitation populationN_ d, k = ⟨ vac |d_k^† d_k|vac|=⟩sin^2 θ_k/4ω_ dω_ lp, k^2 + ω_ d^2/ω_ lp, k + cos^2 θ_k/4ω_ dω_ up, k^2 + ω_ d^2/ω_ up, k - 1/2. For many years, these quantities were at the center of the discussions around polaritonic vacuum observables <cit.>.However, their individual relevance is now considered marginal, since their physical meaning explicitly depends from the chosen representation <cit.>. They are important only when correlated withphysical gauge invariant quantities.An example of gauge-invariant quantities is the differential zero point frequency of the systemΔω_ZP<cit.>. This is obtained subtracting the bare total zero point frequency for vanishing light-matter coupling from the interacting one. Taking the vacuum expectation value of the cavity-dresser Hamiltonian in Eq. (<ref>) we have thatΔω_ ZP = ω_ up, k + ω_ lp, k/2 - ω_k+ω_ d/2 = ω_kN_ ph, k + ω_ d N_ d, k + Ω_ dN_ int, k.Here the last term represents the interaction energy, defined byN_ int, k =i/2√(ω_k/ω_d)⟨ vac | (a_k^† -a_-k^) (d_-k^ + d_k^†) |vac⟩ + ⟨ vac | Ω_d/4ω_d(d_k^ + d_-k^†) (d_k^ + d_-k^†) |vac⟩ .It is worth noticing that this term contains both the cavity-dresser interaction and the dresser self interaction, which is notoriously known as theP^2-term <cit.>, responsible of the so-called polariton gap.Interestingly, for the resonant wavevectork_res, such thatω_k_res = ω_d, the interaction energy exactly vanishesN_ int, k_ res = 0,because the positive dresser self interaction (P^2-term) exactly compensates the negative cavity-dresser contribution in Eq. (<ref>). As a consequence the differential zero point frequency is completely determined by the cavity and dresser virtual excitation, taking the simple expressionN_ ph, k_ res = N_ d, k_ res = 1/2(√(1+Ω_ d^2/4ω_ d^2) - 1 ).In this case the virtual photon numberN_ph, k_resrepresents the electromagnetic energy that can be released by an instantaneous suppression of the cavity-dresser coupling <cit.>. § CLASSICAL THEORY OF TRANSMISSION SPECTRA Here we derive the linear response theory following from our cavity-dresser-emitter system. We start by considering the total Hamiltonian given in Eq. (<ref>) and rewriting it using the quadrature canonical representation for the cavity, the dresser and the emitter degrees of freedom as given in Sec. <ref> of the SMH = 1/2 v^T M_ tot v,wherev^T = (X_d(k), X_e(k), Ã_k, Π_d(k), Π_e(k), D̃_k)is the array containing the canonical variables obtained generalising the definition in Eq. (<ref>) to the emitter, while the Hamiltonian matrix is given byM_ tot = [ ω_ d^2 + Ω_ d^2 0 Ω_ dω_k 0 0 0; 0 ω_e^2 + Ω_e^2Ω_eω_k 0 0 0; Ω_ dω_kΩ_eω_k ω_k^2 0 0 0; 0 0 0 1 0 0; 0 0 0 0 1 0; 0 0 0 0 0 1; ] From here we can write down the equation of motion of the system (using the Hamilton equations), which match the standard dielectric description from classical electromagnetism. In the frequency domain, the equation of motion readℳ_k(ω) ·[ A_k; X_ d(k);X_e(k); ] = 0where the dynamical matrix is given byℳ_k(ω) = [ ω_k^2 - (ω+iγ/2)^2 iωΩ_ diωΩ_e;-iωΩ_ d ω_ d^2 - (ω+iκ_ d/2)^2 -Ω_ dΩ_e; -iωΩ_e -Ω_ dΩ_e ω_e^2 - (ω+iκ_e/2)^2;]Notice that here we introduce the cavity, dresser and emitter lossesγ, κ_d, κ_ein a phenomenological way, just inserting a viscous damping in the equations. 
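A compact numerical sketch of this linear-response problem (our illustrative parameters, reusing wd, Od and wlp from the sketches above; oe is a small emitter Rabi frequency): one builds ℳ_k(ω) and plots the modulus of [ℳ_k^-1]_00, whose peaks locate the three polariton resonances.

(* dynamical matrix of the cavity-dresser-emitter system and its response *)
mDyn[wk_, w_, ga_, kd_, ke_, we_, oe_] := {
   {wk^2 - (w + I ga/2)^2,  I w Od,                 I w oe},
   {-I w Od,                wd^2 - (w + I kd/2)^2,  -Od oe},
   {-I w oe,                -Od oe,                 we^2 - (w + I ke/2)^2}};
resp[wk_, w_] :=
  Abs[Inverse[mDyn[wk, w, 0.02, 0.02, 0.02, wlp[wk], 0.05]][[1, 1]]];
Plot[resp[1., w], {w, 0.3, 2.2}, PlotRange -> All]
(* three peaks: the upper polariton plus the emitter-split doublet around wlp *)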
The cavity transmission reads <cit.> T_c(k, ω) = γω_k [ℳ^-1_k ]_00. § DETAILS ON THE HYBRIDIZATION ANGLE MEASUREMENT PROTOCOL Here we give a detailed description of the protocol to measure the cavity-dresser hybridization angle from the emitter-cavity-dresser spectrum.We callω̅_e+, ω̅_e-the frequencies measured from the cavity transmission at the minimal anticrossing between the emitter and the cavity-dresser polaritons, whilek̅_xis the wavevector realizing the minimal anticrossing (we will keep using the bar·̅only for quantities which are directly measured from an eventual experiment, and distinguish them from quantities derived from the theory). We can measurek̅_x, ω̅_e+, ω̅_e-directly from a transmission (reflection) experiment by only detecting the two peaks around the emitter frequency.Sinceω̅_e± ≃ω_up/lp ±Ω_up/lp, the upper/lower polariton frequency resonant with the emitter is given byω̅_ up/lp = ω̅_e+ + ω̅_e-/2.(Notice that the distinction between upper and lower polariton is also experimentally well defined since the two polaritons are separated by a gap, making them clearly distinguishable). The measured emitter-polariton Rabi splitting is then given byΩ̅_ up/lp = ω̅_e+ - ω̅_e-/2. Following Eq. (<ref>), we have that the cavity-emitter anticrossing becomes a probe of the hybridization angleθ_k, through the relation[ ω̅_ upΩ̅_ up]_k/[ ω̅_ lpΩ̅_ lp]_k = tanθ_k,where[·]_kindicates that the data inside the square brackets are measured from the minimal anticrossing happening at the wavevectork̅_x = k. Repeating this protocol many times while sweepingω_eand collecting all the data for each emitter resonance, we can reproduce the tangent of the mixing angle using Eq. (<ref>), effectively realizing a full tomography of the USC cavity-dresser polaritons.This USC tomographic approach is naturally limited by the quality factors of the dresser, emitter and cavityQ_d/e=ω_d/e / κ_d/e,Q_k = ω_k / γ, for which if the coupling between the emitter and the upper polariton at low wavevector is too small to be resolved,Ω̅_up/lp≪γ,κ_d/e, it is impossible to correctly identify the peaks in the transmission. In Fig. <ref> we show an example of this reconstruction mechanism. Even if here we show a specific example regarding ISB transition in a TM_0cavity with linear dispersion, it is important to highlight that our formalism and the resulting tomography protocol are independent of the specific cavity QED implementation. Not only does our description apply to any different cavity dispersions, by only inserting the specificω_kin all the equations, but it applies also to any device that couples to the cavity electric field through a dipole transition. A more detailed discussion about the generality of our analysis will be contained in Ref. <cit.>.In the small wavevector region of Fig. <ref> the detection is limited by the vanishing coupling to the upper polariton, as described in Fig. <ref>(right panel). In such circumstances our algorithm to reconstruct the Rabi splitting gives artificially larger data, predicting a wrong hybridization angle. Also in the other regions our data in Fig. 
<ref> are affected by numerical noise from the error committed in the peak detection due to the broadening of the transmission peaks given by the linewidthsγ, κ_d, κ_e.Despite that we could have completely avoided this noise by increasing our simulation's numerical accuracy, we decided to keep it in order to simulate how realistic data could be treated in an experiment and to show that our protocol works even in the presence of noisy data.§ DIPOLAR CQED WITH SLABS In this appendix we re-derive the whole theory in the main text starting only from Maxwell equations.∇⃗·E = ρ/ϵ_0∇⃗×B = 1/c^2( J/ϵ_0 + ∂/∂ tE)∇⃗·B = 0∇⃗×E = - ∂/∂ tB Since we have only dipolar matter we have thatρ = - ∇⃗·P_ tot,whereP_tot = ∑_a P_ais the total polarization vector of all the matter in the system, and the current is given byJ = ∂_t P_ tot Taking the rotor of Eq. (<ref>) and combining with Eq. (<ref>) we have that- ∇^2 E + ∇⃗(∇⃗·E) = - 1/c^2∂_t^2 E - 1/c^2ϵ_0∂_t^2 P_ tot.To this point, we cannot solve this equation, since it is still coupled with the Gauss law∇⃗·E = -∇⃗·P_ tot/ϵ_0. We then make an arbitrary split in the electric field, defining a longitudinal and transverse partE = E^∥ + E^⊥,whereE^∥ = ∇⃗( G⋆∇⃗·P_ tot )/ϵ_0andE^⊥ = ∂_t A.HereGis the Green's function of the Poisson equation-∇^2G(r, r') = δ(r - r'), and⋆denotes the convolution operator. Evidently∇⃗×E^∥ = 0and the vector is longitudinal by definition, even in the standard sense <cit.>. We then takeAas a transverse vector, having the property∇⃗·A = 0. We highlight that these definitions are general and true in a cavity setup or in any other confined geometry.Using these definitions for the electric field we arrive at rewriting the Maxwell equations in only one equation- ∇^2 E^⊥ + 1/c^2∂_t^2 E^⊥ =- 1/c^2ϵ_0∂_t^2 [ P_ tot + ∇⃗( G⋆∇⃗·P_ tot )].The part in square brackets on the left-hand side is the transverse projected polarizationP_ tot^⊥ = P_ tot + ∇⃗( G⋆∇⃗·P_ tot ),with the property∇⃗·P_ tot^⊥ = 0.While the longitudinal projection isP_ tot^∥ = - ∇⃗( G⋆∇⃗·P_ tot ) In order to understand the properties of the resulting electric field, we need to specify the dynamics of our matter system. Following the standard literature <cit.> we consider an equation of motion for the polarization of each constituent of the system∂_t^2 P_a +L_a(P_a ) = Ω_a^2ϵ_0E (r_a ).HereL_a(·)is a linear differential operator defining the dynamics of the polarization of each constituent,Ω_ais its Rabi frequency (or, equivalently, plasma frequency).Using all the definitions of the electric field introduced before we arrive to∂_t^2 P_a +L_a(P_a ) = -Ω_a^2∑_b P_b^∥ (r_a) + Ω_a^2ϵ_0E^⊥(r_a)- ∇^2 E^⊥ + 1/c^2∂_t^2 E^⊥ =- 1/c^2ε_0∂_t^2 ∑_a P_a^⊥(r ). We now specialize in a cavity system, truncating the description to the only TM_0modes <cit.>. One can directly check that the transverse projector (or delta transverse) is given by[ δ^⊥_ij(r , r')]_ TM_0 = 1/L_cδ_zjδ(r_∥ - r_∥' ),hereδ_zjis the Kronecker delta that selects the polarization direction only along thez-direction, which, in our convention, is the direction perpendicular to the parallel cavity plates. As a consequence, the longitudinal delta in the TM_0mode is given by[δ^∥_ij(r , r')]_ TM_0 = δ_zjδ(r_∥ - r_∥' ) δ(z-z') - 1/L_cδ_zjδ(r_∥ - r_∥' ).Notice that, whenz≠z'[δ^∥_ij(r , r')]_ TM_0 = - [ δ^⊥_ij(r , r')]_ TM_0. Here we further specialize in the case of multiple infinite slabs, localized at differentz-positions. 
Assuming harmonic dynamics,L_a(P_a )=ω_a^2 P_a, the equations in the TM_0subspace ink-space (in-plane) and frequency domain reduce to(ω_a^2 - ω^2 ) P_a, k = Ω_a^2∑_b≠ aP_b, k + Ω_a^2ϵ_0 E_ TM_0, k(c^2 k^2 - ω^2) E_ TM_0, k = ω^2/ϵ_0∑_a P_a, k.For simplicity from here on we suppress the vectorial notation onk. §.§ Effective dielectric permittivityTo solve the equations (<ref>)-(<ref>) we takeP_a, k = ϵ_0 E_ TM_0, k∑_bΠ_abΩ_b^2.The dipole susceptibility is given byΠ_ab = ℳ_ab^-1, where the dynamical matrix isℳ = [ ω_1^2 - ω^2-Ω_1^2-Ω_1^2 ...;-Ω_2^2 ω_2^2 - ω^2-Ω_2^2 ...;-Ω_3^2-Ω_3^2 ω_3^2 - ω^2 ...; ... ... ... ...; ] From these definitions, we find the relative electric susceptibility of the ISB multi-slabs setup asϵ_r(ω) = 1 + ∑_a,bΠ_abΩ_b^2.The cavity transmission in Eq. (<ref>) is equivalently given byT_c(k, ω) = γω_k/c^2k^2 - ω^2ϵ_r(ω) - iγω + γ^2/4.For example, in the specific case of two slabs (emittereand dresserd) we haveℳ = [ ω_ d^2 - ω^2-Ω_ d^2; -Ω_e^2ω_e^2 - ω^2;]and the dipole susceptibility is given byΠ =1/(ω_ d^2 - ω^2)(ω_e^2 - ω^2)-Ω_ d^2Ω_e^2××[ω_e^2 - ω^2 Ω_ d^2;Ω_e^2 ω_ d^2 - ω^2;]As a consequence, the relative permittivity is given byϵ_r(ω) = 1 + Ω_ d^2(ω_e^2 - ω^2) + Ω_e^2(ω_ d^2 - ω^2) + 2Ω_ d^2Ω_e^2/(ω_ d^2 - ω^2)(ω_e^2 - ω^2)-Ω_ d^2Ω_e^2It is worth noticing that when we take the limit of small Rabi (plasma) frequencies for the slabs,Ω_d,e ≪ω_d,ewe recover the usual relative permittivity for a couple of independent emittersϵ_r(ω) ≈ 1 + Ω_ d^2/ω_ d^2 - ω^2 + Ω_e^2/ω_e^2 - ω^2.§.§ Spontaneous emission as classical electromagnetic damping Here we draw a connection between the modified Purcell spontaneous emission described in the maintext and the electromagnetic damping arsing in the classical equations (<ref>)-(<ref>).We start by considering the equations for the dresser and the TM_0electric field mode(ω_ d^2 - ω^2) P_ d - Ω_ d^2ϵ_0 E_ TM_0 = Ω_ d^2 P_e , (c^2k^2 - ω^2) E_ TM_0 - ω^2/ϵ_0 P_ d = ω^2/ϵ_0 P_e ,coupled together to the emitter by(ω_e^2 - ω^2)P_e = Ω_e^2( ϵ_0 E_ TM_0 + P_ d).In this last equation we immediately recogniseD = ϵ_0 E_TM_0 + P_d, which is the only relevant degree of freedom that couples to the emitter. In this section, to simplify the notation, we completely suppress the indexkunless necessary.By solving Eq. (<ref>) we find thatϵ_0 E_ TM_0= χ_e | E(ω ) P_e P_ d= χ_e| d(ω ) P_ewhere the two susceptibilities are defined byχ_e | E(ω )= ω^2(ω̅_ d^2 - ω^2)/(c^2k^2-ω^2)(ω_ d^2-ω^2) - ω^2 Ω_ d^2 = ω^2 ω_ up^2 + ω_ lp^2 - c^2k^2 - ω^2 /(ω_ up^2 - ω^2) (ω_ lp^2 - ω^2), χ_e| d (ω ) = Ω_ d^2c^2k^2 /(c^2k^2-ω^2)(ω_ d^2-ω^2) - ω^2 Ω_ d^2 = Ω_ d^2c^2k^2 /(ω_ up^2 - ω^2) (ω_ lp^2 - ω^2),whereω̅_d^2 = ω_d^2 + Ω_d^2. The emitter dynamics can be rewritten exactly as(ω_e^2 - ω^2)P_e = Ω_e^2 [ χ_e | E(ω ) + χ_e| d (ω ) ]P_e Now we consider that the poles ofχ_e | E(ω),χ_e|d(ω)are lifted by the respective polariton linewidthsγ_up,lp. This is implemented by replacingω^2⟼ω^2 + iγ_up,lpωin the denominator of the two susceptibilities. 
In this way we can specialize to the Purcell regime, where γ_up,lp ≫ Ω_e. Combining this condition with the assumption that the emitter is almost resonant with one of the polaritons, for instance ω_e ≃ ω_up, we can expand the emitter dynamics and the susceptibilities around this pole, obtaining

χ_e|E(ω) ≈ (i/γ_up) ω_up ( ω_up,lp^2 - c^2k^2 )/( ω_up^2 - ω_lp^2 ) ,
χ_e|d(ω) ≈ (i/γ_up) ( Ω_d^2 c^2k^2/ω_up ) 1/( ω_up^2 - ω_lp^2 ) ,

from which

(ω_e^2 - ω^2) P_e^+ ≈ 2ω_e (ω_e - ω) P_e^+ = Ω_e^2 [ χ_e|E(ω) + χ_e|d(ω) ] P_e^+ ≈ i (Ω_e^2/γ_up) (ω_k^2/ω_up) ( ω_lp^2 - ω_k^2 )/( ω_lp^2 - ω_up^2 ) P_e^+ ,

where we adopt the notation P_e^+ to indicate that this equation describes only the dynamics of P_e around the pole ω ≃ +ω_e ≃ +ω_up. Rearranging the terms and taking the inverse Fourier transform, we finally find

i ∂_t P_e^+ = ω_e P_e^+ - i ( Ω_up^2/2γ_up ) P_e^+ .

One can repeat the reasoning for all the poles ω ≃ -ω_up, ω ≃ ±ω_lp, obtaining the same type of result. Reinterpreting P_e^+ as b_k, we obtain the equation shown in the main text. In this picture, the spontaneous emission that is normally found in radiation-reaction calculations in the presence of a continuum of available final photonic modes is replaced by the additional anticrossing between the emitter and the polariton branches. In analogy to the radiative-reaction picture of spontaneous emission <cit.>, this formulation provides a classical interpretation for the vacuum-induced modification of basic light-matter interaction processes.

Recent works on cavities involving several emitters have pointed out the important role of electrostatic interactions between the emitters, often neglected in standard cavity QED treatments. Even though they are not visible in the dipole-gauge Hamiltonian description of Eq. (<ref>), they naturally appear <cit.> at the level of the equations of motion for the field operators. After a lengthy but straightforward algebra, reported in Secs. <ref>-<ref> of the SM, these can be summarized in a dispersion relation of the form

ω^2 ϵ_r(ω) = c^2 k^2

with the effective dielectric permittivity

ϵ_r(ω) = 1 + [ Ω_d^2(ω_e^2 - ω^2) + Ω_e^2(ω_d^2 - ω^2) + 2Ω_d^2Ω_e^2 ] / [ (ω_d^2 - ω^2)(ω_e^2 - ω^2) - Ω_d^2Ω_e^2 ] ,

which incorporates the depolarization shift. This expression manifestly differs from a simple sum of the individual slabs' permittivities, ϵ_r(ω) ≈ 1 + Ω_d^2/(ω_d^2 - ω^2) + Ω_e^2/(ω_e^2 - ω^2), yet it can be justified (Sec. <ref> of the SM) in terms of a classical electrodynamic model of the device.
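As a concrete illustration, the effective permittivity and the cavity transmission written above are straightforward to evaluate numerically. The following is a minimal Python sketch; all parameter values are placeholders chosen only for demonstration and are not taken from this work:

```python
import numpy as np

# Placeholder parameters in arbitrary frequency units (not from this work)
w_d, w_e = 1.0, 1.3     # dresser and emitter resonance frequencies
O_d, O_e = 0.8, 0.05    # Rabi (plasma) frequencies: USC dresser, weak emitter
gamma = 0.02            # cavity linewidth
ck = 1.0                # photon dispersion omega_k ~ c*k at fixed momentum

def eps_r(w):
    """Two-slab effective permittivity (emitter e + dresser d)."""
    num = (O_d**2 * (w_e**2 - w**2) + O_e**2 * (w_d**2 - w**2)
           + 2.0 * O_d**2 * O_e**2)
    den = (w_d**2 - w**2) * (w_e**2 - w**2) - O_d**2 * O_e**2
    return 1.0 + num / den

def T_c(w):
    """Cavity transmission for the TM_0 mode at fixed in-plane momentum."""
    return gamma * ck / (ck**2 - w**2 * eps_r(w) - 1j * gamma * w
                         + gamma**2 / 4.0)

w = np.linspace(0.2, 2.5, 4001)
spectrum = np.abs(T_c(w))**2
# The peaks of `spectrum` locate the polariton branches dressed by the
# emitter; in the limit O_d, O_e << w_d, w_e, eps_r reduces to the sum of
# two independent Lorentzian contributions, as noted above.
```

Scanning ck then should trace out the anticrossing pattern between the emitter line and the polariton branches discussed in the main text.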
It may seem surprising that this classical treatment is able to quantitatively reproduce the Rabi splitting and the polariton linewidth effects that we have originally interpreted in a quantum language. This agreement is however natural once one realizes that the quantum fluctuations are directly related to a response function via the fluctuation-dissipation theorem, and that, for a quadratic Hamiltonian like our Eq. (<ref>), the Heisenberg equations of motion for the field operators exactly mirror the classical ones.

It was pointed out recently <cit.> that the role of vacuum effects may be marginal in most cases, being completely covered by the stronger and more impactful role of the electrostatic interactions, which are often neglected in standard cavity QED descriptions. We expect that the strength of the electrostatic forces between the two slabs (emitter and dresser) scales with the product of their densities, which is linked to their bare Rabi couplings through Eq. (<ref>), n_d × n_e ∼ Ω_d × Ω_e. This means that, when the cavity-dresser coupling is large, the electrostatic force between the two dielectric slabs is strong. However, in our Hamiltonian description in Eq. (<ref>) there is no trace of direct interactions between the dresser and the emitter, and one may wonder where these forces are in our model. The answer is hidden in the dipole-gauge Hamiltonian, as pointed out in several textbooks <cit.>. When the equations of motion are derived, it is clear that they correctly match the classical theory of dielectrics, including the electrostatic forces, as one can see from Apps. <ref>-<ref>. In this regard, all the spectra derived throughout the article in Figs. <ref>-<ref>, and consequently the emitter-polariton Rabi splitting described in Eq. (<ref>) and represented in Fig. <ref>, can be re-derived by considering the relative dielectric permittivity of two dielectric slabs given above. As detailed in App. <ref>, in the dresser USC regime, Ω_d ≳ ω_d, this dielectric permittivity is not simply the sum of the two individual slabs' permittivities, ϵ_r(ω) ≈ 1 + Ω_d^2/(ω_d^2 - ω^2) + Ω_e^2/(ω_e^2 - ω^2), but rather a mix between the two, controlled by the combined parameter Ω_d × Ω_e, which, as expected, represents the electrostatic forces mediated by the TM_0 mode. | http://arxiv.org/abs/2312.16287v1 | {
"authors": [
"Daniele De Bernardis",
"Gian Marcello Andolina",
"Iacopo Carusotto"
],
"categories": [
"quant-ph",
"cond-mat.mes-hall"
],
"primary_category": "quant-ph",
"published": "20231226190008",
"title": "Light-matter interactions in the vacuum of ultra-strongly coupled systems"
} |
Multilayer control of synchronization and cascading failures in power grids
Simona Olmi, Lucia Valentina Gambuzza, Mattia Frasca
January 14, 2024
====================

In this work, we propose a control scheme for power grids subject to large perturbations that cause the failure of a node of the grid. Under such circumstances, the system may lose synchrony and, in addition, a cascade of line failures can be triggered as an effect of the flow redistribution that activates the protection mechanisms equipped on each line of the grid. To devise a control action for addressing this problem, we adopt a multi-layer network-based description of the power grid that incorporates an overflow condition to model the possibility of cascading failures. The two other layers of the structure are devoted to the control: one implements the distributed proportional control law, and the other the integral control law. To exemplify the application of our model, we study the Italian high-voltage power grid for different parameters and topologies of the control layers.

Keywords Power grids. Swing equations. Control of complex networks. Cascading failures.

§ INTRODUCTION

Synchronization in power grids has become a classical example of the dynamics appearing in a system that can be modeled as a set of coupled nonlinear oscillators <cit.>. Such modeling provides a mathematical framework that, although not capturing all the multifaceted aspects required by a very detailed description of the power system, subsumes the main characteristics of the synchronization phenomenon in a tractable way <cit.>. This approach, which has been pursued in many works in the fields of nonlinear dynamics and control theory, provides valuable theoretical insights into the phenomenon, allowing one to characterize, for instance, the transition from the synchronous to the incoherent state (and vice versa), stability issues, and the classes of natural frequency distributions that lead to synchronization <cit.>.

Stable operation of power grids is achieved by maintaining a synchronous state in the entire network. Since power grids may be subjected to many different types of perturbations, the stability of the synchronous state is of utmost importance, and many papers have investigated it. In particular, several analytical results have been obtained for network-based power grid models. For instance, a detailed stability analysis was performed in <cit.> and in <cit.> for networks of classical Kuramoto oscillators (i.e., without inertia) with different topologies (namely, fully coupled networks in <cit.> and planar graphs in <cit.>). Regarding networks of rotators (i.e., Kuramoto oscillators with inertia), a stability analysis has been presented in <cit.> for globally coupled networks and chain structures. In particular, the stability analysis performed in <cit.> for networks with inhomogeneous damping but identical inertia has been extended to the case of inhomogeneous inertia and damping in <cit.>. A detailed stability analysis of a population of N heterogeneous rotators, randomly connected, has been reported in <cit.>, showing that stable and unstable solutions can be found before stabilizing the unstable ones with a control loop.

The behavior of a power grid in the presence of large perturbations, which can eventually lead to failures that propagate along the structure, is more difficult to study with analytical techniques. However, understanding cascading failures and devising mechanisms for their control is fundamental because of their economic and social impact.
The circumstance of cascading failures, as induced for instance bya fault localized in a line of the grid triggering subsequent failures, has been modeled with diverse approaches. These approaches can be classified into network-based structural methods; techniques based on either DC or AC power flow calculations; or models explicitly incorporating the dynamics of the power grids. In the first approach only the structural properties of the network of interconnections are taken into account to investigate the phenomenon of cascading failures <cit.> and devise strategies for their mitigation <cit.>. On the contrary, the second class of approaches relies on the calculation of the power flows from either DC or AC equations in order to provide a simple but tractable description of the electrical phenomena taking place in power grids<cit.>. The third class of approaches makes use of models with an explicit electro-mechanical description of the dynamics of power grids. In this case, simplified models are often adopted to obtain a representation of the power grid as a system of coupled oscillators, that could enable the use of techniques and tools from nonlinear dynamics, network science and control theory <cit.>. Although these models do not provide a very detailed description of a power system, they crucially incorporate the main characteristics of the dynamics of the generators, the loads and the mechanisms for line shut down, as the transient dynamics can induce failures not present in the quasi-static approximation. To this purpose, either models based on a structure preserving <cit.> or synchronous machine <cit.> description of the power grid can be used. The problem of analyzing and preventing cascading failures is even more relevant in grids with high penetration of variable renewable energy sources <cit.>. In two recent works <cit.>, two complementary problems in control of power grids subjected to faults have been investigated. In <cit.>, the goal of the control is to reduce the deviation from synchronization in case of faults perturbing the network dynamics. To achieve this goal, the power grid is represented as a multi-layer system <cit.> made of two layers: the first layer represents the physical layer where the electro-mechanical phenomena of the power grid take place, while the second layer acts as control.On the contrary, in <cit.> the problem considered is the mitigation of cascading failures, induced by the dynamical evolution of the grid after a fault due to some exogenous event. The problem is addressed employing, also in this case, a two-layer representation of the power grid, where, this time, the control layer takes as input the instantaneous oscillation frequency rather than its integral. Following the formalism introduced in <cit.>, the first approach can be mapped into a layer of distributed integral controllers, while the second one into a layer of distributed proportional controllers. However, the combined use of the two control layers remains unexplored. Our paper aims at filling this gap, taking into account that proportional and integral control actions (along with the derivative mode) are widely and successfully employed in the form of the so-called PID (proportional-integral-derivative) controller since more than 70 years in many industrial applications <cit.>. 
Although these mainly refer to controllers using a single output measurement and a single input actuator, in recent works this paradigm has been extended to distributed controllers <cit.>. Motivated by these considerations, in this paper we investigate the combined use of the two layers independently investigated in <cit.> and in <cit.> to control the deviation from synchronization in the presence of large perturbations leading to node removal, while simultaneously mitigating the onset of cascading failures. We adopt the multi-layer representation of the power grid, but consider three layers rather than two. The first layer represents the physical layer and is modeled with a set of swing equations (second-order Kuramoto oscillators) that also incorporate an overflow condition to take into account the intentional shut-down of a line to prevent overheating <cit.>. The other two layers of the multi-layer network implement proportional and integral distributed control.

Our results show that a complex interplay between the topology of the layers and the system parameters takes place, yielding scenarios where the two layers act either in a synergistic or in an antagonistic way. In addition, we find that it is difficult to derive general guidelines for the tuning of the parameters of the distributed controllers, so this step must be accomplished by producing, for the power grid under investigation, a map of the system behavior as a function of the gains of the two control layers.

The rest of the paper is organized as follows. In Sec. <ref> the model of the power grid is illustrated. In Sec. <ref> the analysis of the Italian high-voltage power grid is discussed. In Sec. <ref> the conclusions of the paper are drawn.

§ MULTI-LAYER MODEL OF THE POWER GRID

§.§ Physical layer

For the physical layer, we adopt a synchronous machine model <cit.> that incorporates an overflow condition eventually triggering cascading failures <cit.>. According to this model, each node is associated with a rotating machine whose dynamics is described by a swing equation. Let 𝒩 be the set of nodes with |𝒩|=N, 𝒩_g (with |𝒩_g|=N_g) the subset of generator nodes, and ℰ (with |ℰ|=E) the set of links describing the lines that connect the units of the power grid. To each node i, with i=1,…,N, one associates a mechanical rotor angle θ_i(t), which corresponds to the voltage phase angle, and its angular velocity ω_i=dθ_i/dt, relative to a rotating reference frame with velocity Ω=2π f (f=50 Hz or f=60 Hz, depending on the geographical area under study). The dynamics of these variables are described by the swing equations <cit.>:

dθ_i/dt = ω_i ,
I_i dω_i/dt = P_i - γ_i ω_i + ∑_(i,j)∈ℰ' K_ij sin(θ_j - θ_i) ,

where i=1,…,N. The parameters I_i, γ_i, and P_i represent the rotating machine inertia, damping coefficient, and power. The power can be either positive, P_i>0, for nodes that act as generators, injecting power into the system, or negative, P_i<0, for nodes that act as loads, absorbing power from the system. Here ℰ' ⊆ ℰ represents the set of the operating (i.e., not failed) links of the power grid. The parameters K_ij are the elements of the weighted adjacency matrix describing its topology, and are related to the electrical quantities characterizing the nodes by the relationship K_ij=B_ij V_i V_j, where B_ij is the susceptance between nodes i and j, and V_i and V_j are the voltage amplitudes. Eqs. (<ref>) hold under several assumptions that allow one to simplify the power-flow equations governing the electrical system.
In particular, the voltage amplitudes V_i are assumed to be constant, the ohmic losses negligible, and the variations in the angular velocities, ω_i, small compared to the reference Ω.

Important quantities to determine the occurrence of failures due to line overloads are the flows, defined as follows:

F_ij(t) = K_ij sin(θ_j(t) - θ_i(t))   ∀ (i,j) ∈ ℰ .

The maximum flow that a line can accommodate is F_ij=K_ij. Since ohmic losses induce overheating in the lines, connections are shut down when the flow exceeds a fraction α∈[0,1] of its maximum, which corresponds to setting the line capacity as C_ij=α K_ij, where α is a tunable parameter of the model. Hence, the overload condition for a generic line (i,j) is given by:

|F_ij(t)| > C_ij = α K_ij .

When this condition is met at some time, the line is shut down. This results in a change of the topology that elicits a further dynamical redistribution of the flows and eventually triggers other line protection mechanisms, yielding a cascading failure. The purpose of the control layers, described in the next sections, is to prevent these failures while maintaining synchronization.

§.§ Control layers

In our multi-layer structure, control is implemented in two layers whose nodes are in one-to-one correspondence with the units of the physical layer. The overall structure is thus composed of three layers, as schematically shown in Fig. <ref>. The control input can be viewed as the signal associated with the inter-layer link between corresponding nodes of the multi-layer network. Intra-layer links of the physical layer, instead, represent the lines of the grid (that is, the channels through which energy exchange between two power units occurs). Finally, the intra-layer links of each control layer represent the flows of information used to build the control law.

In the presence of the two control layers, the equations describing the dynamics of the power units can be written as <cit.>:

dθ_i/dt = ω_i ,
I_i dω_i/dt = P_i - γ_i ω_i + ∑_(i,j)∈ℰ' K_ij sin(θ_j - θ_i) + u_i .

The term u_i(t) represents the control input, and is here obtained as the sum of two contributions:

u_i = u_i^P + u_i^I .

The term u_i^P(t) is a distributed action obtained by setting, for each link, a control law proportional to the difference of the frequencies at its extremes <cit.>:

u_i^P = G_P ξ_i^P ∑_j=1^N a_ij^P (ω_j - ω_i) .

The term u_i^I is also a distributed action that, however, implements an integral action, defined by specifying the dynamics of the term itself <cit.>:

u̇_i^I = G_I ξ_i^I ∑_j=1^N a_ij^I (ω_j - ω_i) .

In both Eqs. (<ref>) and (<ref>), ξ_i^P and ξ_i^I are binary variables allowing one to select which units of the power grid are subject to control, that is, ξ_i^P=1 (ξ_i^I=1) if node i is controlled by a proportional (integral) control action, and ξ_i^P=0 (ξ_i^I=0) otherwise. With the term pinning control we refer to the case when not all nodes are controlled, and we call pinned nodes the nodes for which ξ_i^h=1 with h={P,I}. G_P and G_I represent the gains of the two layers, and a_ij^P and a_ij^I are the coefficients of the adjacency matrices A^P={a_ij^P} and A^I={a_ij^I} encoding the topologies of the two layers.

In summary, the dynamics of the power grid with the control law (<ref>) is described by:

dθ_i/dt = ω_i ,
I_i dω_i/dt = P_i - γ_i ω_i + ∑_(i,j)∈ℰ' K_ij sin(θ_j - θ_i) + G_P ξ_i^P ∑_j=1^N a_ij^P (ω_j - ω_i) + G_I ξ_i^I ∑_j=1^N a_ij^I (θ_j - θ_i) ,

where i=1,…,N.
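To make the model concrete, the controlled dynamics above can be integrated with a standard Runge-Kutta scheme while checking the overload condition at every step (the authors report C/MATLAB implementations with a 4th-order Runge-Kutta scheme and time step dt=0.01). The following is a minimal Python sketch on a small random network; the network, gains, and powers are placeholder choices and do not correspond to the Italian grid data:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20                                    # toy network size (illustrative)
A = np.triu((rng.random((N, N)) < 0.2).astype(float), 1)
A = A + A.T                               # undirected physical layer
K0 = 11.0 * A                             # pristine couplings K_ij = K a_ij
K = K0.copy()                             # working couplings (failed lines -> 0)
AP, AI = A.copy(), A.copy()               # toy choice: control layers = physical
inertia, gam, alpha = 10.0, 1.0, 0.8      # I_i, gamma_i, capacity fraction
GP, GI = 1.0, 0.5                         # control gains (placeholder values)
xiP, xiI = np.ones(N), np.ones(N)         # all nodes pinned in this toy case
P = rng.choice([-1.0, 1.0], N)
P -= P.mean()                             # balanced powers: sum_i P_i = 0

def rhs(theta, omega, uI):
    dth = theta[None, :] - theta[:, None]         # theta_j - theta_i
    dom = omega[None, :] - omega[:, None]         # omega_j - omega_i
    uP = GP * xiP * (AP * dom).sum(axis=1)        # proportional layer
    domega = (P - gam * omega
              + (K * np.sin(dth)).sum(axis=1) + uP + uI) / inertia
    duI = GI * xiI * (AI * dom).sum(axis=1)       # integral-layer dynamics
    return omega, domega, duI

dt = 0.01
theta, omega, uI = np.zeros(N), np.zeros(N), np.zeros(N)
for _ in range(100_000):
    a1, b1, c1 = rhs(theta, omega, uI)
    a2, b2, c2 = rhs(theta + 0.5*dt*a1, omega + 0.5*dt*b1, uI + 0.5*dt*c1)
    a3, b3, c3 = rhs(theta + 0.5*dt*a2, omega + 0.5*dt*b2, uI + 0.5*dt*c2)
    a4, b4, c4 = rhs(theta + dt*a3, omega + dt*b3, uI + dt*c3)
    theta += dt * (a1 + 2*a2 + 2*a3 + a4) / 6
    omega += dt * (b1 + 2*b2 + 2*b3 + b4) / 6
    uI += dt * (c1 + 2*c2 + 2*c3 + c4) / 6
    F = K * np.sin(theta[None, :] - theta[:, None])   # line flows
    K[np.abs(F) > alpha * K0] = 0.0                   # overload shutdown

R = np.abs(np.exp(1j * theta).mean())      # Kuramoto order parameter R(t)
delta_omega = omega.std()                  # frequency spread Delta omega(t)
n_c = int(((K0 > 0) & (K == 0)).sum()) // 2    # lines failed by overload
```

On the real grid, A, A^P, A^I, the powers P_i, and the pinning variables would instead be taken from the data and configurations described in the following sections.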
Here we note that the control signal can be interpreted as power injection for positive u_i or power absorption for negative values of u_i, which, for loads, can be obtained by modulating the effective power associated to the bus and, for generators, can be realized using storage devices (e.g., batteries) that can absorb or inject power to the bus <cit.>.§.§ Case study and measures characterizing the system behavior We focus our analysis on the case study of the Italian high-voltage (380kV) power grid <cit.>. Available data on the structure of this power grid are used to set the topological characteristics of the physical layer. The network is assumed to be homogeous and undirected, that is, K_ij=K_ji=Ka_ij, where a_ij are the coefficients of the adjacency matrix modeling the lines of the power grid, i.e., a_ij=1 if the power units i and j are connected by a power line, anda_ij=0 otherwise. The network contains N=127 nodes (34 generators and 93 loads) and L=171 links. The parameters of the model have been selected based on previous works <cit.> and in order to meet the condition that the network is synchronized in the absence of faults. In more details, we set as I_i=I=10 ∀ i, γ_i=γ=1 ∀ i, K=11, P_i=-1 for the load nodes, and P_i=2.735 for the generation nodes as in <cit.> such that the network is balanced, i.e., ∑_i=1^N P_i=0. This condition is a pre-requisite for synchronization. Finally, the parameter α has been set as α=0.8 <cit.>.For the other two layers used in Eqs. (<ref>) we tested the effect of several topologies, as described in the following sections. In order to monitor the behavior of the multi-layer model of the power grid, we use several indicators to measure synchronization and the number of failed lines. First of all, we consider the Kuramoto order parameter R(t) defined as follows: R(t)e^iΦ (t)=1/N∑_j=1^N e^iθ_j(t) This parameter takes values in [0,1], with values close to one indicating that the phases are synchronized, and values close to zero denoting the absence of phase synchronization. Likewise, it is important to monitor not only the level of phase synchronization, but also that of frequency synchronization. To this aim, we consider the standard deviation of the instantaneous frequencies: Δω (t)= √(1/N∑_j=1^N(ω_j(t)-ω̅(t))^2) where ω̅(t)=1/N∑_i=1^N ω_j(t) is the instantaneous average frequency of the power units. The parameter Δω (t) provides information about the deviation from complete frequency synchronization, with values close to zero indicating that all nodes of the grid oscillate at the same frequency. Alongside with this parameter, we also monitor the value of the frequencies ω_j, with j=1,…,N, as under normal operation of the power grid these quantities must be zero or very close to zero.To provide a more comprehensive understanding of failures under different control gains we also report the power loss P of all loads, whichrepresents the difference between the initial total power of all load nodes and the total effective power of all load nodes at time t>0: P=1/𝒩_l∑_i^𝒩_l(P_i+u_i)- 1/𝒩_l∑_i^𝒩_lP_i ,where 𝒩_l is the total number of loads.Finally, to monitor the lines that are not operative in the system, or, alternatively, those that are active, it is useful to consider two different measures. We indicate with n_c the number of lines failed during the window of time where Eqs. (<ref>) are simulated. In more detail, for each line of the power grid, we calculate the flow in the line and check condition (<ref>) at each time t. 
If, at some time, the flow exceeds the maximum capacity, then the line is shut down for the rest of the simulation and, hence, is considered in the count of failed lines. However, in our analysis we will start from a fault located in one of the power grid units and assume that this fault removes the nodes and all its lines from the power system. The lines removed in this way are not shut down because of an overflow and are, therefore, not counted in n_c. To take into account also the failures of these lines, when appropriate, we will consider the number of active links.§.§ Topology of the layers For the sake of clarity, we focus our analysis on the Italian high-voltage power grid, even if our approach can be extended to other physical layer topologies. Instead, we emphasize different control layer topologies, as they play a crucial role in our investigation. The distributed nature of the controllers, which gather information from neighboring units to determine the appropriate control action at each node, make this aspect particularly significant.For what concerns the proportional layer, we have considered that all nodes are controlled, namely ξ^P_i=1, ∀ i. For the layer topology, which rules how the proportional controllers are connected each other (or, equivalently, which information are available at each node of the layer), we have analysed a connectivity identical to that of the physical layer and some random networks, that for simplicity we have generated using the Erdös-Rényi (ER) model <cit.>. As discussed in Sec. <ref>, for the control layer we first carry out a preliminar analysis to check whether the layer, in the absence of the integral control, can prevent cascading failures triggered by failures in any of the power system units.In the case of integral control, only the generator nodes are controlled, that is, ξ^I_i=1 for i∈𝒩_g. The topology of this layer is obtained starting from an existing network in two different ways. Let us consider the adjacency matrix A̅ of a given graph, whose nodes are the units of the power grid. From this network, we extract the subgraph where the nodes are all the generators of the power grid and their first neighbors, and the edges are the connections among these nodes. We name this network as the local network extracted from A̅ and indicate its adjacency matrix as A^loc(A̅). The elements of this matrix are given by: a^loc_ij(A̅)=a̅_ij , if i ∨ j ∈𝒩_g 0, otherwise . In the second case, in addition to the links obtained in this way, we also consider each possible connection between any pair of generators. The network obtained in this way, that we call extended topology, has adjacency matrix indicated as A^ext(A̅), whose elements are given by: a^ext_ij(A̅)= 1, i∧ j∈𝒩_ga^loc_ij(A̅), otherwise . In the following, for the integral layer we will analyse local topologies obtained from either the physical or the proportional layer connectivity, namely A^I=A^loc(A) or A^I=A^loc(A^P) as well as the extended topology obtained from the physical layer connectivity, namely A^I=A^ext(A). For the sake of illustration, in Fig. <ref>(b) we show the local topology obtained from the Italian high-voltage power grid, with adjacency matrix A^I=A^loc(A), while in Fig. <ref>(c) we show the extended topology, with adjacency matrix A^I=A^ext(A). § RESULTS §.§ Control of cascading failuresIn this section we study the control of cascading failures that can be triggered by a fault in one node of the grid when only the proportional layer is active, i.e., G_I=0 in Eqs. (<ref>). 
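As an implementation note, the local and extended integral-layer topologies A^loc(A̅) and A^ext(A̅) defined in Eqs. (<ref>)-(<ref>) above can be built in a few lines of code. A minimal Python sketch, where `Abar` (the base adjacency matrix) and `gens` (the list of generator indices) are illustrative names:

```python
import numpy as np

def local_topology(Abar, gens):
    """A^loc: keep only the edges of Abar touching at least one generator."""
    g = np.zeros(len(Abar), dtype=bool)
    g[gens] = True
    mask = g[:, None] | g[None, :]        # i or j belongs to the generator set
    return np.where(mask, Abar, 0)

def extended_topology(Abar, gens):
    """A^ext: A^loc plus every possible generator-generator link."""
    Aext = local_topology(Abar, gens)
    Aext[np.ix_(gens, gens)] = 1
    np.fill_diagonal(Aext, 0)             # no self-loops
    return Aext
```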
Although the structure of the control layer is similar to the one considered in <cit.>, the problem here investigated is more general and, often, more complicated, due to the fact that the initial fault is located in a node of the grid, rather than in a line. Indeed, we will show that its solution requires the use of a topology for the proportional control layer different than that of the physical layer.Preliminary to the analysis of the effect of the proportional control layer, we have studied the model in Eqs. (<ref>) in the absence of any control, i.e., G_P=G_I=0. The topology of the physical layer is given by the Italian high-voltage power grid described in Sec. <ref>. The same parameters are used for all generators and loads, i.e., γ_i = γ, ∀ i (with γ=0.1), I_i = I (with I = 1). In addition, we set the same susceptance for all edges, i.e., K_i,j = K (K=11). For each node of the power grid, we have considered a fault located in the node, by removing the node and all its links, and simulated Eqs. (<ref>), using condition(<ref>) to check if at some time there are lines that fail. We have then counted the number of failed nodes, n_c, and named as critical those nodes for which n_c ≠ 0. We have found that, under these conditions, there are 24 critical nodes in the power grid.In <cit.> it is shown that failures triggered by an initial fault in one of the grid lines can be controlled by a proportional layer having the same connectivity of the physical one. Motivated by these results, as a first case study here we have considered the same assumption for the topology of the control layer. We have thus carried out numerical simulations of Eqs. (<ref>) for ξ^P=1 ∀ i=1,…,N and different values of G_P. For each of the twentyfour critical nodes of the power grid, determined with the previous analysis, we have calculated n_c as a function of the gain G_P of the control layer, when the initial fault is located in the considered node. The results are reported in Fig. <ref>a, which shows that there are thirteen nodes for which the cascading failure cannot be controlled, even when using a large value of G_P. Hence, although for many nodes this approach is effective, it is not for all nodes of the grid. To investigate whether this is due to the control law itself or to the topology of the control layer, we have then repeated the analysis for different configurations of the control layer. In particular, we have considered a control layer where the links among nodes are produced by the ER model, with each pair of nodes being connected with probability p. We have found that, even at low values of p, this approach is effective to control cascading failures triggered at any node of the power system. Fig. <ref>b shows the result for a network obtained with p=0.04, for which cascading failures are prevented for all critical nodes by setting a large enough value of G_P. Similar results are found for other ER networks with the same value of p. The value of p itself does not appear to be a critical parameter, as long as it ensures a sufficient connectivity among the nodes. §.§ Control of synchronization and cascading failures As shown in the previous section, in the presence of the proportional layer only, a larger number of cascading failures can be controlled either by increasing G_P or choosing a proper control topology, different from the physical one. 
This analysis does not consider the level of synchronization, as for instance measured by the parameter Δω.In order to keep a high level of network synchronization while controlling the cascading failures, we resort to an additional control strategy, corresponding to an additional control layer. However, the response of a complex system to a node perturbation is a non-trivial phenomenon due to the interplay of many factors that concur to determine it, such as the typology and duration of the perturbation, the weight of the edges (in this case representing the susceptance of the power lines), the connectivity of the perturbed node, the topology of the physical layer, the topology of the control layers, and many others. This has two consequences. On the one hand, the inclusion of another layer, and in particular a dynamical one as the integral layer, makes possible to observe cascading failures also initiated by those nodes not classified as critical in the absence of the integral layer. On the other hand, the interplay among all these factors makes extremely complicated to exhaustively investigate the role played by each of them. For this reason, in this manuscript we limit ourselves to exemplifying some typical scenarios that are observed, considering different topologies for the control layers. Regardingthe topology of the proportional control layer, we consider two cases: i) an ER network generated with connection probability p=0.04; ii) the Italian high-voltage power grid. Regarding the integral control layer, we consider three different topologies that are built either from the network backbone of the Italian power grid or from the ER network generated with connection probability p=0.04, as detailed in Sec. <ref>.We start our analysis by considering the dynamics emergent in the system when the initial fault (causing node removal along with its edges) is located in node 24. Node 24 is a generator connected to three other units, and its removal causes the failure of only these three lines. We have investigated, for a wide range of the control parameter values, the effectiveness of the control schemes, varying both the topology of the proportional control layer and the integral control layer. In particular in Fig. <ref>, we have varied the integral control layer topology, while keeping fixed the proportional layer topology, chosen to be the ER network with p=0.04 introduced in Sec. <ref>. The figure illustrates the system behavior when the node is first removed from the grid and then reconnected to it at the end of the perturbation, as a transient dynamics eventually leading to cascading failures can be elicited either by the removal or reactivation of the node.Using the extended control topology A^I=A^ext(A) for the integral control layer, our approach is able to fully prevent cascading failures occurring when the node is removed from the grid (panel a). When the node is reconnected to the network (panel b), the control works almost everywhere in the parameter regions considered. Moreover, this control guarantees a high level of synchronization, as the standard deviation Δω remains almost zero both during the perturbation (panel c) and when the perturbation stops and the node is reconnected to the grid (panel d). When the local topology, i.e., A^I=A^loc(A) is used in the integral control layer, when the perturbation is effective (panel e) we observe cascading failures for small values of G_P and large values of G_I. 
They can be successfully controlled using large values of G_P and small values of G_I. This is also evident in the behavior observed after reconnecting the perturbed node to the grid (panel f), that shows how the red region corresponding to full control of cascading failures shrinks to the upper left corner of figure. The level of synchronization is poorer than in the previous case, both during the perturbation (panel g) and when the perturbation is over (panel h). In particular, lower levels of synchronization are observed for smaller values of G_P. Finally, if the topology for the integral control layer is that induced by the proportional layer, i.e., A^I=A^loc(A^P), we observe similar effects with respect to mitigation of cascading failures (panel l is comparable to panel f even if the red region shrinks in panel l), while synchronization control is more effective, even at small values of G_P (see panels m,n).To better illustrate the role played by each control layer separately, we now focus on the time evolution of the system at selected values of the control gains G_P and G_I. In Fig. <ref> we show the time evolution of one of the best cases, according to Fig. <ref> (panels l and n), and compare it to the cases when only one mode of the controllers is active. The perturbation is applied in the time interval t∈ [200, 1200], during which the perturbed node is disconnected in the physical layer. When both controls are present (light blue curves), cascading failures are prevented and, once the node is reconnected, all links are active (panel e). At the same time, Δω≈ 0 during all the simulation time interval (panel d), while, when the perturbation ends, the Kuramoto order parameter reaches values of R (R≈ 0.6) slightly higher than those before the perturbation (panel a). If only the integral control is applied (black curves), the level of synchronization in the network is higher, on average, during the perturbation (panel a), but we observe a large number of failed lines during the perturbation (n_c=15) that prevents the system from recovering when the perturbation ends (panel e). Finally, when the integral control is turned off, while keeping the proportional control on (orange curves), we obtain a low level of synchronization (R≈ 0.24 in panel a) during all the time window and we observe a larger power loss (panel b), when the perturbation is active, with respect to the other cases. Finally, it is evident the role played by the proportional control in preventing the cascading failures, since all lines are active when the perturbation ends (panel e).We move to discuss the scenario where the topology of the proportional layer is the same of the physical layer. We have found that, under such conditions, controlling the cascading failures becomes more challenging, regardless of the selected integral layer topology. In Fig. <ref> we analyse this scenario for different values of G_P and G_I, when the integral layer has a global (panels a-d) or local (panels e-h) topology, i.e., A^I=A^ext(A) or A^I=A^loc(A) respectively. In comparison with the corresponding panels of Fig. <ref>, it turns out that using a local topology A^I=A^loc(A) yields a lower level of synchronization (panels g, h) and a smaller region where cascading failures are fully controlled (panel f).Also for this scenario, we show the time evolution of some quantities of interest in Fig. <ref> for selected values of G_P and G_I. In particular, we consider an optimal choice of these parameters (panels b and d in Fig. 
<ref>), and compare the results with the cases when only one mode of the controllers is active. In all cases, we observe the emergence of oscillating dynamics in the order parameter R (see Fig. <ref>). The macroscopic oscillations in R suggest the presence of clusters of whirling oscillators <cit.>, that relegate the system in a partially synchronized state. When only the proportional control is active (orange curves), the level of synchronization is low (R≈ 0.25), the power loss is higher (panel b), but cascading failures are prevented (panel e). On the other hand, when only the integral control is active (black curves), during the onset of the perturbation, a cascading failure is observed and, at the same time, the standard deviation Δω deviates from zero (panel d), thus resulting in chaotic dynamics of R (panel a). When both control systems are active (light blue curves), the best performance in terms of synchronization level (panel a) and number of active lines (panel e) is obtained.Another important case study arises when the cyber-layers fail together with the physical nodes, i.e. perturbations are extended to the control layer nodes associated to the physical ones. To investigate this case study, we have carried out a series of simulations, performed under the same conditions as those in Figs. <ref>, <ref>. In this case, when the physical node is disconnected, the cyber-nodes controlling the perturbed physical node and the related connections are disabled. Numerical simulations reveal an equivalent scenario (results not shown). In more detail, disconnecting the controlling nodes together with the physical node enhances the possibility to control the cascading failures when we implement a local topology in the integral layer (A^loc(A) or A^loc(A^P)), for both considered topologies in the proportional layer.However, disconnecting the control nodes induces a higher level of desynchronization during the perturbation.So far, we have considered the situation where, in the proportional control layer, both generators and loads are controlled, while, in the integral control layer, only generators are controlled. We now study the application of pinning control in both control layers. The results are illustrated inFigs. <ref>, <ref>, and <ref>.In Figs. <ref> and <ref> the proportional layer has topology given by the ER network with p=0.04, while in Fig. <ref> it has the same topology of the physical layer. Altogether these results show that, when pinning control is applied in the proportional control layer, it is more difficult to prevent cascading failures, regardless of the topology used in the integral control layer. Due to the challenges in preventing cascading failures, maintaining synchronization becomes more difficult as well, as it is particularly evident when considering the local control topology A^I=A^loc(A) in the integral layer (see panels g,h in Figs. <ref> and <ref>). The time evolution of the system dynamics in Fig. <ref> refers to the case where the proportional layer has topology given by the ER network and the integral layer has adjacency matrix A^I=A^loc(A^P), and reveals the emergence of oscillating dynamics in the order parameter R (see Fig. <ref> a). If both controllers are active (light blue curve) we observe no synchronization during the perturbation and partial synchronization (R≈ 0.4) when the perturbation ends, analogously to the case when only the integral control is present (black curve). 
Conversely, if only the proportional control scheme is active, R (orange curve) shows a higher level of synchronization during the perturbation but a lower level of synchronization when the perturbation ends, with respect to the previous cases. While the presence of the integral control increases the number of cascading failures (panel e, light blue and black curves), the dynamics in the presence of the proportional control alone shows a larger power loss (panel b) but full prevention of cascading failures once the node is reconnected (panel e).Let us move to illustrate another case study, where now the perturbation is applied to node 28. Similarly to node 24, also this node is a generator, but it is connected with two (rather than three) other power units.As for node 24, also the removal of node 28 does not give rise to a cascading failure, even in the absence of control, according to the analysis reported in Sec. <ref>. In this case, the region where it is possible to achieve both cascading failure and synchronization control is wider with respect to the previous cases (see Fig. <ref>). In particular, in the entire parameter region no cascading failures are observed at the end of the perturbation, regardless of the topology used in the integral control layer. We notice that, in Fig. <ref>, we have used pinning control in both proportional and integral layer. Full control is found in a region that extends in the whole parameter space also when all nodes are controlled in the proportional layer. While the standard deviation Δω remains relatively small in all the parameter region both during the perturbation (panels c, m) and under unperturbed conditions (panels d, n), for the extended topology and for the local topology A^loc(A^P), the control of synchronization slightly deteriorates for decreasing values of G_P, when we apply the local topologyA^loc(A) in the integral layer. Similar results have been also found when the proportional control layer is chosen to be equal to the physical layer.Lastly, we analyse the effect of the application of the perturbation to node 10, which is a load bus of the power system, connected to two other nodes (both of them are loads). The results for this node are illustrated in Fig. <ref>. Node 10 is a critical node, whose disconnection can generate a cascading failure. Since the integral control scheme is designed for controlling generators only, it is more difficult to secure a good synchronization level keeping the standard deviationΔω low, as the perturbation is applied, in this case, on a load. Therefore, when the integral layer has a local topology, i.e., A^I=A^loc(A), full control of the cascading failures is impossible (panels e, f). This also affects the level of synchronization in terms of standard deviation Δω that increases when G_P decreases. However, when the integral layer has a local topology induced by the proportional layer, i.e., A^I=A^ext(A), we find a quite large region, occurring for high values of G_P (red region in panel b), where the level of synchronization is large (panel d). The region, where cascading failures are completely avoided, increases for increasing G_P and requires a large enough value of G_P. 
The best performance is obtained for A^I=A^ext(A), which ensures a wider region where it is possible to simultaneously maintain the system synchronized and prevent cascading failures (panels b and d, respectively)§ CONCLUSIONS In this paper, we have investigated the joint application of a proportional and an integral control layer to a power system subject to large perturbations that can cause the failure of a node of the grid. When this occurs, the system may experience both loss of synchrony and the onset of a cascading failure. To model the power system, we have considered swing equations coupled with an overflow condition that implements a simple shutdown mechanism for the lines. As demonstrated in <cit.>, incorporating other protection mechanisms into dynamical models of power grids can crucially result in cascading failures having different sizes and involving different lines. Considering this, our model represents a parsimonious choice in relation to the primary research question of the paper and the associated computational efforts to address it. We expect that our findings remain applicable under diverse assumptions about the power grid dynamics and the protection mechanisms, albeit with potentially varying quantitative outcomes.Our main result is that the multi-layer approach is effective in maintaining a high level of synchronization, while simultaneously preventing the occurrence of cascading failures. To achieve this, the coupling coefficients in the two control layers, namely the control gains, need to be tuned. Although, from a control perspective, having analytic formulas or even a chart to guide the tuning of these parameters would be desirable, we have found that the non-trivial interplay between topologies of the two layers, dynamical parameters of the model, and the properties of the node perturbed makes difficult to find a general solution to this problem. For this reason, one can resort to the numerical analysis of the system behavior vs. the control gains, which allow for the identification of the region where control is effective, once fixed the system features above mentioned. Another aspect to be critically examined is the possibility of reducing the number of nodes where control is applied. While we have found that the integral control may be applied only to generators, effectively reducing the set of nodes to control, our findings seem to indicate that the same does not hold, in general, for the proportional layer, which requires a larger sets of nodes to control.§ ACKNOWLEDGMENTS This work was supported by the Italian Ministry for Research and Education (MIUR) through Research Program PRIN 2017 under Grant 2017CWMF93, project "Advanced Network Control of Future Smart Grids - VECTORS".§ DATA AVAILABILITY All numerical results have been obtained by C or MATLAB code developed by the Authors. The network evolution equationshave been integrated by employing a 4th order Runge-Kutta scheme with fixed time step dt = 0.01. The data associated with this study are available from the corresponding author upon reasonable request. § DECLARATIONS Conflict of interest. The authors have no conflicts of interest.IEEEtran | http://arxiv.org/abs/2312.16508v1 | {
"authors": [
"Simona Olmi",
"Lucia Valentina Gambuzza",
"Mattia Frasca"
],
"categories": [
"eess.SY",
"cs.SY",
"math.DS",
"nlin.AO",
"physics.class-ph"
],
"primary_category": "eess.SY",
"published": "20231227103431",
"title": "Multilayer control of synchronization and cascading failures in power grids"
} |
Key Laboratory of Low-Dimensional Quantum Structures and Quantum Control of Ministry of Education, Key Laboratory for Matter Microstructure and Function of Hunan Province, Department of Physics and Synergetic Innovation Center for Quantum Effects and Applications, Hunan Normal University, Changsha 410081, China

Key Laboratory of Low-Dimensional Quantum Structures and Quantum Control of Ministry of Education, Key Laboratory for Matter Microstructure and Function of Hunan Province, Department of Physics and Synergetic Innovation Center for Quantum Effects and Applications, Hunan Normal University, Changsha 410081, China

Key Laboratory of Low-Dimensional Quantum Structures and Quantum Control of Ministry of Education, Key Laboratory for Matter Microstructure and Function of Hunan Province, Department of Physics and Synergetic Innovation Center for Quantum Effects and Applications, Hunan Normal University, Changsha 410081, China

Corresponding author: [email protected]

Key Laboratory of Low-Dimensional Quantum Structures and Quantum Control of Ministry of Education, Key Laboratory for Matter Microstructure and Function of Hunan Province, Department of Physics and Synergetic Innovation Center for Quantum Effects and Applications, Hunan Normal University, Changsha 410081, China

Institute of Interdisciplinary Studies, Hunan Normal University, Changsha, 410081, China

Simultaneous ground-state cooling of two levitated nanoparticles is a crucial prerequisite for the investigation of macroscopic quantum effects, such as quantum entanglement and quantum correlations involving the translational motion of particles. Here we consider a coupled cavity-levitated-particle system and present a detailed derivation of its Hamiltonian. We find that the y-direction motions of the two particles are decoupled from the cavity field and from both the x- and z-direction motions, and that the z-direction motions can be further decoupled from the cavity field and the x-direction motions by choosing proper locations of the particles. We study the simultaneous cooling of these mechanical modes in both the three-mode and five-mode cavity-levitated optomechanical models. It is found that the dark-mode effect exists when the two tweezers have the same powers, which suppresses the simultaneous ground-state cooling. Nevertheless, the simultaneous ground-state cooling of these modes can be realized by breaking the dark-mode effect under proper parameters. Our system provides a versatile platform to study quantum effects and applications in cavity-levitated optomechanical systems.

Simultaneous ground-state cooling of two levitated nanoparticles by coherent scattering
Jie-Qiao Liao
January 14, 2024
=======================================================================================

§ INTRODUCTION

With the development of micro- and nano-fabrication techniques, great advances have recently been achieved in cavity optomechanics, especially concerning the fundamentals of quantum physics and modern quantum technology <cit.>. Optically levitated particles, as a novel optomechanical platform, have attracted much attention from the communities of quantum optics and quantum information <cit.>. In the 1970s, it was discovered that particles can be levitated by focused beams of light <cit.>, and this discovery has played a crucial role in advancing the field of atom trapping and cooling <cit.>.
In recent years, much attention has been paid to the quantum manipulation of the translation and rotation of the center of mass of particles, and great advances have been made on this platform, such as the realization of a controllable torque induced by the spins of atoms embedded in a microscale object <cit.>, the measurement of the Brownian motion of micrometer-sized beads <cit.>, and the cooling of the motion of particles into the quantum ground state <cit.>. The levitated particles can also be utilized for quantum precision measurements, including acceleration measurement <cit.>, mass measurement <cit.>, and gyroscopes <cit.>.

The levitated nanoparticles were conceived as a candidate to explore macroscopic quantum phenomena <cit.>. This is because the nanoparticles are considered a kind of macroscopic quantum system, and they can be levitated in a high vacuum <cit.>, which reduces the thermal contact between the mechanical motion and the environment. As a result, these systems have exceptionally high mechanical quality factors and are considered an excellent candidate for studying low-dissipation optomechanics. The first step toward exploring quantum effects in macroscopic mechanical systems is the cooling of the mechanical systems to their ground states <cit.>. It has been reported that the levitated particles can be significantly cooled via feedback cooling <cit.> and sideband cooling <cit.>. The standard sideband-cooling method in optomechanical systems typically requires an externally red-detuned pumping field to remove the energy from the particles <cit.>. However, high driving powers will lead to the trapping of cavity fields in optically levitated systems, thus reducing the cooling rate <cit.>. In addition, the laser-phase noise can hinder ground-state cooling at the relevant frequencies of the trapped nanoparticles <cit.>. To overcome these challenges, the coherent scattering technique has been introduced into levitated-particle systems <cit.>, drawing from atomic physics experiments <cit.>. This method harnesses higher optical trapping powers and larger particles to achieve stronger coupling strengths <cit.>, thereby paving the way to ultra-strong coupling <cit.> and leading to novel quantum optomechanical effects. These works were mostly based on the cooling of a single particle. In parallel, an increasing amount of research is focusing on the field of multi-levitated particles <cit.>. Compared with other mechanical oscillator arrays, the array of optically levitated particles has better controllability <cit.>. Motivated by these advances, we are committed to studying the simultaneous cooling of multiple levitated particles and to exploring novel quantum effects.

In this paper, we study the simultaneous ground-state cooling of two levitated particles coupled to a cavity. Similar to the single-particle case, photons enter the cavity via the scattering process, which provides the mechanism for cooling the center-of-mass motions of the particles. For multiple particles levitated simultaneously, the mechanical effect of the scattered light between the particles has been ignored in the past. However, recent experimental observations indicate that this scattering effect cannot be neglected in some cases <cit.>. The redistributed light field greatly affects the equilibrium positions of the particles. Therefore, we consider the optical binding effect between the two nanoparticles.
In particular, we find that the y-direction motions of the two particles decouple from other degrees of freedom in this system. We also find that, when the two nanoparticles are located at the cavity nodes, the cavity mode only couples to the x modes of the particles. Benefiting from the extremal isolation of the system, the simultaneous ground-state cooling of the center-of-mass motions of the two nanoparticles along x-axis can be realized. When the two nanoparticles are not located at the specific positions, both the x mode and the z mode are coupled to the cavity mode, then the two modes of the two nanoparticles can also be cooled into their ground states. In addition, we find that there exists the dark-mode effect when the powers of the two tweezers are identical, and the dark-mode effect will suppress the cooling of the system. By choosing proper parameters to avoid the dark-mode existing condition, then the dark-mode effect can be broken, and the simultaneous ground-state cooling can be achieved.The rest of the paper is organized as follows. In Sec. <ref>, we introduce the system consisting of two levitated nanoparticles trapped in a Fabry-Pérot cavity, and analytically derive the Hamiltonians. In Sec. <ref>, we investigate the simultaneous ground-state cooling of the x-direction motions of the two particles, which are located at the nodes of the cavity. In Sec. <ref>, we study the simultaneous ground-state cooling of both the x- and z-direction motions of the two nanoparticles in a general case. Finally, we briefly conclude this work in Sec. <ref>. § PHYSICAL MODEL AND HAMILTONIANSWe consider a coupled cavity-levitated-nanoparticle system, in which two dielectric nanoparticles trapped by two optical tweezers are coupled to the field modes in a Fabry-Pérot cavity. As shown in Fig. <ref>, the Fabry-Pérot cavity, with the cavity axis aligning with the x direction, contains two nanoparticles. The two nanoparticles, placed at R̂_1 and R̂_2, have the radius a_0=90 nm, density ρ≈2200 kg/m^3, and dielectric constant ϵ _r=2.07. We assume that the two optical tweezers have electric fields propagating along the z axis, with the corresponding polarizations e_ tw^(1) and e_tw^(2) along the y direction. The foci of the two optical tweezers are located at the positions ( x_10,0,0) and ( x_20,0,0), separated by a distance D. The frequency of the two optical tweezers is ω _tw=2π c/λ _tw, where c is the speed of light in a vacuum and λ _tw is the wavelength of the tweezers. The total Hamiltonian of the system can be written asĤ_tot=Ĥ_np+Ĥ_cav+Ĥ_int.Here, the Hamiltonian Ĥ_np describes the kinetic energy of the center-of-mass motion for the two nanoparticles, and it takes the formĤ_np=∑_j=1,2𝐏̂_j^2/2m,where 𝐏̂_j=( P̂_jx,P̂_jy,P̂_jz) is the 3-dimensional momentum operator for the jth ( j=1,2) nanoparticle with mass m. The second term in Eq. (<ref>) readsĤ_cav = 1/2∫ [ ε _0E_ cav^2( r) +B _cav^2( r)/μ _0 ] dr= ∑_jħω _j(â_j^†â_j+1/2),where E_cav and B_cav are, respectively, the electric and magnetical fields in the cavity, and ε _0 (μ _0) is the free space premittivity (permeability). In addition, ω _j is the resonance frequency of the jth cavity mode (described by the creation and annihilation operators â_j^† and â_j) in the optical cavity. Since the frequency of the center-of-mass motion for the nanoparticles is much smaller than the free spectrum range of the cavity, we could consider that the two nanoparticles are coupled to a single cavity mode. 
Then, the Hamiltonian of the optical cavity can be approximately denoted as Ĥ_cav≈ħω_cavâ^†â, where ω_cav is the resonance frequency of the cavity mode under consideration with the wave number k, described by the creation and annihilation operators â^† and â. Note that the zero-point fluctuation term ħω_cav/2 has been omitted in the Hamiltonian. The last term Ĥ_int in Eq. (<ref>) describes the interactions between the nanoparticles and the electromagnetic fields. In the Rayleigh regime, the radius of the nanoparticle is much smaller than the optical wavelength (a_0≪λ), and the interaction Hamiltonian between the nanoparticles and the electric fields can be written as <cit.> Ĥ_int≈ -1/2∑_j=1,2αE^2(R̂_j), where α=ε_0ε_cV is the particle polarizability, with ε_c=3(ϵ_r-1)/(ϵ_r+2) and V being the volume of the nanoparticle. In Eq. (<ref>), E(R̂_j) represents the electric field at the position of the jth particle, where R̂_j=r_j0+r̂_j denotes the center-of-mass position operator of the jth particle, with r_j0=(x_j0,0,0) being the focus of the jth optical tweezer along the cavity axis and r̂_j=(x̂_j,ŷ_j,ẑ_j) the position operator of the jth particle. §.§ The initial electric field In general, the total electric field at the position r=(x,y,z) can be approximately written as a sum of the cavity field Ê_cav and the fields ℰ_tw^(j) of the two optical tweezers, Ê_I(r) =Ê_cav(r) +∑_j=1,2ℰ_tw^(j)(r). The first term in Eq. (<ref>) describes the single-mode electric field of the cavity, which is given by Ê_cav(r) =ϵ_cav cos(kx-ϕ)(â^†+â) e_cav, where ϵ_cav=√(ħω_cav/(2ε_0V_cav)) is the amplitude at the center of the cavity, with V_cav being the cavity volume. For simplicity, we consider the case ϕ=0 and a y-polarized cavity mode in this paper. We assume that the two tweezers are sufficiently far apart that the influence of the electric field ℰ_tw1 of tweezer 1 on the distant nanoparticle 2 can be neglected, and vice versa. Then the total electric fields at the positions of the two nanoparticles 1 and 2 can be approximated as Ê_I^(1)(R̂_1) = ℰ_tw1(R̂_1) +Ê_cav(R̂_1), Ê_I^(2)(R̂_2) = ℰ_tw2(R̂_2) +Ê_cav(R̂_2). Typically, the fields of the optical tweezers are considered to be in coherent states, and thus they can be well described by classical fields. Then the electric field of the jth optical tweezer can be expressed as ℰ_twj(r,t) =1/2E_j0(r) e^-i[k_twz+ϕ_t(r)] e^-iω_twte_tw^(j)+H.c., with the laser frequency ω_tw=ck_tw and wave number k_tw=2π/λ_tw of the tweezer. We assume that the propagating directions of the two beams are parallel, and that the polarizations e_tw^(1) and e_tw^(2) of the electric fields are along the y direction. Then the real amplitude E_j0(r) in Eq. (<ref>) can be written as E_j0(r) =ϵ_tw^(j)1/√(1+(z/z_R)^2)exp(-((x-x_j0)^2+y^2)/W^2(z)), where ϵ_tw^(j)=√(4P_tw^(j)/(πε_0cW_t^2)) is the amplitude of the electric field, with P_tw^(j) being the power of the jth laser and W_t the tweezer waist at the focus. In addition, z_R=π W_t^2/λ_tw is the Rayleigh range and W(z) =W_t√(1+(z/z_R)^2). Note that the tweezer Gouy phase ϕ_t(r) ≈arctan(z/z_R) -k_twz[(x-x_j0)^2+y^2]/(2z^2+2z_R^2) in Eq. (<ref>) can be neglected, since the Rayleigh range z_R is typically several orders of magnitude larger than the other length scales. §.§ The radiation fields In the Rayleigh regime, the nanoparticles embedded in the electric fields will possess an electric dipole moment, which will create electromagnetic radiation by charge oscillation.
Physically, the frequency of the radiation field is equal to that of the incident field. The electric field at position r_1 generated by the oscillating dipole at the position r_2 is given by E_rad(r_1) =𝐆(r_1-r_2) P(r_2), where the field propagator (also known as the dyadic Green's function) between the two dipoles is given by <cit.> 𝐆(r_1-r_2) = e^ik_0r_0/(4πε_0r_0)[ (1-ik_0r_0)(3r_0r_0^T-r_0^2𝟙)/r_0^4 +k_0^2(r_0^2𝟙-r_0r_0^T)/r_0^2 ], with 𝟙 the 3×3 identity matrix. Here, “T” denotes the matrix transpose, k_0 is the wave number of the incident field, r_0=|r_0|=|r_1-r_2| is the distance between the two dipoles, and 𝐆(r_1-r_2) =𝐆(r_2-r_1) =𝐆(r_0). In the following analyses, the Green function can be divided into two parts: α𝐆(r_0) =e^ik_0r_0[η_n(D/r_0)^3(1-ik_0r_0)𝐌_n(r_0) +η_f(D/r_0)𝐌_f(r_0)] <cit.>, where η_n=1/4πε_0D^3 is the near-field constant and η_f=k_0^2/4πε_0D is the far-field constant. The near-field constant η_n is much smaller than the far-field constant η_f in the far-field regime k_0r_0≫1. In addition, we introduce the near-field tensor 𝐌_n(r) =1/r^2([ 3x^2-r^2 3xy 3xz; 3xy 3y^2-r^2 3yz; 3xz 3yz 3z^2-r^2 ]), and the far-field tensor 𝐌_f(r) =1/r^2([ r^2-x^2 -xy -xz; -xy r^2-y^2 -yz; -xz -yz r^2-z^2 ]). The total electric field for the jth (j=1,2) particle is given by the sum of the incident field E_I^(j) and the field emitted by the other dipole, E_tot^(j)(R̂_j) =E_I^(j)(R̂_j) +𝐆(R̂_0) P^(j̅)(R̂_j̅). Here P^(j̅)(R̂_j̅) =αE_tot^(j̅)(R̂_j̅) is the dipole moment generated by the j̅th dielectric nanoparticle, where the index j̅ denotes the other particle with respect to the jth particle (namely 1̅=2 and 2̅=1). Since the cavity field and the tweezer field have different wave numbers, the Green function will take two distinct forms, 𝐆_cav and 𝐆_tw, corresponding to the wave numbers of the cavity field and the tweezer field, respectively. Consequently, the electric field E_tot^(j)(R̂_j) can be divided into two parts, Ê_tot^(j)(R̂_j)=E_tot,tw^(j)(R̂_j)+E_tot,cav^(j)(R̂_j), with E_tot,tw^(j)(R̂_j) = ℰ_twj(R̂_j,t) +𝐆_tw(R̂_0)αE_tot,tw^(j̅)(R̂_j̅), Ê_tot,cav^(j)(R̂_j)=Ê_cav(R̂_j) +𝐆_cav(R̂_0)αÊ_tot,cav^(j̅)(R̂_j̅). Since the magnitude of α𝐆 is considerably small compared to the trapping fields, we can neglect the second-order terms in Eq. (<ref>). Then we obtain the total electric field consisting of the incident field [the tweezer field ℰ_tw(R̂,t) and the cavity field Ê_cav(R̂)] and the field emitted by the dipole [ℰ_Gtw(R̂) and Ê_Gcav(R̂)], Ê_tot^(j)(R̂_j) =ℰ_twj(R̂_j,t) +Ê_cav(R̂_j)+ℰ_Gtwj̅(R̂_j) +Ê_Gcav(R̂_j), where ℰ_Gtwj̅(R̂_j) = Re[𝐆_tw(R̂_0)αℰ_twj̅(R̂_j̅,t)] describes the radiation field generated by the oscillating dipole at R̂_j̅, which is induced by the j̅th tweezer field ℰ_twj̅(R̂_j̅). In addition, Ê_Gcav(R̂_j) =Re[𝐆_cav(R̂_0)αÊ_cav(R̂_j̅)] represents the radiation field produced by the dipole moment, which is induced by the cavity field at the position R̂_j̅. §.§ The interaction Hamiltonians In this section, we present the detailed expressions of the interaction Hamiltonians by substituting the electric field operator Ê_tot^(j)(R̂_j,t) given by Eq. (<ref>) into Hamiltonian (<ref>). The interaction Hamiltonian can be divided into two parts, Ĥ_int=∑_j=1,2Ĥ_int^(j), where the forms of the two parts are similar. Below, we take the jth (j=1,2) particle as an example.
The Hamiltonian Ĥ_int^(j) can be written as Ĥ_int^(j) = -1/2α[ℰ_twj(R̂_j,t) +Ê_cav(R̂_j) +ℰ_Gtwj̅(R̂_j) +Ê_Gcav(R̂_j)]^2, which can be further divided into six terms, Ĥ_int^(j) = Ĥ_cs^(j)+Ĥ_rad-rad^(j)+Ĥ_tw-Gtw^(j) +Ĥ_cav-Gcav^(j)+Ĥ_tw-Gcav^(j)+Ĥ_cav-Gtw^(j), each of which represents a specific physical interaction. The first term Ĥ_cs^(j) in Eq. (<ref>) is the standard interaction Hamiltonian of the cavity field with the jth levitated particle, Ĥ_cs^(j)=-1/2α[ℰ_twj(R̂_j,t) +Ê_cav(R̂_j)]^2, which consists of three parts, Ĥ_cs^(j)=Ĥ_tw-tw^(j)+Ĥ_cav-cav^(j)+Ĥ_cav-tw^(j). Since the jth particle is trapped near the focus of the jth tweezer, we can approximate the electric field of the tweezer by its expansion near r_j0=(x_j0,0,0). Then, we can obtain the harmonic potential energy of the tweezer as Ĥ_tw-tw^(j)=-αℰ_twj^2(R̂_j)/2≈∑_Q mω_jQ^2Q̂_j^2/2 with {Q=x,y,z}, where we employ the rotating-wave approximation and neglect both the exp(±2iω_twt) terms and the constant terms. This means that the jth nanoparticle is trapped by the tweezer with trapping frequencies [ω_jx,ω_jy,ω_jz] =√(α/2m)ϵ_tw^(j)[√(2)W_t^-1,√(2)W_t^-1,z_R^-1]. The square term of the cavity field contains both the cavity frequency shift and the radiation-pressure effect, Ĥ_cav-cav^(j)=-αÊ_cav^2(R̂_j)/2≈ħω_shâ^†â+ħ g_ax_jâ^†âx̂_j, where ω_sh=-αϵ_cav^2cos^2(kx_j0)/ħ and g_ax_j=2αϵ_cav^2cos(kx_j0)sin(kx_j0)k/ħ. In addition, the interaction term between the tweezer and cavity fields is given by Ĥ_tw-cav^(j)=-α[ℰ_twj(R̂_j)·Ê_cav(R̂_j)] ≈ħΩ(â^†+â)+ħ g_x_j(â^†+â)x̂_j+iħ g_z_j(â-â^†)ẑ_j, which describes the displacement of the cavity mode and the coupling mediated by coherent scattering, where Ω=-αϵ_cavϵ_tw^(j)cos(kx_j0)/(2ħ), g_x_j=αϵ_cavϵ_tw^(j)sin(kx_j0)k/(2ħ), and g_z_j=-αϵ_cavϵ_tw^(j)cos(kx_j0)k_tw/(2ħ). The second term Ĥ_rad-rad^(j) in Eq. (<ref>) describes the interaction between the radiation fields at position R̂_j generated by the j̅th oscillating dipole, and it is written as Ĥ_rad-rad^(j)=-1/2α[ℰ_Gtwj̅(R̂_j) +Ê_Gcav(R̂_j)]^2. We point out that the terms Ĥ_rad-rad^(j) for j=1,2 are usually small enough to be ignored. The remaining terms in Eq. (<ref>) describe the interactions between the incident field at the position of the jth particle and the field at position R̂_j generated by the j̅th dipole. Concretely, these interaction Hamiltonians are given by Ĥ_tw-Gtw^(j) =-αℰ_twj(R̂_j,t)·ℰ_Gtwj̅(R̂_j), Ĥ_cav-Gcav^(j) =-αÊ_cav(R̂_j)·Ê_Gcav(R̂_j), Ĥ_tw-Gcav^(j) =-αℰ_twj(R̂_j,t)·Ê_Gcav(R̂_j), Ĥ_cav-Gtw^(j) =-αÊ_cav(R̂_j)·ℰ_Gtwj̅(R̂_j). These four cross terms describe the interactions between the incident field and the radiation field, and they are generated by the mechanical effect of scattered light via the optical binding force. Equations (<ref>) and (<ref>) describe the lateral binding and the longitudinal binding, respectively. The optical binding force can be calculated as <cit.> F^bind=1/2∇𝐑𝐞[P^∗(r_j)𝐆(r_j-r_i)αE(r_i)], where the force term describes the interaction between the emitted field and the dipole at r_j. Equation (<ref>) describes the interaction corresponding to the case where the two particles are placed on the x-axis and trapped by the two optical tweezers with the same frequency. Here, both tweezers are polarized along the y-axis.
In this case, the binding force acting on particle 1 has the following components: F_x^(1) = α^2E_10E_20/8πϵ_0R_0^4[3cos(k_twR_0) +3k_twR_0sin(k_twR_0)-2(k_twR_0)^2cos(k_twR_0) -(k_twR_0)^3sin(k_twR_0)], F_y^(1) = 0, F_z^(1) = α^2E_10E_20/8πϵ_0R_0^4[-k_twR_0sin(k_twR_0) +(k_twR_0)^2cos(k_twR_0) +(k_twR_0)^3sin(k_twR_0)], along the x-, y-, and z-axes, respectively, where R_0 is the distance between the two particles. The Green function 𝐆(r_0) [Eq. (<ref>)] contains terms proportional to r_0^-1, r_0^-2, and r_0^-3. In the far-field region k_0r_0≫1, the terms with r_0^-1 dominate, and the Green function then retains only the last term, i.e., α𝐆(r_0)≈ e^ik_0r_0η_f(D/r_0)𝐌_f(r_0). To investigate the specific scope of the far-field region, we compare in Fig. <ref> the optical binding forces corresponding to the exact calculation and the far-field approximation. As shown in Fig. <ref>, we find that the forces oscillate, and that the oscillation amplitudes of the optical binding forces decrease as the scaled displacement increases. This oscillation behavior can be understood because the forces are composed of trigonometric functions, as shown in Eqs. (<ref>). Meanwhile, it can be seen from Fig. <ref> that the exact optical-binding force is very close to the approximate optical-binding force when R_0/λ>1, which is consistent with the fact that the near-field constant is much smaller than the far-field constant when R_0∼λ. Since the interaction terms described by Eqs. (<ref>) involve two particles, below we analyze these cross interactions between the two particles together in the far-field regime. The first cross term Ĥ_tw-Gtw=Ĥ_tw-Gtw^(1)+Ĥ_tw-Gtw^(2) describes the lateral binding of the two identical spherical nanoparticles. It is given by Ĥ_tw-Gtw ≈ -1/2αη_f_tw(D/R̂_0) E_10(R̂_1) E_20(R̂_2)×{cos[k_tw(Ẑ_0+R̂_0)]e_tw^(1)𝐌_f_tw(R̂_0) e_tw^(2) +cos[k_tw(Ẑ_0-R̂_0)]e_tw^(2)𝐌_f_tw(R̂_0) e_tw^(1)}, where η_f_tw and 𝐌_f_tw correspond to the far-field constant and the far-field tensor for the wave number k_tw, respectively. Expanding the corresponding electric fields near the foci of tweezers 1 and 2, Ĥ_tw-Gtw can be rewritten as Ĥ_tw-Gtw ≈ ħ R_α(x̂_1-x̂_2) +∑_q=x,y,z[ν_q(q̂_1^2+q̂_2^2)+1/2k_q(q̂_1-q̂_2)^2], where the first term corresponds to a shift of the equilibrium positions of the center-of-mass motion along the x-axis, with the displacement factor given by R_α = αη_f_twϵ_tw^(1)ϵ_tw^(2)[k_twsin(k_twD) +D^-1cos(k_twD)]/ħ. The second term in Eq. (<ref>) describes the frequency shifts of the center-of-mass motion, with the frequency shifts ν_x =ν_y=αη_f_twϵ_tw^(1)ϵ_tw^(2)cos(k_twD)/W_t^2, ν_z =αη_f_twϵ_tw^(1)ϵ_tw^(2)cos(k_twD)/(2z_R^2). The third term in Eq. (<ref>) describes the interaction between the two particles mediated by the light scattering, with the particle-particle coupling strengths k_x= -αη_f_twϵ_tw^(1)ϵ_tw^(2)[(2D^-2-k_tw^2)cos(k_twD) +2k_twD^-1sin(k_twD)], k_y = αη_f_twϵ_tw^(1)ϵ_tw^(2)[3D^-2cos(k_twD) +k_twD^-1sin(k_twD)], k_z = αη_f_twϵ_tw^(1)ϵ_tw^(2)[(D^-2+k_tw^2)cos(k_twD) +k_twD^-1sin(k_twD)]. The second cross term Ĥ_cav-Gcav given by Eq.
(<ref>) represents the longitudinal binding of the two spherical nanoparticles, Ĥ_cav-Gcav = Ĥ_cav-Gcav^(1)+Ĥ_cav-Gcav^(2)≈ -4αη_fϵ_cav^2(D/R̂_0)cos(kX̂_1)cos(kX̂_2)×cos(kR̂_0)(â^†â+1/2) e_cav𝐌_f(R̂_0) e_cav, where η_f and 𝐌_f correspond to the far-field constant and the far-field tensor for the wave number k, respectively. The Hamiltonian Ĥ_cav-Gcav can be further re-expressed as Ĥ_cav-Gcav ≈ -4αη_fϵ_cav^2cos^2(kD/2)cos(kD)â^†â +ħ g_αx̂_1(â^†â+1/2) -ħ g_αx̂_2(â^†â+1/2), where the first term is the frequency-shift term and the last two terms are the optomechanical coupling terms, with the coupling strength g_α = 4αη_fϵ_cav^2[(ksin(kD) +D^-1cos(kD))cos^2(kD/2) +ksin(kD)cos(kD)/2]/ħ. The third cross term Ĥ_tw-Gcav describes the interaction between the jth tweezer field at R̂_j and the field generated by the other dipole caused by the cavity field. This term reads Ĥ_tw-Gcav = Ĥ_tw-Gcav^(1)+Ĥ_tw-Gcav^(2)≈ -1/2αη_fϵ_cav(D/R̂_0)cos(kR̂_0) E_10(R̂_1)cos(kX̂_2)×(â^†e^ik_twẐ_1+âe^-ik_twẐ_1) e_tw^(1)𝐌_f(R̂_0) e_cav -1/2αϵ_cavη_f(D/R̂_0)cos(kR̂_0) E_20(R̂_2)cos(kX̂_1)×(â^†e^ik_twẐ_2+âe^-ik_twẐ_2) e_tw^(2)𝐌_f(R̂_0) e_cav, which can be rewritten as Ĥ_tw-Gcav ≈ ħΩ_α(â^†+â) +∑_j=1,2ħ g_α x_j(â^†+â)x̂_j +∑_j=1,2iħ g_α z_j(â-â^†)ẑ_j. In Eq. (<ref>), we introduced Ω_α=-αη_fϵ_cavcos(kD)cos(kD/2)(ϵ_tw^(1)+ϵ_tw^(2))/(2ħ) and the coupling strengths g_α x_1 = 1/2ħαη_fϵ_cav[(ksin(kD) +D^-1cos(kD))(ϵ_tw^(1)+ϵ_tw^(2))cos(kD/2) +kcos(kD)ϵ_tw^(2)sin(kD/2)], g_α x_2 = -1/2ħαη_fϵ_cav[(ksin(kD) +D^-1cos(kD))(ϵ_tw^(1)+ϵ_tw^(2))cos(kD/2) +kcos(kD)ϵ_tw^(1)sin(kD/2)], g_α z_j = -1/2ħαη_fϵ_cavϵ_tw^(j)k_twcos(kD)cos(kD/2). Finally, the interaction Hamiltonian between the cavity field and the field emitted by the dipole induced by the tweezer fields reads Ĥ_cav-Gtw = Ĥ_cav-Gtw^(1)+Ĥ_cav-Gtw^(2)≈ -1/2αη_f_twϵ_cav(D/R̂_0)cos(kX̂_1) E_20(R̂_2)(âe^-ik_tw(R̂_0+Ẑ_2) +â^†e^ik_tw(R̂_0+Ẑ_2)) e_cav𝐌_f_tw(R̂_0) e_tw^(2) -1/2αϵ_cavη_f_tw(D/R̂_0)cos(kX̂_2) E_10(R̂_1)(âe^-ik_tw(R̂_0+Ẑ_1) +â^†e^ik_tw(R̂_0+Ẑ_1)) e_cav𝐌_f_tw(R̂_0) e_tw^(1), which can be further expressed as Ĥ_cav-Gtw ≈ ħ(Ω_βâ+Ω_β^*â^†) +∑_j=1,2ħ(g_β x_jâ+g_β x_j^∗â^†)x̂_j +∑_j=1,2iħ(g_β z_jâ-g_β z_j^∗â^†)ẑ_j. Here, the displacement factor of mode a is Ω_β=-αη_f_twϵ_cav(ϵ_tw^(1)+ϵ_tw^(2))cos(kD/2)e^ik_twD/(2ħ), and the coupling strengths are g_β x_1 = 1/2ħαη_f_twϵ_cav[(D^-1-ik_tw)(ϵ_tw^(1)+ϵ_tw^(2))cos(kD/2) +ϵ_tw^(2)ksin(kD/2)] e^ik_twD, g_β x_2 = -1/2ħαη_f_twϵ_cav[(D^-1-ik_tw)(ϵ_tw^(1)+ϵ_tw^(2))cos(kD/2) +ϵ_tw^(1)ksin(kD/2)] e^ik_twD, g_β z_j = -1/2ħαη_f_twϵ_cavϵ_tw^(j)k_twcos(kD/2) e^ik_twD. Based on the above analyses, we obtain the total Hamiltonian in the rotating frame [defined by the unitary transformation operator exp(-iω_twâ^†ât)] as Ĥ_tot = ħΔ^'â^†â+∑_j=1,2𝐏̂_j^2/2m+∑_j=1,2∑_Q=x,y,z1/2m_jω̃_jQ^2Q̂_j^2 +∑_j=1,2ħg̃_ax_jâ^†âx̂_j-∑_Q=x,y,z k_QQ̂_1Q̂_2 +ħ(Ω̃â+Ω̃^∗â^†) +ħR̃(x̂_1-x̂_2)+ħ∑_j=1,2[â(g̃_x_jx̂_j+ig̃_z_jẑ_j) +H.c.], where the effective driving detuning is Δ^'=Δ -2αϵ_cav^2cos^2(kD/2)/ħ -4αϵ_cav^2η_fcos^2(kD/2)cos(kD)/ħ with Δ=ω_cav-ω_tw. The jth particle exhibits a Q-mode frequency of ω̃_jQ=√(ω_jQ^2+2ν_Q/m+k_Q/m) with j=1,2 and Q=x,y,z, the displacement factors of the cavity mode and the x modes are, respectively, given by Ω̃=Ω+Ω_α+Ω_β and R̃=R_α+g_α/2, and the optomechanical couplings are given by g̃_ax_j=g_ax_j+g_α and g̃_x(z)_j=g_x(z)_j+g_α x(z)_j+g_β x(z)_j. It can be seen from Eq. (<ref>) that the y modes of the two particles are only coupled to each other and decoupled from the other modes, so we will only consider the x and z modes in the following discussions. We also note that, for conciseness, the hats on all the operators will be omitted in the following.
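Before proceeding, it is useful to check the frequency scales numerically. The bare trapping frequencies [ω_jx,ω_jy,ω_jz]=√(α/2m)ϵ_tw^(j)[√(2)W_t^-1,√(2)W_t^-1,z_R^-1] entering ω̃_jQ follow directly from the particle and tweezer parameters of this section; the following is a minimal Python sketch, in which the power and waist values appearing in the usage comment are illustrative placeholders rather than parameters taken from the text.

```python
import numpy as np

def trap_frequencies(P_tw, W_t, lam_tw, a0=90e-9, rho=2200.0, eps_r=2.07,
                     eps0=8.854e-12, c=2.998e8):
    """Bare trap frequencies (w_x, w_y, w_z) in rad/s of a single tweezer,
    [w_x, w_y, w_z] = sqrt(alpha/(2m)) * eps_tw * [sqrt(2)/W_t, sqrt(2)/W_t, 1/z_R].
    Default particle parameters (a0, rho, eps_r) follow Sec. 2 of the text."""
    V = 4.0 / 3.0 * np.pi * a0**3                               # particle volume
    m = rho * V                                                 # particle mass
    alpha = eps0 * 3.0 * (eps_r - 1.0) / (eps_r + 2.0) * V      # polarizability
    eps_tw = np.sqrt(4.0 * P_tw / (np.pi * eps0 * c * W_t**2))  # field amplitude
    z_R = np.pi * W_t**2 / lam_tw                               # Rayleigh range
    pref = np.sqrt(alpha / (2.0 * m)) * eps_tw
    return pref * np.sqrt(2.0) / W_t, pref * np.sqrt(2.0) / W_t, pref / z_R

# Example (placeholder values): trap_frequencies(P_tw=0.5, W_t=0.7e-6, lam_tw=1064e-9)
# yields trap frequencies of a few hundred kHz in ordinary frequency w/(2*pi).
```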
§ SIMULTANEOUS GROUND-STATE COOLING OF THE X-DIRECTION MOTIONS For cooling of the x-direction (along the cavity axis) motion of a single levitated nanoparticle, the best locations of the particle are the nodes of the cavity mode, |sin(kx_0)|=1 <cit.>. Below we consider that the two particles are located at x_10=D/2 and x_20=-D/2, which satisfy sin(kx_10)=1 and sin(kx_20)=-1. To cool the mechanical modes, we consider the red-sideband resonance regime: Δ=ω_cav-ω_tw=ω_m. Concretely, we assume that the tweezer laser has the wavelength λ_tw=1064 nm and the trapping frequency of the particles is ω_m/2π∼100 kHz. Therefore, the driving frequency is much larger than the resonance frequency of the oscillator, ω_tw≫ω_m; then the wave number of the cavity field is approximately equal to that of the tweezer field, k≈k_tw, and we can make the following approximations: sin(k_twD/2)≈1, cos(k_twD/2)≈0, cos(kD)≈cos(k_twD)≈-1, sin(kD)≈sin(k_twD)≈0, and e^-ik_twD≈e^ik_twD≈-1. In this case, the Hamiltonian of the system is reduced to H_tot = ħΔ a^†a+∑_j=1,2(P_jx^2/2m+mΩ_j^2x_j^2/2)+ħ R(x_1-x_2) +∑_j=1,2ħ g_j(a^†+a)x_j -k_xx_1x_2, where the x-mode oscillation frequency of the jth nanoparticle is given by Ω_j^2=(αϵ_tw^(j)^2/W_t^2-2αη_f_twϵ_tw^(1)ϵ_tw^(2)/W_t^2 +k_x)/m, the coupling strengths are given by g_1 =αϵ_cav[ϵ_tw^(1)-(η_f+η_f_tw)ϵ_tw^(2)]k/(2ħ), g_2 =-αϵ_cav[ϵ_tw^(2)-(η_f+η_f_tw)ϵ_tw^(1)]k/(2ħ), and the displacement factor is R=-αη_f_twϵ_tw^(1)ϵ_tw^(2)/(ħ D). We see from Hamiltonian (<ref>) that there exist bilinear couplings between the cavity field and the x-direction motions, that the two harmonic oscillators are coupled with each other via the x-x interaction, and that both oscillators are displaced in the x direction. For analyzing the cooling of the x-direction motion, below we will work in the displaced representation of the system, such that the excitations are associated with the fluctuations. For convenience, we introduce the dimensionless position and momentum operators √(2)q_j=1,2=x_j/x_j,zpf and √(2)p_j=1,2=P_jx/p_j,zpf, where the zero-point motions are x_j,zpf=√(ħ/(2mΩ_j)) and p_j,zpf=√(mΩ_jħ/2). We also introduce the quadrature operators X=(a^†+a)/√(2) and Y=i(a^†-a)/√(2) for the cavity field. Based on Eq. (<ref>), we can obtain the quantum Langevin equations for the system as q̇_1 = Ω_1p_1, ṗ_1 = -Ω_1q_1-√(2)G_1X-R_1+G_xq_2-γ_1p_1+f_th^(1), q̇_2 = Ω_2p_2, ṗ_2 = -Ω_2q_2-√(2)G_2X+R_2+G_xq_1-γ_2p_2+f_th^(2), Ẋ = Δ Y-κ X+√(2κ)X_in, Ẏ = -Δ X-κ Y-√(2)∑_j=1,2G_jq_j+√(2κ)Y_in, where κ and γ_j are the decay rates of the cavity mode and the x-direction motion of the jth particle, respectively. In Eqs. (<ref>), we introduce G_j=√(2)g_jx_j,zpf, G_x=2k_xx_1,zpfx_2,zpf/ħ, and R_j=√(2)Rx_j,zpf. Here f_th^(j) is the stochastic thermal-noise operator corresponding to the x-mode motion of the jth particle, which is determined by the zero average value ⟨ f_th^(j)(t)⟩ =0, j=1,2, and the correlation function ⟨ f_th^(j)(t) f_th^(j^')(t^')⟩ =δ_jj^'γ_j/Ω_j∫ e^-iω(t-t^')ω[coth(ħω/2k_BT_j) +1] dω/2π, where k_B is the Boltzmann constant, and T_j is the temperature of the thermal bath associated with the x_j mode.
We assume that these mechanical modes are connected to high-temperature reservoirs (k_BT_j≫ħΩ_j), so we can take coth[ħΩ_j/(2k_BT_j)]+1≈ 2k_BT_j/(ħΩ_j) in the high-temperature limit. The stochastic noise is then reduced to a delta-correlated noise, ⟨ f_th^(j)(t) f_th^(j^')(t^') +f_th^(j^')(t^') f_th^(j)(t)⟩≈ 2γ_j(2n̅_j,th+1)δ(t-t^')δ_jj^', where n̅_j,th=[exp[ħΩ_j/(k_BT_j)]-1]^-1≈ k_BT_j/(ħΩ_j) is the thermal occupation number of the jth thermal bath. In addition, X_in=(a_in^†+a_in)/√(2) and Y_in=i(a_in^†-a_in)/√(2) are the optical noise operators, which have zero average values, ⟨X_in⟩=0 and ⟨Y_in⟩=0. The correlation functions of these optical noise operators are given by <cit.> ⟨X_in(t)X_in(t^')⟩ =1/2δ(t-t^'), ⟨Y_in(t)Y_in(t^')⟩ =1/2δ(t-t^'), ⟨X_in(t)Y_in(t^')⟩ =i/2δ(t-t^'), ⟨Y_in(t)X_in(t^')⟩ =-i/2δ(t-t^'). To work in the displaced representation, we re-express Eqs. (<ref>)–(<ref>) around the steady-state values by writing the operators O∈{q_j=1,2,p_j=1,2,X,Y} as the sum of an average value and a quantum fluctuation: O=⟨ O⟩ +δ O. Then, the Langevin equations can be separated into the semi-classical equations of motion and the equations of motion for the quantum fluctuations. The latter can be written as δq̇_1 =Ω_1δ p_1, δṗ_1 =-Ω_1δ q_1-√(2)G_1δ X+G_xδ q_2-γ_1δ p_1+f_th^(1), δq̇_2 =Ω_2δ p_2, δṗ_2 =-Ω_2δ q_2-√(2)G_2δ X+G_xδ q_1-γ_2δ p_2+f_th^(2), δẊ =Δδ Y-κδ X+√(2κ)δ X_in, δẎ =-Δδ X-κδ Y-∑_j=1,2√(2)G_jδ q_j+√(2κ)δ Y_in. By introducing the operator vector 𝐮(t)=(δ q_1,δ p_1,δ q_2,δ p_2,δ X,δ Y)^T and the noise operator vector 𝐍(t) =(0, f_th^(1)(t), 0, f_th^(2)(t), √(2κ)δ X_in(t), √(2κ)δ Y_in(t))^T, Eq. (<ref>) can be expressed in the compact matrix form 𝐮̇(t) =𝐀𝐮(t) +𝐍(t), where the coefficient matrix 𝐀 is given by 𝐀=([ 0 Ω_1 0 0 0 0; -Ω_1 -γ_1 G_x 0 -√(2)G_1 0; 0 0 0 Ω_2 0 0; G_x 0 -Ω_2 -γ_2 -√(2)G_2 0; 0 0 0 0 -κ Δ; -√(2)G_1 0 -√(2)G_2 0 -Δ -κ ]). To calculate the final mean phonon numbers in these mechanical modes, we introduce the covariance matrix 𝐕 defined by the matrix elements 𝐕_nm=1/2⟨𝐮_n(∞)𝐮_m(∞)+𝐮_m(∞)𝐮_n(∞)⟩ for n,m=1-6. The covariance matrix 𝐕 is determined by the Lyapunov equation <cit.> 𝐀𝐕+𝐕𝐀^T=-𝐐, where 𝐐 is the noise correlation matrix, defined by the elements 𝐐_nm=1/2⟨𝐍_n(t)𝐍_m(t^')+𝐍_m(t)𝐍_n(t^')⟩ for n,m=1-6. Based on Eqs. (<ref>) and (<ref>), the noise correlation matrix can be obtained as 𝐐=diag[0,γ_1(2n̅_1,th+1),0,γ_2(2n̅_2,th+1),κ,κ]. The final mean phonon numbers of the two mechanical modes can be expressed as n̅_j =1/2[⟨δ q_j^2⟩ +⟨δ p_j^2⟩ -1], j=1,2, where the stationary variances of the mechanical modes are given by the corresponding diagonal matrix elements of the covariance matrix: ⟨δ q_1^2⟩ =𝐕_11, ⟨δ p_1^2⟩=𝐕_22, ⟨δ q_2^2⟩ =𝐕_33, ⟨δ p_2^2⟩=𝐕_44. Therefore, the final mean phonon numbers of the two mechanical modes can be obtained by solving the Lyapunov equation.
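As a numerical illustration of the above recipe, the steady-state covariance matrix and the final mean phonon numbers can be computed with a standard Lyapunov solver. The following is a minimal Python sketch assuming SciPy is available; the drift matrix follows the expression for 𝐀 and the noise matrix follows that for 𝐐 above, with all parameter values left to the user.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def final_phonon_numbers(Omega1, Omega2, gamma1, gamma2,
                         G1, G2, Gx, Delta, kappa, n1_th, n2_th):
    """Steady-state mean phonon numbers from A V + V A^T = -Q.
    Parameter names follow the drift matrix A and noise matrix Q in the text."""
    s2 = np.sqrt(2.0)
    A = np.array([
        [0.0,      Omega1,  0.0,      0.0,     0.0,     0.0],
        [-Omega1, -gamma1,  Gx,       0.0,    -s2*G1,   0.0],
        [0.0,      0.0,     0.0,      Omega2,  0.0,     0.0],
        [Gx,       0.0,    -Omega2,  -gamma2, -s2*G2,   0.0],
        [0.0,      0.0,     0.0,      0.0,    -kappa,   Delta],
        [-s2*G1,   0.0,    -s2*G2,    0.0,    -Delta,  -kappa]])
    # Stability requires all eigenvalues of A to have negative real parts.
    assert np.all(np.real(np.linalg.eigvals(A)) < 0.0)
    Q = np.diag([0.0, gamma1*(2.0*n1_th + 1.0), 0.0,
                 gamma2*(2.0*n2_th + 1.0), kappa, kappa])
    V = solve_continuous_lyapunov(A, -Q)        # solves A V + V A^T = -Q
    n1 = 0.5 * (V[0, 0] + V[1, 1] - 1.0)        # (<dq1^2> + <dp1^2> - 1)/2
    n2 = 0.5 * (V[2, 2] + V[3, 3] - 1.0)
    return n1, n2
```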
In the coupled cavity-levitated-nanoparticle system, the powers of the optical tweezers affect both the resonance frequencies of the mechanical modes and the coupling strengths between the cavity mode and the mechanical modes. Below, we analyze the dependence of the cooling efficiency of the x modes of the two nanoparticles on the powers of the two optical tweezers. In Fig. <ref>, we plot the final mean phonon numbers n̅_1 and n̅_2 as functions of the powers P_tw^(1) and P_tw^(2). We can see from Fig. <ref> that the cooling of the two modes x_1 and x_2 is strongly suppressed when the two tweezers have the same power. The cooling performance becomes better when the working point deviates from the diagonal line P_tw^(1)=P_tw^(2). This phenomenon can be well explained based on the dark-mode effect <cit.>. From the expressions of Ω_j and G_j, we know that, when the powers of the two optical tweezers are equal, i.e., P_tw^(1)=P_tw^(2), the two modes x_1 and x_2 have the same resonance frequency, Ω_1=Ω_2, and the optomechanical coupling strengths are equal but with opposite signs, G_1=-G_2. In the following, we analyze the dark-mode effect in the system under these parameters. Using the dimensionless operators, the Hamiltonian characterizing the quantum fluctuations can be written as H_lin/ħ =Δ a^†a+∑_j=1,2[Ω_j/2(p_j^2+q_j^2) +G_j(a^†+a)q_j]-G_xq_1q_2. To clearly see the dark-mode effect in the system, we define two hybrid modes of the two mechanical modes as q_+ =(G_1q_1+G_2q_2)/√(G_1^2+G_2^2), p_+=(G_1p_1+G_2p_2)/√(G_1^2+G_2^2), q_- =(G_1q_2-G_2q_1)/√(G_1^2+G_2^2), p_-=(G_1p_2-G_2p_1)/√(G_1^2+G_2^2). In the representation of the two new hybrid modes, the Hamiltonian H_lin can be expressed as H_lin/ħ = Δ a^†a+Ω_+/2(q_+^2+p_+^2) +Ω_-/2(q_-^2+p_-^2)+G_qq_+q_-+G_pp_+p_-+G_+(a^†+a)q_+ -G_x(G_1G_2q_+^2-G_1G_2q_-^2)/(G_1^2+G_2^2), where the resonance frequencies of the two hybrid modes are Ω_+=(Ω_1G_1^2+Ω_2G_2^2)/(G_1^2+G_2^2), Ω_-=(Ω_1G_2^2+Ω_2G_1^2)/(G_1^2+G_2^2). In addition, the optomechanical coupling strength between the cavity mode a and the hybrid mode q_+ is given by G_+=√(G_1^2+G_2^2), and the other two coupling strengths between the two modes q_± are given by G_q =[(Ω_2-Ω_1)G_1G_2-G_x(G_1^2-G_2^2)]/(G_1^2+G_2^2), G_p =(Ω_2-Ω_1)G_1G_2/(G_1^2+G_2^2). It can be seen from Eq. (<ref>) that, when Ω_1=Ω_2 and G_1=-G_2, we have G_q=G_p=0, and thus the mode q_- is decoupled from both the mode q_+ and the cavity mode a. In this case, the mode q_- becomes a dark mode, and it cannot be cooled via the cavity-field cooling channel. In order to realize the simultaneous ground-state cooling of the two modes q_1 and q_2, we can take different optical tweezer powers, i.e., P_tw^(1)≠ P_tw^(2); then the dark-mode effect is broken. In Fig. <ref>(a), we plot the final mean phonon numbers n̅_1 and n̅_2 for the two mechanical modes as functions of the scaled detuning Δ/Ω_1 in the nondegenerate-mechanical-mode case, Ω_2≈ 0.75Ω_1. In this case, we obtain the mechanical frequencies Ω_x,y,z/2π≈(324,366,130) kHz for the center-of-mass motion of particle 1. In this system, the nanoparticle is levitated in a vacuum, and hence it is highly isolated from the environment, resulting in a high Q factor exceeding 10^8 <cit.>. Under these parameters, the simultaneous ground-state cooling of the two mechanical modes can be realized (n̅_1,n̅_2<1). In particular, the optimal cooling of the mode x_j of the jth particle appears around the red-sideband resonance Δ/Ω_j≈1. Note that the slight shifts of the resonance points are caused by the couplings among the cavity mode and the two mechanical modes. We point out that the present cooling scheme is essentially sideband cooling. Therefore, we need to investigate the dependence of the cooling performance on the sideband-resolution condition. To this end, we plot in Fig. <ref>(b) the final mean phonon numbers n̅_1 and n̅_2 as functions of the scaled cavity-field decay rate κ/Ω_1.
Here we can see that the final mean phonon numbers n̅_1 and n̅_2 first decrease and then increase as the decay rate κ increases. In this system, there exists a coupling between the two mechanical modes with the coupling strength G_x. Below, we investigate how the G_x coupling affects the cooling results of the two mechanical modes. First, we point out that the particle-particle coupling strength used in Fig. <ref> is G_x≈-0.046Ω_1, with the distance between the two particles being D≈2.5λ. It can be confirmed from Fig. <ref> that the chosen parameters satisfy the far-field approximation well. In addition, it can be seen from Eq. (<ref>) that the particle-particle coupling strength G_x can be adjusted by the powers P_tw^(1) and P_tw^(2) of the two tweezers and the distance D between the two particles. To further elucidate this point, in Fig. <ref> we plot the particle-particle coupling strength G_x versus the powers P_tw^(1) and P_tw^(2) of the two tweezers. Figure <ref> shows that the magnitude of G_x increases with the two powers P_tw^(1) and P_tw^(2), which is consistent with the phenomenon that the optical binding force between the two levitated particles increases with the powers of the two tweezers. Moreover, we find that G_x can be adjusted from -41.8 kHz to -110.2 kHz, and the scaled coupling G_x/Ω_1∈[-0.1,-0.03]. In addition, the coupling strength is related to the size of the nanoparticles. Since the particle-particle coupling provides a channel for the exchange of thermal excitations between the two mechanical modes x_1 and x_2, it is interesting to analyze the dependence of the cooling results on the coupling strength G_x. In Fig. <ref>, we plot the final mean phonon numbers n̅_1 and n̅_2 as functions of the scaled particle-particle coupling strength G_x/Ω_1. Figure <ref> shows that both the final mean phonon numbers n̅_1 and n̅_2 of particles 1 and 2 increase with the absolute value of the particle-particle coupling |G_x|, which means that increasing |G_x| will reduce the cooling efficiency of the two modes x_1 and x_2. § SIMULTANEOUS COOLING OF X- AND Z-DIRECTION MOTIONS In Sec. <ref>, we considered the special case where the positions of the two particles take special values, resulting in the decoupling of the cavity mode a from the z-direction motions of the two particles. However, when the two particles are not located at these special positions, the cavity mode a will couple to the z-direction motions of the two particles. In the following, we will analyze the cooling of both the x- and z-direction motions of the two particles in this case. For a general case, the system is described by Hamiltonian (<ref>). Here, there exist nonlinear coupling terms between the cavity mode and the x-direction motions of the two particles. In particular, we consider the case where the driving [the Ω̃ term in Eq. (<ref>)] of the cavity mode is strong enough that we can linearize the system and obtain the linearized Langevin equations as 𝐮̇^'(t) =𝐀^'𝐮^'(t) +𝐍^'(t). Here the fluctuation operator vector is 𝐮^'(t)= (δ q_1x, δ p_1x, δ q_2x, δ p_2x, δ q_1z, δ p_1z, δ q_2z, δ p_2z, δ X, δ Y)^T, where the mechanical quadratures are introduced as √(2)δ q_jx=x_j/x_j,zpf, √(2)δ p_jx=P_jx/p_jx,zpf, √(2)δ q_jz=z_j/z_j,zpf, and √(2)δ p_jz=P_jz/p_jz,zpf with j=1,2. In Eq.
(<ref>), the noise operator vector is defined by 𝐍^'(t) = (0, f_th^(1x)(t), 0, f_th^(2x)(t), 0, f_th^(1z)(t), 0, f_th^(2z)(t), √(2κ)δ X_in(t), √(2κ)δ Y_in(t))^T, and the coefficient matrix is given by 𝐀^'=([ 0 ω̃_1x 0 0 0 0 0 0 0 0; -ω̃_1x -γ_1x G_x 0 0 0 0 0 -√(2)A_1 √(2)B_1; 0 0 0 ω̃_2x 0 0 0 0 0 0; G_x 0 -ω̃_2x -γ_2x 0 0 0 0 -√(2)A_2 √(2)B_2; 0 0 0 0 0 ω̃_1z 0 0 0 0; 0 0 0 0 -ω̃_1z -γ_1z G_z 0 -√(2)C_1 -√(2)D_1; 0 0 0 0 0 0 0 ω̃_2z 0 0; 0 0 0 0 G_z 0 -ω̃_2z -γ_2z -√(2)C_2 -√(2)D_2; -√(2)B_1 0 -√(2)B_2 0 -√(2)D_1 0 -√(2)D_2 0 -κ Δ̃; -√(2)A_1 0 -√(2)A_2 0 -√(2)C_1 0 -√(2)C_2 0 -Δ̃ -κ ]). In Eq. (<ref>), we have defined G_x=2k_xx_1,zpfx_2,zpf/ħ, G_z=2k_zz_1,zpfz_2,zpf/ħ, G_x_j=√(2)g̃_x_jx_j,zpf, G_z_j=i√(2)g̃_z_jz_j,zpf, and G_ax_j=√(2)g̃_ax_j⟨ a^†⟩ x_j,zpf; then the linearized optomechanical-coupling strengths are given by G̃_jx=G_x_j+G_ax_j and G̃_jz=G_z_j. These complex coupling strengths can be divided into real and imaginary parts, namely, G̃_jx=A_j+iB_j and G̃_jz=C_j+iD_j, where A_j, B_j, C_j, and D_j have been introduced in Eq. (<ref>). These linearized coupling strengths depend on the semi-classical motion, which is governed by the semi-classical equations of motion. In the steady-state case, the average values of the system operators can be obtained as ⟨ a⟩ = (iΩ̃^∗+iG_x_1^∗⟨ x_1⟩+iG_x_2^∗⟨ x_2⟩+iG_z_1^∗⟨ z_1⟩+iG_z_2^∗⟨ z_2⟩)/(-iΔ̃-κ), ⟨ x_1⟩ = (-G_ax_1⟨ a⟩+G_x⟨ x_2⟩-R̃_1-2Im[G_x_1⟨ a⟩])/ω̃_1x, ⟨ x_2⟩ = (-G_ax_2⟨ a⟩+G_x⟨ x_1⟩ +R̃_2-2Im[G_x_2⟨ a⟩])/ω̃_2x, ⟨ z_1⟩ = (G_z⟨ z_2⟩-2Im[G_z_1⟨ a⟩])/ω̃_1z, ⟨ z_2⟩ = (G_z⟨ z_1⟩-2Im[G_z_2⟨ a⟩])/ω̃_2z, where Δ̃=Δ^'+g̃_ax_1x_1,zpf⟨ x_1⟩ +g̃_ax_2x_2,zpf⟨ x_2⟩ and R̃_j=√(2)R̃x_j,zpf. Based on the linearized Langevin equations, we can derive an effective Hamiltonian as H_lin^'/ħ = Δ̃δ a^†δ a+∑_l[ω̃_l/2(δ q_l^2+δ p_l^2) +(G̃_lδ a+G̃_l^∗δ a^†)δ q_l] -G_xδ q_1xδ q_2x-G_zδ q_1zδ q_2z, where l=1x, 2x, 1z, 2z. It can be seen from Eq. (<ref>) that the cavity mode δ a is coupled to the four modes δ q_1x, δ q_1z, δ q_2x, and δ q_2z. Meanwhile, the modes δ q_1x and δ q_1z are coupled to the modes δ q_2x and δ q_2z, respectively. To investigate the dependence of the cooling performance of the four mechanical modes on the powers of the two tweezers, in Fig. <ref> we plot the final mean phonon numbers n̅_1x, n̅_1z, n̅_2x, and n̅_2z as functions of P_tw^(2) when P_tw^(1)=0.8 W. Here, we can see that none of the modes can be cooled around the identical-power point P_tw^(1)≈ P_tw^(2). This phenomenon can be explained based on the dark-mode effect. In this five-mode system, when P_tw^(1)=P_tw^(2), we have ω̃_1x=ω̃_2x and ω̃_1z=ω̃_2z. Meanwhile, the coupling strengths satisfy the relations G_x_1=-G_x_2, G_ax_1=-G_ax_2, and G_z_1=G_z_2. To analyze the dark-mode effect, we introduce the creation and annihilation operators b_l^†=(δ q_l-iδ p_l)/√(2) and b_l=(δ q_l+iδ p_l)/√(2) of these mechanical modes. We further define four hybrid mechanical modes as B_1+= (G̃_1xb_1x+G̃_2xb_2x)/√(|G̃_1x|^2+|G̃_2x|^2), B_1-=(G̃_2x^*b_1x-G̃_1x^*b_2x)/√(|G̃_1x|^2+|G̃_2x|^2), B_2+= (G̃_1zb_1z+G̃_2zb_2z)/√(|G̃_1z|^2+|G̃_2z|^2), B_2-=(G̃_2z^*b_1z-G̃_1z^*b_2z)/√(|G̃_1z|^2+|G̃_2z|^2). In the hybrid-mode representation, the Hamiltonian (<ref>) can be re-expressed in a new form, which is not presented here because of its complicated form. By analyzing the Hamiltonian in the hybrid-mode representation, we find that the dark-mode effect appears under the conditions ω̃_1x(z)=ω̃_2x(z) and G̃_1x(z)^2=G̃_2x(z)^2.
Accordingly, in the identical-power case under consideration, the x(z) modes of the two particles have the same frequency ω̃_1x(z)=ω̃_2x(z) and the same absolute value of the coupling strength |G̃_1x(z)|=|G̃_2x(z)|, and then the Hamiltonian is reduced to H_lin^'/ħ = Δδ a^†δ a+∑_j=1,2ω̃_j(B_j+^†B_j++B_j-^†B_j-)+∑_j=1,2(-1)^j[(ξ_j B_j+^†+ξ_j^* B_j+)^2 -(ξ_j^*B_j-^†+ξ_j B_j-)^2]+[G̃_1x(ζ_1 B_1++ζ_1^* B_1+^†)a+G̃_1z(ζ_2 B_2++ζ_2^* B_2+^†)a+H.c.], where we have ignored the constant term. In Eq. (<ref>), the normalized resonance frequencies are ω̃_1=ω̃_1x and ω̃_2=ω̃_1z, and the other parameters are defined as ζ_1=G_x/2G̃_1x^*, ζ_2=G_z/2G̃_1z, ξ_1=√(2)G̃_1x^*/|G̃_1x|, and ξ_2=√(2)G̃_1z^*/|G̃_1z|. We can see from Eq. (<ref>) that the two modes B_1- and B_2- are decoupled from the other modes, and hence become dark modes. Therefore, the cooling of the four mechanical modes will be significantly suppressed. In order to break the dark-mode effect, we need to consider the case P_tw^(1)≠ P_tw^(2), in which case we have |g_z|>|g_x| and ω̃_z<ω̃_x for our parameters. In the dark-mode-breaking case, we investigate the influence of the detuning on the cooling efficiency of these motions. In Fig. <ref>(a), we plot the final mean phonon numbers n̅_1x, n̅_2x, n̅_1z, and n̅_2z in the four mechanical modes versus the scaled detuning Δ/ω̃_1x. We find in Fig. <ref>(a) that the simultaneous ground-state cooling of the four mechanical modes can be realized. In particular, the cooling performance of the x-direction motions is better than that of the two modes z_1 and z_2. Moreover, we can see from Fig. <ref>(a) that, for the two modes x_1 and x_2, the optimal cooling appears around Δ/ω̃_1x≈1 and Δ/ω̃_2x≈0.75, respectively, corresponding to the red-sideband resonances. In this case, the effective mode temperatures of the x-direction motions of the two nanoparticles are cooled to T_x∼5×10^-3 mK (corresponding to ω̃_x∼2.4 MHz), and the two z-mode mechanical oscillators are simultaneously cooled to T_z∼7×10^-3 mK (corresponding to ω̃_z∼0.9 MHz). We also investigate the dependence of the final mean phonon numbers n̅_1x, n̅_2x, n̅_1z, and n̅_2z on the scaled decay rate κ/ω̃_1x of the cavity field, as shown in Fig. <ref>(b). Here, we can see that the simultaneous ground-state cooling of the four modes can be realized. In particular, the cooling performances of the two x-direction modes x_1 and x_2 are better than those of the two z-direction modes z_1 and z_2. Meanwhile, we find that the main cooling region lies in the resolved-sideband regime. § CONCLUSION In conclusion, we have developed a theoretical model for describing the simultaneous ground-state cooling of the motions of two levitated nanoparticles trapped in a cavity via coherent scattering. We have found that, different from the single-levitated-particle case, the scattered light will induce a mechanical effect between the particles, which shifts the equilibrium positions of the particles and causes a coupling between the two particles. We have derived the Hamiltonian of the system and analyzed the interactions in various cases. When the two nanoparticles are located at the nodes of the cavity, the system is reduced to a three-mode loop-coupled model, in which the cavity mode is coupled to the x-direction motional modes of the two particles, and the two x modes are coupled with each other via the position-position coupling.
In this case, we have found that the dark-mode effect appears when the two tweezers have the same power, in which case the effective cooling of the two mechanical oscillations is suppressed. In particular, the simultaneous ground-state cooling of the x-direction motions of the two particles can be realized by breaking the dark-mode effect. In addition, when the particles are not placed at the nodes, the system is reduced to a five-mode model, in which both the x- and z-direction motions are coupled to the cavity mode, and there exist both the x-x coupling and the z-z coupling between the two mechanical modes. In this case, we have also found that the dark-mode effect exists in the identical-power case, and that both the x- and z-direction motions can be significantly cooled by breaking the dark-mode effect. This work paves the way to the quantum manipulation of multiple levitated nanoparticles. J.-Q.L. was supported in part by the National Natural Science Foundation of China (Grants No. 12175061, No. 12247105, and No. 11935006), the Science and Technology Innovation Program of Hunan Province (Grant No. 2021RC4029), and the Hunan Provincial Major Science and Technology Program (Grant No. 2023ZJ1010). Aspelmeyer2014 M. Aspelmeyer, T. J. Kippenberg, and F. Marquardt, Cavity optomechanics, Rev. Mod. Phys. 86, 1391 (2014). MAPR2014 M. Metcalfe, Applications of Cavity Optomechanics, Appl. Phys. Rev. 1, 031105 (2014). JRPP2020 J. Millen, T. S. Monteiro, R. Pettit, and A. N. Vamivakas, Optomechanics with levitated particles, Rep. Prog. Phys. 83, 026401 (2020). CSC2021 C. Gonzalez-Ballestero, M. Aspelmeyer, L. Novotny, R. Quidant, and O. Romero-Isart, Levitodynamics: Levitation and control of microscopic objects in vacuum, Science 374, eabg3027 (2021). Gaxv2307 G. Winstone, M. Bhattacharya, A. A. Geraci, T. Li, P. J. Pauzauskie, and N. Vamivakas, Levitated optomechanics: A tutorial and perspective, arXiv:2307.11858. Ask1 A. Ashkin, Acceleration and Trapping of Particles by Radiation Pressure, Phys. Rev. Lett. 24, 156 (1970). Ask2 A. Ashkin and J. Dziedzic, Optical Levitation by Radiation Pressure, Appl. Phys. Lett. 19, 283 (1971). Ask3 A. Ashkin, J. M. Dziedzic, J. E. Bjorkholm, and S. Chu, Observation of a single-beam gradient force optical trap for dielectric particles, Opt. Lett. 11, 288 (1986). ARMP2009 W. D. Phillips, Nobel lecture: Laser cooling and trapping of neutral atoms, Rev. Mod. Phys. 70, 721 (1998). TN2020 T. Delord, P. Huillery, L. Nicolas, and G. Hétet, Spin-cooling of the motion of a trapped diamond, Nature (London) 580, 56 (2020). TS2010 T. Li, S. Kheifets, D. Medellin, and M. G. Raizen, Measurement of the instantaneous velocity of a Brownian particle, Science 328, 1673 (2010). US2020 U. Delić, M. Reisenbauer, K. Dare, D. Grass, V. Vuletić, N. Kiesel, and M. Aspelmeyer, Cooling of a levitated nanoparticle to the motional quantum ground state, Science 367, 892 (2020). LN2021 L. Magrini, P. Rosenzweig, C. Bach, A. Deutschmann-Olek, S. G. Hofer, S. Hong, N. Kiesel, A. Kugi, and M. Aspelmeyer, Real-time optimal quantum control of mechanical motion at room temperature, Nature (London) 595, 373 (2021). FN2021 F. Tebbenjohanns, M. L. Mattana, M. Rossi, M. Frimmer, and L. Novotny, Quantum control of a nanoparticle optically levitated in cryogenic free space, Nature (London) 595, 378 (2021). FPRA2017 F. Monteiro, S. Ghosh, A. G. Fine, and D. C. Moore, Optical levitation of 10-ng spheres with nano-g acceleration sensitivity, Phys. Rev. A 96, 063841 (2017). APRA2018 A. D. Rider, C. P. Blakemore, G. Gratta, and D. C.
Moore, Single-beam dielectric-microsphere trapping with optical heterodyne detection, Phys. Rev. A 97, 013842 (2018).YPRL2020 Y. Zheng, L.-M. Zhou, Y. Dong, C.-W. Qiu, X.-D. Chen, G.-C. Guo, and F.-W. Sun, Robust Optical-Levitation-Based Metrology of Nanoparticle’s Position and Mass, Phys. Rev. Lett. 124, 223603 (2020). RPRL2018R. Reimann, M. Doderer, E. Hebestreit, R. Diehl, M. Frimmer, D. Windey, F. Tebbenjohanns, and L. Novotny, GHz Rotation of an Optically Trapped Nanoparticle in Vacuum, Phys. Rev. Lett. 121, 033602 (2018).JPRL2018J. Ahn, Z. Xu, J. Bang, Y.-H. Deng, T. M. Hoang, Q. Han, R.-M. Ma, and T. Li, Optically Levitated Nanodumbbell Torsion Balance and GHz Nanomechanical Rotor, Phys. Rev. Lett. 121, 033603 (2018). PNAS2010 D. E. Chang, C. A. Regal, S. B. Papp, D. J. Wilson, J. Ye, O. Painter, H. J. Kimble, and P. Zoller, Cavity opto-mechanics using an optically levitated nanosphere, Proc. Natl. Acad. Sci. U.S.A. 107, 1005 (2010).NJP2010 O. Romero-Isart, M. L. Juan, R. Quidant, and J. I. Cirac, Toward quantum superposition of living organisms, New J. Phys. 12, 033015 (2010).TLNP2011 T. Li, S. Kheifets, and M. G. Raizen, Millikelvin cooling of an optically trapped microsphere in vacuum, Nat. Phys. 7, 527 (2011).APRL2010 A. A. Geraci, S. B. Papp, and J. Kitching, Short-Range Force Detection Using Optically Cooled Levitated Microspheres, Phys. Rev. Lett. 105, 101101 (2010).JPRL2012 J. Gieseler, B. Deutsch, R. Quidant, and L. Novotny, Subkelvin Parametric Feedback Cooling of a Laser-Trapped Nanoparticle, Phys. Rev. Lett. 109, 103603 (2012).NPNAS2013 N. Kiesel, F. Blaser, U. Delić, D. Grass, R. Kaltenbaek, and M. Aspelmeyer, Cavity cooling of an optically levitated submicron particle, Proc. Natl. Acad. Sci. U.S.A. 110, 14180 (2013).JNN2014 J. Millen, T. Deesuwan, P. Barker, and J. Anders, Nanoscale temperature measurements using non-equilibrium Brownian dynamics of a levitated nanosphere, Nat. Nanotechnol. 9, 425 (2014).TNP2023 T. F. Kuang, R. Huang, W. Xiong, Y. L. Zuo, X. Han, F. Nori, C.-W. Qiu, H. Luo, H. Jing, and G. Z. Xiao, Nonlinear multi-frequency phonon lasers with active levitated optomechanics, Nat. Phys. 19, 414 (2023). Uaxv1902 U. Delić, D. Grass, M. Reisenbauer, T. Damm, M. Weitz, N. Kiesel, and M. Aspelmeyer, Levitated cavity optomechanics in high vacuum, Quantum Sci. Technol. 5, 025006 (2020). IPRL2007 I. Wilson-Rae, N. Nooshi, W. Zwerger, and T. J. Kippenberg, Theory of Ground State Cooling of a Mechanical Oscillator Using Dynamical Backaction, Phys. Rev. Lett. 99, 093901 (2007). FPRL2007 F. Marquardt, J. P. Chen, A. A. Clerk, and S. M. Girvin, Quantum Theory of Cavity-Assisted Sideband Cooling of Mechanical Motion, Phys. Rev. Lett. 99, 093902 (2007).JN2011 J. Chan, T. P. M. Alegre, A. H. Safavi-Naeini, J. T. Hill, A. Krause, S. Gröblacher, M. Aspelmeyer, and O. Painter, Laser cooling of a nanomechanical oscillator into its quantum ground state, Nature (London) 478, 89 (2011).CPRL2019 C. Sommer and C. Genes, Partial Optomechanical Refrigeration via Multimode Cold-Damping Feedback, Phys. Rev. Lett. 123, 203605 (2019).LiuPRA2022 Y.-H. Liu, X.-L. Yin, J.-F. Huang, and J.-Q. Liao, Accelerated ground-state cooling of an optomechanical resonator via shortcuts to adiabaticity, Phys. Rev. A 105, 023504 (2022).RPRA2018 R. Diehl, E. Hebestreit, R. Reimann, F. Tebbenjohanns, M. Frimmer, and L. Novotny, Optical levitation and feedback cooling of a nanoparticle at subwavelength distances from a membrane, Phys. Rev. A 98, 013851 (2018).GPRL2019 G. P. Conangla, F. Ricci, M. T. 
Cuairan, A. W. Schell, N. Meyer, and R. Quidant, Optimal Feedback Cooling of a Charged Levitated Nanoparticle with Adaptive Control, Phys. Rev. Lett. 122, 223602 (2019). FPRL2019 F. Tebbenjohanns, M. Frimmer, A. Militaru, V. Jain, and L. Novotny, Cold Damping of an Optically Levitated Nanoparticle to Microkelvin Temperatures, Phys. Rev. Lett. 122, 223601 (2019). PRA2010 P. F. Barker and M. N. Shneider, Cavity cooling of an optically trapped nanoparticle, Phys. Rev. A 81, 023826 (2010). PRA2011 O. Romero-Isart, A. C. Pflanzer, M. L. Juan, R. Quidant, N. Kiesel, M. Aspelmeyer, and J. I. Cirac, Optically levitating dielectrics in the quantum regime: Theory and protocols, Phys. Rev. A 83, 013803 (2011). PNC2013 P. Asenbaum, S. Kuhn, S. Nimmrichter, U. Sezer, and M. Arndt, Cavity cooling of free silicon nanoparticles in high vacuum, Nat. Commun. 4, 2743 (2013). JPRL2015 J. Millen, P. Z. G. Fonseca, T. Mavrogordatos, T. S. Monteiro, and P. F. Barker, Cavity Cooling a Single Charged Levitated Nanosphere, Phys. Rev. Lett. 114, 123602 (2015). PPRL2016 P. Z. G. Fonseca, E. B. Aranas, J. Millen, T. S. Monteiro, and P. F. Barker, Nonlinear Dynamics and Strong Cavity Cooling of Levitated Nanoparticles, Phys. Rev. Lett. 117, 173602 (2016). APRR2022 A. Ranfagni, K. Børkje, F. Marino, and F. Marin, Two-dimensional quantum motion of a levitated nanosphere, Phys. Rev. Research 4, 033051 (2022). NPRL1902 N. Meyer, A. de los Ríos Sommer, P. Mestres, J. Gieseler, V. Jain, L. Novotny, and R. Quidant, Resolved-sideband cooling of a levitated nanoparticle in the presence of laser phase noise, Phys. Rev. Lett. 123, 153601 (2019). PPRA2009 P. Rabl, C. Genes, K. Hammerer, and M. Aspelmeyer, Phase-noise induced limitations on cooling and coherent evolution in optomechanical systems, Phys. Rev. A 80, 063819 (2009). ANJP2012 A. M. Jayich, J. C. Sankey, K. Børkje, D. Lee, C. Yang, M. Underwood, L. Childress, A. Petrenko, S. M. Girvin, and J. G. E. Harris, Cryogenic optomechanics with a Si_3N_4 membrane and classical laser noise, New J. Phys. 14, 115018 (2012). ANJP2013 A. H. Safavi-Naeini, J. Chan, J. T. Hill, S. Gröblacher, H. Miao, Y. Chen, M. Aspelmeyer, and O. Painter, Laser noise in cavity-optomechanical cooling and thermometry, New J. Phys. 15, 035007 (2013). UPRL2019 U. Delić, M. Reisenbauer, D. Grass, N. Kiesel, V. Vuletić, and M. Aspelmeyer, Cavity Cooling of a Levitated Nanosphere by Coherent Scattering, Phys. Rev. Lett. 122, 123602 (2019). DPRL2019 D. Windey, C. Gonzalez-Ballestero, P. Maurer, L. Novotny, O. Romero-Isart, and R. Reimann, Cavity-Based 3D Cooling of a Levitated Nanoparticle via Coherent Scattering, Phys. Rev. Lett. 122, 123601 (2019). CPRA2019 C. Gonzalez-Ballestero, P. Maurer, D. Windey, L. Novotny, R. Reimann, and O. Romero-Isart, Theory for cavity cooling of levitated nanoparticles via coherent scattering: Master equation approach, Phys. Rev. A 100, 013805 (2019). JNP2023 J. Piotrowski, D. Windey, J. Vijayan, C. Gonzalez-Ballestero, A. de los Ríos Sommer, N. Meyer, R. Quidant, O. Romero-Isart, R. Reimann, and L. Novotny, Simultaneous ground-state cooling of two mechanical modes of a levitated nanoparticle, Nat. Phys. 19, 1009 (2023). VPRL2000 V. Vuletić and S. Chu, Laser Cooling of Atoms, Ions, or Molecules by Coherent Scattering, Phys. Rev. Lett. 84, 3787 (2000). NC2021 A. de los Ríos Sommer, N. Meyer, and R. Quidant, Strong optomechanical coupling at room temperature by coherent scattering, Nat. Commun. 12, 276 (2021). Kaxv2305 K. Dare, J. J. Hansen, I. Coroli, A. Johnson, M. Aspelmeyer, and U.
Delić, Linear Ultrastrong Optomechanical Interaction, arXiv:2305.16226. CPRL2017 C. Marletto and V. Vedral, Gravitationally induced entanglement between two massive particles is sufficient evidence of quantum effects in gravity, Phys. Rev. Lett. 119, 240402 (2017). DQST2021 D. C. Moore and A. A. Geraci, Searching for new physics using optically levitated sensors, Quantum Sci. Technol. 6, 014008 (2021). RPRL2021 R. Zhao, A. Manjavacas, F. J. García de Abajo, and J. B. Pendry, Rotational quantum friction, Phys. Rev. Lett. 109, 123604 (2012). YO2018 Y. Arita, E. M. Wright, and K. Dholakia, Optical binding of two cooled micro-gyroscopes levitated in vacuum, Optica 5, 910 (2018). VO2021 V. Svak, J. Flajšmanová, L. Chvátal, M. Šiler, A. Jonáš, J. Ježek, S. H. Simpson, P. Zemánek, and O. Brzobohatý, Stochastic dynamics of optically bound matter levitated in vacuum, Optica 8, 220 (2021). TPRR2023 T. W. Penny, A. Pontin, and P. F. Barker, Sympathetic cooling and squeezing of two colevitated nanoparticles, Phys. Rev. Research 5, 013070 (2023). SA2020 S. Liu, Z.-q. Yin, and T. Li, Prethermalization and nonreciprocal phonon transport in a levitated optomechanical array, Adv. Quantum Technol. 3, 1900099 (2020). JPR2023 J. Yan, X. Yu, Z. V. Han, T. Li, and J. Zhang, On-demand assembly of optically-levitated nanoparticle arrays in vacuum, Photon. Res. 11, 600 (2023). YSC2023 Y. Bao, S. S. Yu, L. Anderegg, E. Chae, W. Ketterle, K.-K. Ni, and J. M. Doyle, Dipolar spin-exchange and entanglement between molecules in an optical tweezer array, Science 382, 1138 (2023). CSC2023 C. M. Holland, Y. Lu, and L. W. Cheuk, On-demand entanglement of molecules in a reconfigurable optical tweezer array, Science 382, 1143 (2023). MPRL1989 M. M. Burns, J.-M. Fournier, and J. A. Golovchenko, Optical binding, Phys. Rev. Lett. 63, 1233 (1989). VARB2006 V. Karásek, K. Dholakia, and P. Zemánek, Analysis of optical binding in one dimension, Appl. Phys. B 84, 149 (2006). KRMP2010 K. Dholakia and P. Zemánek, Colloquium: Gripped by light: Optical binding, Rev. Mod. Phys. 82, 1767 (2010). JS2022 J. Rieser, M. A. Ciampini, H. Rudolph, N. Kiesel, K. Hornberger, B. A. Stickler, M. Aspelmeyer, and U. Delić, Tunable light-induced dipole-dipole interaction between optically levitated nanoparticles, Science 377, 987 (2022). Lbook2012 L. Novotny and B. Hecht, Principles of Nano-optics (Cambridge University Press, Cambridge, England, 2012). MPRA2018 M. A. Abbassi and K. Mehrany, Inclusion of the backaction term in the total optical force exerted upon Rayleigh particles in nonresonant structures, Phys. Rev. A 98, 013806 (2018). Daxv2203 D. De Bernardis, G. Rastelli, I. Carusotto, and V. Scarani, Optical-force-mediated coupling between levitated nanospheres can go ultrastrong, arXiv:2203.10126. Cbook2000 C. W. Gardiner and P. Zoller, Quantum Noise (Springer, Berlin, 2000). DPRL2007 D. Vitali, S. Gigan, A. Ferreira, H. R. Bohm, P. Tombesi, A. Guerreiro, V. Vedral, A. Zeilinger, and M. Aspelmeyer, Optomechanical Entanglement between a Movable Mirror and a Cavity Field, Phys. Rev. Lett. 98, 030405 (2007). CNJP2008 C. Genes, D. Vitali, and P. Tombesi, Simultaneous cooling and entanglement of mechanical modes of a micromirror in an optical cavity, New J. Phys. 10, 095009 (2008). DPRA2020 D.-G. Lai, J.-F. Huang, X.-L. Yin, B.-P. Hou, W. Li, D. Vitali, F. Nori, and J.-Q. Liao, Nonreciprocal ground-state cooling of multiple mechanical resonators, Phys. Rev. A 102, 011502(R) (2020). DPRL2022 D.-G. Lai, J.-Q. Liao, A. Miranowicz, and F.
Nori, Noise-Tolerant Optomechanical Entanglement Via Synthetic Magnetism, Phys. Rev. Lett. 129, 063602 (2022).JPRA2022a J. Huang, D.-G. Lai, C. Liu, J.-F. Huang, F. Nori, and J.-Q. Liao, Multimode optomechanical cooling via general dark-mode control, Phys. Rev. A 106, 013526 (2022).Jarx2023 J. Huang, C. Liu, X.-W. Xu, and J.-Q. Liao, Dark-Mode Theorems for Quantum Networks, arXiv:2312.06274.JNP2013 J. Gieseler, L. Novotny, and R. Quidant, Thermal nonlinearities in a nanomechanical oscillator, Nat. Phys. 9, 806 (2013). | http://arxiv.org/abs/2312.15898v1 | {
"authors": [
"Yi Xu",
"Yu-Hong Liu",
"Cheng Liu",
"Jie-Qiao Liao"
],
"categories": [
"quant-ph"
],
"primary_category": "quant-ph",
"published": "20231226062335",
"title": "Simultaneous ground-state cooling of two levitated nanoparticles by coherent scattering"
} |
Application of the Green function formalism to the interplay between avalanche and multiphoton ionization induced by optical pulses Stefan Nolte January 14, 2024 =================================================================================================================================== Continuously-observed event occurrences often exhibit self- and mutually-exciting effects, which can be well modeled using temporal point processes. Beyond that, these event dynamics may also change over time, with certain periodic trends. We propose a novel variational auto-encoder to capture such a mixture of temporal dynamics. More specifically, the whole time interval of the input sequence is partitioned into a set of sub-intervals. The event dynamics are assumed to be stationary within each sub-interval, but could be changing across those sub-intervals. In particular, we use a sequential latent variable model to learn a dependency graph between the observed dimensions for each sub-interval. The model predicts the future event times by using the learned dependency graph to remove the non-contributing influences of past events. By doing so, the proposed model demonstrates higher accuracy in predicting inter-event times and event types for several real-world event sequences, compared with existing state-of-the-art neural point processes. § INTRODUCTION There has been growing interest in modeling and understanding temporal dynamics in event occurrences. For instance, modeling customer behaviors and interactions is crucial for recommendation systems and online social media, to improve resource allocation and customer experience <cit.>. These event occurrences usually demonstrate heterogeneous dynamics. On the one hand, individuals usually reciprocate in their interactions with each other (reciprocity). For example, if Alice sends an email to Bob, then Bob is more likely to send an email to Alice soon afterwards. On the other hand, long event sequences often exhibit a certain amount of periodic trends. For instance, during working time, individuals are more likely to reciprocate in the email interactions with their colleagues, but such mutually exciting effects will become weaker during non-working time, as illustrated in Fig. <ref>. Temporal point processes (TPPs), such as Hawkes processes (HPs) <cit.>, are particularly well-suited to capture the reciprocal and clustering effects in event dynamics. Nonetheless, conventional HPs cannot adequately capture the latent state-transition dynamics. Recently, neural temporal point processes (neural TPPs) have demonstrated a strong capability in capturing long-range dependencies in event sequences using neural networks <cit.>, attention mechanisms <cit.>, and neural density estimation <cit.>. These neural TPPs often use all the past events to predict a future event's occurrence time, and thus cannot remove the disturbance of non-contributing events. To mitigate this defect, some recent works <cit.> formulate neural temporal point processes by learning a static graph to explicitly capture dependencies among event types. Hence, they can improve the accuracy in predicting future event times by removing the non-contributing influences of past events via the learned graph. Nonetheless, the dependencies between event types may also change over time. Using a static graph, these neural TPPs will retrieve a dependency graph averaged over time.
To fill this gap, we make the following main contributions in this paper: * We propose to learn a dynamic graph between the event types of an input sequence, from a novel variational auto-encoder (VAE) perspective <cit.>. More specifically, we use regularly-spaced intervals to capture the different states, and assume stationary dynamics within each sub-interval. In particular, the dependency between two event types is captured using a latent variable, which is allowed to evolve over sub-intervals. We formulate the variational auto-encoder framework by encoding a latent dynamic graph among event types from observed sequences. The inter-event waiting times are decoded using log-normal mixture distributions. Via the learned graph, the non-contributing influences of past events can be effectively removed. * Our experiments demonstrate the improved accuracy of the proposed method in predicting event times and types, compared against the existing closely-related methods. The interpretability of the dynamic graph estimated by the proposed method is demonstrated with the New York Motor Vehicle Collision data. § PRELIMINARY Multivariate Point Processes. Temporal point processes (TPPs) are concerned with modeling random event sequences in the continuous-time domain. Let 𝒮≡{(t_i,v_i)}_i=1^L denote a sequence of events, with t_i≥ 0 being the timestamp and v_i∈{1,…,U} being the type of the i-th event. In addition, ℋ_t={(t_i,v_i)| t_i<t,(t_i,v_i)∈𝒮} denotes the sequence of historical events occurring up to time t. Multivariate Hawkes processes (MHPs) capture mutual excitations among event types using the conditional intensity function specified by λ^*_v(t) = μ_v + ∑_u=1^U∑_{j:t_j^u<t}α_(v,u)exp[-(t-t_j^u)/η_(v,u)], where μ_v is the base rate of the v-th event type, α_(v,u)>0 captures the instantaneous boost to the intensity due to event t_j^u's arrival, and η_(v,u)>0 determines the decay of that event's influence over time. The stationarity condition for MHPs requires α_(v,u)η_(v,u)<1. In contrast to MHPs, a mutually regressive point process (MRPP) <cit.> can capture both excitatory and inhibitory effects among event types. These parametric point processes capture a certain form of dependence on the historical events by designing the conditional intensity functions accordingly. Despite being simple and useful, these parametric point processes either suffer from approximation errors caused by model misspecification in practice, or lack the ability to capture long-range dependencies.
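For concreteness, the MHP intensity above with exponential kernels, together with a Monte Carlo estimate of the compensator Λ^*(t)=∫_0^tλ^*(s)ds appearing in TPP log-likelihoods (an integral that several of the neural TPPs discussed below must also approximate numerically), can be sketched as follows; this is a minimal Python illustration, and all function and variable names are our own.

```python
import numpy as np

def hawkes_intensity(t, v, history, mu, alpha, eta):
    """Conditional intensity lambda*_v(t) of a multivariate Hawkes process.
    history: iterable of (t_j, u_j) pairs with t_j < t; mu: (U,) base rates;
    alpha, eta: (U, U) excitation amplitudes and decay time scales."""
    lam = mu[v]
    for t_j, u in history:
        if t_j < t:
            lam += alpha[v, u] * np.exp(-(t - t_j) / eta[v, u])
    return lam

def compensator_mc(t, v, history, mu, alpha, eta, n_samples=1000, rng=None):
    """Monte Carlo estimate of Lambda*_v(t) = int_0^t lambda*_v(s) ds."""
    rng = np.random.default_rng(rng)
    s = rng.uniform(0.0, t, size=n_samples)          # uniform proposal on [0, t]
    vals = [hawkes_intensity(si, v, history, mu, alpha, eta) for si in s]
    return t * np.mean(vals)
```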
<cit.> proposes to model the cumulative conditional intensity function using neural nets, which allows the log-likelihood to be computed exactly and efficiently. However, sampling with this approach is expensive, and the derived probability density function does not integrate to one. To remedy these issues, <cit.> proposes to directly model the inter-event times using normalizing flows. The neural density estimation method <cit.> not only allows sampling and likelihood computation to be performed analytically, but also shows competitive performance in various applications, compared with other neural TPPs.

Variational Auto-Encoder. We briefly introduce the definition of the variational auto-encoder (VAE), and refer the readers to <cit.> for more properties. The VAE is one of the most successful generative models, and it allows one to straightforwardly sample from the data distribution p(𝒮). It is particularly useful for modeling high-dimensional data distributions, for which sampling with Markov chain Monte Carlo is notoriously slow. More specifically, we aim at maximizing the data log-likelihood p(𝒮) under the generative process specified by p(𝒮)=∫ p(𝒮|𝐳,θ)p(𝐳)d𝐳, where 𝐳 is the latent variable, p(𝐳) denotes the prior distribution, and the observation component p(𝒮|𝐳,θ) is parameterized by θ. Under the VAE framework, the posterior distribution q_ϕ(𝐳|𝒮) can be defined as q_ϕ(𝐳|𝒮)≡𝒩(𝐳 ; f^μ(𝒮;ϕ),f^Σ(𝒮;ϕ)), where 𝒩(·) refers to a normal distribution, and the mean f^μ(𝒮;ϕ) and covariance f^Σ(𝒮;ϕ) are parameterized by neural networks with parameters ϕ. To learn the model parameters, we maximize the evidence lower bound (ELBO) given by ℒ(ϕ,θ)= 𝖤_q_ϕ(𝐳|𝒮)[log p(𝒮|𝐳,θ)] -𝒟_KL[q_ϕ(𝐳|𝒮) || p(𝐳)], where 𝒟_KL denotes the Kullback–Leibler (KL) divergence. The first term encourages the approximate posterior to produce latent variables 𝐳 that can reconstruct the data 𝒮 as well as possible. The second term matches the approximate posterior of the latent variables to the prior distribution of the latent variables. Using the reparameterization trick <cit.>, we learn ϕ and θ by maximizing the ELBO with stochastic gradient descent aided by automatic differentiation.
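As a side illustration of the ELBO above, here is a minimal single-sample reparameterized estimate for a Gaussian posterior and standard-normal prior; the toy decoder and all dimensions are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

def gaussian_elbo_estimate(x, enc_mu, enc_log_var, decode_log_lik):
    """Single-sample ELBO estimate E_q[log p(x|z)] - KL(q || N(0, I)),
    using the reparameterization z = mu + sigma * eps."""
    eps = rng.standard_normal(enc_mu.shape)
    z = enc_mu + np.exp(0.5 * enc_log_var) * eps        # reparameterized sample
    # Closed-form KL between N(mu, diag(sigma^2)) and the standard normal prior.
    kl = 0.5 * np.sum(np.exp(enc_log_var) + enc_mu**2 - 1.0 - enc_log_var)
    return decode_log_lik(x, z) - kl

# Hypothetical toy decoder: x ~ N(Wz, I) with a fixed random W.
W = rng.standard_normal((4, 2))
log_lik = lambda x, z: -0.5 * np.sum((x - W @ z) ** 2)
print(gaussian_elbo_estimate(rng.standard_normal(4), np.zeros(2), np.zeros(2), log_lik))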
§ THE PROPOSED MODEL

Given a sequence of events 𝒮≡{(t_i,v_i)}_i=1^L, we aim to capture the complicated dependencies between event types using a dynamic graph-structured neural point process. The whole time interval of the sequence is partitioned into K regularly-spaced sub-intervals, with K specified a priori, to approximately represent the different states. We assume the latent graph among the event types changes over states, but is stationary within each sub-interval, as illustrated in Fig. <ref>(a). Specifically, let [t_k^L,t_k^R) stand for the k-th sub-interval with t_k^L being the start point and t_k^R being the end point. Let the latent variable z_(v,u)^k capture the dependence of the v-th event type on the u-th event type within the k-th sub-interval. For ease of exposition, we denote the set of events occurring within the k-th sub-interval by 𝐬^k≡{(t_i,v_i)| t_i∈[t_k^L,t_k^R)}. Note that we model the inter-event time for each event type using log-normal mixtures. Hence, we represent the event sequence of the u-th type by 𝒮_u={(t^u_i,τ_i^u)}_i=1^n^u, where t_i^u is the i-th event observed in the sequence of the u-th event type, τ_i^u=t^u_i-t^u_i-1 denotes the corresponding inter-event time, and the total number of events is L=∑_u=1^Un^u. Next we explain each component of the variational auto-encoder in the following subsections.

Prior. We assume that the dependency graph among event types evolves over sub-intervals. Hence, we use an autoregressive model to capture the prior probabilities of the latent variables {𝐳_(v,u)^k}_v,u,k. More specifically, the prior distribution of 𝐳_(v,u)^k depends on its previous state 𝐳_(v,u)^k-1 and the event sequence up to time t_k^L (all the events in the first k-1 sub-intervals): p_ϕ(𝐳|𝒮)≡∏_k=1^Kp_ϕ(𝐳^k|𝐳^1:k-1,𝐬^1:k). The prior component is specified as follows: for each event of the v-th type (t_i^v,m_i^v), where m_i^v represents the auxiliary event mark if available, we embed t_i^v and m_i^v into a fixed-dimensional vector 𝐲_v^t_i∈ℝ^D. Then, we pass the event embedding 𝐲_v^t_i through a fully-connected graph neural network (GNN) to obtain the relation embedding 𝐡^t_i_(v,u),emb between event types v and u: 𝐡_v,1^t_i =f^1_emb(𝐲_v^t_i), v→e:𝐡_(v,u),1^t_i =f_e^1([𝐡_v,1^t_i,𝐡_u,1^t_i]), e→v:𝐡_v,2^t_i =f_v^1(∑_u≠v𝐡_(v,u),1^t_i), v→e: 𝐡_(v,u),emb^t_i =f_e^2([𝐡_v,2^t_i,𝐡_u,2^t_i]), where f(·) denotes a multilayer perceptron (MLP) for each layer of the GNN, and 𝐡_v,ℓ^t_i and 𝐡_(v,u),ℓ^t_i represent the node-wise and edge-wise hidden states of the ℓ-th intermediate layer, respectively. The final output of the GNN, 𝐡_(v,u),emb^t_i, models the relation at time t_i. The GNN architecture is illustrated in Fig. <ref>(a). We then concatenate all the relation variables {𝐡_(v,u),emb^t_i} for t_i∈[t^L_k,t^R_k), and transform them into the relation state 𝐡_(v,u),emb^k of the k-th sub-interval, using an MLP: 𝐡_(v,u),emb^k =f^2_emb([𝐡_(v,u),emb^t_i]) for t_i∈[t^L_k,t^R_k). A forward recurrent neural network (RNN) is used to capture the dependence of a relation state 𝐡_(v,u),fwd^k on its current embedding 𝐡_(v,u),emb^k and its previous state 𝐡_(v,u),fwd^k-1: 𝐡_(v,u),fwd^k =RNN_fwd(𝐡_(v,u),emb^k,𝐡_(v,u),fwd^k-1). Finally, we encode 𝐡_(v,u),fwd^k into the logits of the prior distribution for 𝐳_(v,u)^k, using an MLP: p_ϕ(𝐳_(v,u)^k|𝐳^1:k-1,𝐬^1:k) =softmax(f_prior(𝐡_(v,u),fwd^k)). Fig. <ref> shows the prior distribution built upon the forward RNN.

Encoder. The posterior distribution of the latent variables q_ϕ(𝐳|𝒮) depends on both the past and future events: q_ϕ(𝐳|𝒮)≡∏_k=1^Kq_ϕ(𝐳^k|𝒮). Hence, the encoder is designed to approximate the distribution of the relation variables using the whole event sequence. To that end, a backward RNN is used to propagate the hidden states 𝐡_(v,u),bwd^k in reverse: 𝐡_(v,u),bwd^k =RNN_bwd(𝐡_(v,u),emb^k,𝐡_(v,u),bwd^k+1). Finally, we concatenate the forward state 𝐡_(v,u),fwd^k and the backward state 𝐡_(v,u),bwd^k, and transform them into the logits of the approximate posterior, using an MLP: q_ϕ(𝐳_(v,u)^k|𝒮) =softmax(f_enc([𝐡_(v,u),bwd^k,𝐡_(v,u),fwd^k])). Note that the prior and encoder share parameters, and thus the parameters of the two components are jointly denoted by ϕ.
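To make the node-to-edge and edge-to-node passes above concrete, here is a minimal sketch of the relation-embedding computation at a single timestamp; the two-layer MLPs are stubbed with single tanh layers, and all shapes and weights are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)
U, D = 3, 8                       # event types, embedding width (illustrative)
mlp = lambda d_in, d_out: (lambda h, W=rng.standard_normal((d_out, d_in)) / np.sqrt(d_in):
                           np.tanh(W @ h))
f_emb1, f_e1, f_v1, f_e2 = mlp(D, D), mlp(2 * D, D), mlp(D, D), mlp(2 * D, D)

def relation_embeddings(y):
    """y[v]: event embedding of type v at one timestamp -> h_emb[(v, u)]."""
    h1 = {v: f_emb1(y[v]) for v in range(U)}
    e1 = {(v, u): f_e1(np.concatenate([h1[v], h1[u]]))          # v -> e pass
          for v in range(U) for u in range(U) if u != v}
    h2 = {v: f_v1(sum(e1[(v, u)] for u in range(U) if u != v))  # e -> v pass
          for v in range(U)}
    return {(v, u): f_e2(np.concatenate([h2[v], h2[u]]))        # final v -> e pass
            for v in range(U) for u in range(U) if u != v}

h_emb = relation_embeddings(rng.standard_normal((U, D)))
print(len(h_emb), h_emb[(0, 1)].shape)   # U*(U-1) directed relation embeddings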
Decoder. The role of the decoder is to predict the inter-event times {τ_i^u}_i=1^n_u for each event type u. In particular, we capture the latent dynamics {ĥ_v^t_i} behind these inter-event times using a graph recurrent neural network (GRNN) specified by v→e:ĥ_(v,u)^t_i =z_(v,u)^k f_e^1([ĥ_v^t_i,ĥ_u^t_i]), for t_i∈[t^L_k,t^R_k), e→v:𝐡̃_v^t_i = ∑_u≠vĥ_(v,u)^t_i, ĥ_v^t_i+1 =GRU(𝐡̃_v^t_i,ĥ_v^t_i), where z_(v,u)^k determines how the u-th event type ĥ_u^t_i influences the v-th event type ĥ_v^t_i+1 through the relation z_(v,u)^k at time t_i+1. The latent embedding ĥ_v^t_i itself evolves over time, using a gated recurrent unit (GRU). Given the dynamic embeddings {ĥ_v^t_i}, we model the inter-event time p(τ_i^u) using a log-normal mixture model <cit.>, p(τ|ω,μ,σ)=∑_c=1^Cω_c 1/τσ_c√(2π)exp(-(logτ-μ_c)^2/2σ_c^2), where ω_c,μ_c,σ_c represent the mixture weight, the mean and the standard deviation of the c-th mixture component, respectively. In particular, the parameters of the distribution for each inter-event time τ_i^u are constructed as ω_i^u =softmax(V_ωĥ_u^t_i+β_ω), σ_i^u=exp(V_σĥ_u^t_i+β_σ), μ_i^u =V_μĥ_u^t_i+β_μ, where {V_ω,V_σ,V_μ,β_ω,β_σ,β_μ} are the learnable parameters. We use the softmax and exponential transformations to impose the sum-to-one and positivity constraints on the distribution parameters accordingly. Fig. <ref>(b) presents the GNN architecture used to construct the parameters of the log-normal mixtures in the decoder part. We assume the inter-event time τ_i^u is conditionally independent of the past events, given the model parameters. Hence, the distribution of the inter-event times under the decoder factorizes as p_θ(τ|𝐳) = ∏_u=1^U∏_i=1^n^u p(τ_i^u|θ_i^u). The decoder part of the VAE framework is illustrated in Fig. <ref>(b). Hence, we can naturally make the next event time prediction using t̂^u_i+1 = t^u_i + ∫_0^∞τ p(τ_i+1^u|θ_i+1^u)dτ.

Training. We next explain how to learn the parameters of the VAE framework for dynamic graph-structured neural point processes. The event sequences 𝒮 are passed through the GNN in the encoder to obtain the relation embeddings 𝐡^t_i_(v,u),emb for all the timestamps {t_i}_i=1^L and each pair of event types (v,u). We then concatenate all the relation embeddings and transform them into a relation state 𝐡^k_(v,u),emb for each sub-interval k. The relation states {𝐡^k_(v,u),emb} are fed into the forward and backward RNNs to compute the prior distribution p_ϕ(𝐳|𝒮) and posterior distribution q_ϕ(𝐳|𝒮). We then sample {𝐳_(v,u)^k} from the concrete (reparameterizable) approximation of the posterior distribution. The hidden states {ĥ_v^t_i} evolve through a GRNN, in which messages can only pass through the non-zero edges indicated by {𝐳_(v,u)^k}. These hidden states {ĥ_v^t_i} are used to parameterize the log-normal mixture distribution over the inter-event times. To learn the model parameters, we calculate the evidence lower bound (ELBO) as ℒ^ELBO(ϕ,θ)=𝖤_q_ϕ(𝐳^k|𝒮)[∑_u=1^U∑_i=1^n^ulog p(τ_i^u|θ_i^u)] - ∑_k=1^K 𝒟_KL[q_ϕ(𝐳^k|𝒮) || p_ϕ(𝐳^k|𝐳^1:k-1,𝐬^1:k)]. As we draw the samples {𝐳_(v,u)^k} using a reparameterizable approximation, we can calculate gradients using backpropagation and optimize the ELBO. Hereafter, we refer to the proposed model as the variational autoencoder temporal point process (VAETPP).
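To make the decoder concrete, here is a small sketch of the log-normal mixture head above: the parameter heads use the softmax/exponential transformations, and the expected inter-event time used for prediction has the closed form ∑_c ω_c exp(μ_c+σ_c^2/2) for this mixture. The weight matrices and dimensions below are hypothetical.

import numpy as np

rng = np.random.default_rng(2)
C, D = 4, 8                                  # mixture components, embedding width
V_w, V_s, V_m = (rng.standard_normal((C, D)) * 0.1 for _ in range(3))
b_w, b_s, b_m = np.zeros(C), np.zeros(C), np.zeros(C)

def mixture_params(h):
    logits = V_w @ h + b_w
    w = np.exp(logits - logits.max()); w /= w.sum()   # softmax -> sum-to-one
    sigma = np.exp(V_s @ h + b_s)                     # exp -> positivity
    mu = V_m @ h + b_m
    return w, mu, sigma

def lognormmix_pdf(tau, w, mu, sigma):
    comp = w / (tau * sigma * np.sqrt(2 * np.pi)) * \
           np.exp(-(np.log(tau) - mu) ** 2 / (2 * sigma ** 2))
    return comp.sum()

w, mu, sigma = mixture_params(rng.standard_normal(D))
print(lognormmix_pdf(1.3, w, mu, sigma))
# Expected next inter-event time: E[tau] = sum_c w_c * exp(mu_c + sigma_c^2 / 2).
print(np.sum(w * np.exp(mu + sigma ** 2 / 2)))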
§ RELATED WORK

<cit.> proposed to capture non-stationary network dynamics using continuous-time Markov chains.

Graph-Structured Temporal Point Processes. <cit.> proposed a proximal graphical event model to infer the relationships among event types, which assumes that the occurrence of an event only depends on the occurrence of its parents shortly before. <cit.> developed a geometric Hawkes process to capture correlations among multiple point processes using graph convolutional neural networks, although the inferred graph is undirected. <cit.> developed a graph-structured transformer Hawkes process between multiple point processes. It assumes that each point process is associated with a vertex of a static graph, and models the dependencies among those point processes by incorporating that graph into the attention module design. <cit.> formulated a graph-structured neural temporal point process (NTPP) by sampling a latent graph using a generator. The model parameters of the graph-structured NTPP and its associated graph can be simultaneously optimized using efficient bi-level programming. <cit.> recently also developed a variational framework for graph-structured neural point processes by generating a latent graph using intra-type history embeddings. The latent graph is then used to govern the message passing between intra-type embeddings to decode the type-wise conditional intensity function. <cit.> proposed to learn a static conditional graph between event types by incorporating prior knowledge. Additionally, <cit.> proposed to learn the latent graph structure among events via a probabilistic model. <cit.> models event propagation using a graph-biased temporal point process, with the graph specified by the following relationships among individuals on social media. <cit.> studied a variational auto-encoder framework for modeling event sequences, but did not consider capturing latent graphs among dimensions.

Dynamic Latent Graphs behind Time Series. <cit.> developed a variational autoencoder to learn a latent static graph among the entities that interact in a physical dynamical system. Following this success, <cit.> used a sequential latent variable model to infer dynamic latent graphs among the entities from discrete-time observations. Although <cit.> and the proposed method both aim to learn dynamic graphs behind multivariate sequential observations, the main difference is that our input consists of asynchronous event sequences, for which we need to determine the regularly-spaced sub-intervals and simultaneously learn a dynamic graph for each sub-interval. An interesting yet challenging direction is to automatically infer the regularly-spaced time intervals and the corresponding dynamic graph structure, which we leave to future research.

§ EXPERIMENTS

The proposed variational autoencoder temporal point process (VAETPP) is evaluated on the tasks of event time and type prediction. We used four real-world datasets to demonstrate the proposed method, compared with existing related methods:

New York Motor Vehicle Collisions (NYMVC). This dataset contains a collection of vehicle collision events occurring in New York City since April 2014. Each crash event (t_i,v_i) records a motor vehicle collision occurring in district v_i at time t_i. Specifically, during peak periods, a vehicle collision may give rise to a sequence of collisions in the same or nearby districts within a short time. Hence, it is well suited to model and predict the occurrences of these events using multivariate point processes. In addition, the influence relations among the districts may change over time, as the aforementioned triggering effects become weaker during night time compared to day time. We created each event sequence using the motor vehicle collision records between 8:00 and 23:00, and treated each three-hour window as a sub-interval. We considered the five districts (Manhattan, Brooklyn, Bronx, Queens and Staten Island) as the event types.

Stack Exchange Data. Three stack exchange datasets from different sources are included in the experiments: MathOF, AskUbuntu, and SuperUser. The stack exchange data consist of various interactions among the participants. Each event (v_i,u_i,t_i) means that at timestamp t_i, user v_i posts an answer or comment to u_i's questions or comments.
These interaction events among users usually exhibit a certain amount of clustering effects and periodic trends. For instance, some questions about popular technologies may quickly attract many answers or comments from others who share similar interests. Additionally, these triggering effects demonstrate periodic trends: users are more inclined to respond to technical topics during working days than on weekends/holidays. We consider the user who takes the action toward the other as the event type. Hence, we derived each sequence from events occurring within one week, and consider each day as a sub-interval. The details of the datasets can be found in Table <ref>.

Experimental Setup. We compared the ability of the models to predict the inter-event times τ_i^u using the historical events ℋ_t_i^u, for each event type u∈ [1,…,U], as illustrated in Fig. <ref>(b). Each real-world dataset is split into multiple event sequences. For each dataset, we choose 60% of the sequences for training, 20% for validation, and 20% for testing. For training, we maximize the ELBO in Eq. <ref> for the proposed model, and the expected log-likelihood for the other models. With the learned parameters, we measure the predictive performance of each model using its negative log-likelihood (NLL) on the validation set. Hence, the model configuration that achieves the best predictive performance can be chosen using the validation set. Finally, the NLL losses on the test set are used to compare the ability of the models to predict inter-event times. We report results averaged over ten random training/validation/testing splits. For the developed VAETPP, the dimension of the input embedding 𝐲_t_i^u is 64. For the fully-connected GNN of the encoder, f^1_emb, f_e^1, f_v^1, and f_e^2 are two-layer MLPs with 64 units for each layer and Exponential Linear Unit (ELU) activations. We use f_emb^2 to transform the concatenated hidden states within each sub-interval into one hidden state, and thus parameterize f_emb^2 using a one-layer MLP with 64 hidden units and ReLU activations. Both the forward RNN and backward RNN have 64 hidden units. We parameterize f_prior and f_enc by a one-layer MLP with 64 hidden units and Rectified Linear Unit (ReLU) activations. We set the number of edge types of the dynamic graph among event types to two, and specify the first edge type to indicate no dependency. For the decoder part, we parameterize f_e^1 using a separate two-layer MLP with 64 hidden/output units for each of the two edge types. The GRU has 64 hidden units. We chose the number of mixture components used in the log-normal mixture distribution for the VAETPP using the validation data; in the experiments, we used 16 mixture components. We also consider restricting the VAETPP to a static latent graph, denoted VAETPP (static), to validate the importance of learning dynamic graphs for capturing periodic trends in event sequences. We compared the proposed method against the following baselines:

Exponential. The conditional intensity function of the constant intensity model <cit.> is defined as λ^*(t_i)=exp(𝐯^T𝐡_i+𝐛), in which 𝐡_i denotes the event history embedding learned by an RNN, and 𝐯 and 𝐛 are the model parameters. The probability density function (PDF) of the constant intensity model is an exponential distribution, given by p^*(τ)=γexp(-γτ), where γ=exp(𝐯^T𝐡_i+𝐛).
Recurrent Marked Temporal Point Processes (RMTPP) <cit.>. The method encodes past events into historical embeddings using an RNN, and models a conditional intensity of exponential form.

Fully Neural Networks (FullyNN) <cit.>. This method captures the cumulative distribution of inter-event times using a neural network.

Log-Normal Mixture (LogNormMix) <cit.>. The method encodes the event history into embedding vectors with an RNN, and decodes the inter-event waiting time with a log-normal mixture distribution.

Transformer Hawkes Process (THP) <cit.>. The method leverages the self-attention mechanism to capture long-term dependencies in observed event sequences.

Negative Log-Likelihood Comparison. Table <ref> compares the negative log-likelihood loss of all the methods in modeling inter-event times. As expected, LogNormMix admits more flexibility compared with simple models using unimodal distributions (Gompertz/RMTPP, Exponential), and thus shows improved performance by large margins. The Transformer Hawkes process (THP) can effectively learn long-range dependencies among events, and thus achieves a lower NLL loss. The proposed VAETPP not only captures the complicated inter-event time distribution using a log-normal mixture decoder, but also further improves the inter-event time prediction by effectively removing the non-contributing effects of irrelevant past events via the dynamic dependency graph. Hence, the proposed VAETPP consistently achieves the best NLL loss values on all the datasets.

Event Prediction Comparison. We also consider the tasks of event time and type prediction in the experiments. In particular, following <cit.>, we make the next event time prediction using a linear predictor as t̂_i+1^u = 𝐖^timeθ_i^u, where θ_i^u is the history embedding updated by the VAETPP after observing the i-th event of the u-th type, and 𝐖^time∈ℝ^1× D denotes the event time predictor's parameters. The next event type prediction is 𝐩̂_i+1 =softmax(𝐖^typeθ_i^u), m̂_i+1^u = argmax_j 𝐩̂_i+1(j), where 𝐖^type∈ℝ^J× D denotes the event type predictor's parameters, and 𝐩̂_i+1(j) refers to the j-th entry of 𝐩̂_i+1∈ℝ^J. The loss functions for event time and type prediction are defined as ℒ_time(𝒮;θ) =∑_u∑_i=1^n_u (t_i^u - t̂_i^u)^2 and ℒ_type(𝒮;θ) =-∑_j=2^L 𝐦_j^Tlog(𝐩̂_j), respectively, where 𝐦_j is the one-hot encoding of the type of the j-th event. To learn the parameters of the event time and type predictors, we minimize the composite loss function min_ϕ,θ -ℒ^ELBO(𝒮;ϕ,θ) + ℒ_time(𝒮;θ) + ℒ_type(𝒮;θ), where ℒ^ELBO(𝒮;ϕ,θ) is the evidence lower bound derived in Eq. <ref>. We used the training data to learn the model parameters, and chose the best configuration according to predictive performance on the validation set. Finally, we evaluated model performance on the test set. Specifically, we predicted each held-out event (t_j,m_j) given its history. We evaluated event type prediction by accuracy and event time prediction by the Root Mean Square Error (RMSE). Tables <ref> and <ref> show the results for event time and type prediction, respectively. Our VAETPP outperforms the baselines in predicting event times and types on all the datasets.
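The prediction heads and losses above are simple to sketch; the dimensions and weights below are hypothetical illustrations.

import numpy as np

rng = np.random.default_rng(3)
D, J = 8, 5                                   # embedding width, number of types
W_time, W_type = rng.standard_normal((1, D)), rng.standard_normal((J, D))

def predict(theta):
    t_hat = float(W_time @ theta)                       # linear next-time predictor
    logits = W_type @ theta
    p_hat = np.exp(logits - logits.max()); p_hat /= p_hat.sum()
    return t_hat, p_hat, int(np.argmax(p_hat))          # time, type probs, type

def prediction_losses(t_true, type_true, theta):
    t_hat, p_hat, _ = predict(theta)
    time_loss = (t_true - t_hat) ** 2                   # squared-error term
    type_loss = -np.log(p_hat[type_true])               # cross-entropy term
    return time_loss, type_loss

# The composite objective then adds -ELBO to these two losses over all events.
print(prediction_losses(0.7, 2, rng.standard_normal(D)))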
Model Interpretability. Fig. <ref>(a) shows the relative locations of the five boroughs of New York City: Manhattan, Brooklyn, Bronx, Queens, Staten Island. The latent dynamic graphs for the five time-intervals are plotted in Fig. <ref>(b-f). From the results, we find that the influences between Manhattan, Brooklyn, Bronx and Queens are much stronger than the influences between these four areas and Staten Island. The influences between all the areas gradually become weaker during night time compared to day time. The results not only demonstrate the high interpretability of the model, but also explain why VAETPP obtains better prediction accuracy by effectively removing the influences of non-contributing historical events via its estimated dynamic graphs.

§ CONCLUSION

We have presented a novel variational auto-encoder for modeling asynchronous event sequences. To capture the periodic trends behind long sequences, we use regularly-spaced intervals to capture the different states behind the sequences, and assume stationary dynamics within each sub-interval. The dependency structure between event types is captured using latent variables, which are allowed to evolve over time to capture the time-varying graphs. Hence, the proposed model can effectively remove the influences of irrelevant past event types, and achieves better accuracy in predicting inter-event times and types, compared with other neural point processes. We plan to generalize this work to capture non-stationary network dynamics <cit.> in future research.

§ ACKNOWLEDGMENTS

The work is partially funded by Shenzhen Science and Technology Program (JCYJ20210324120011032) and Shenzhen Institute of Artificial Intelligence and Robotics for Society.
"authors": [
"Sikun Yang",
"Hongyuan Zha"
],
"categories": [
"cs.LG",
"stat.ML"
],
"primary_category": "cs.LG",
"published": "20231226151155",
"title": "Dynamic Latent Graph-Guided Neural Temporal Point Processes"
} |
Transverse electric waves in Bandos-Lechner-Sorokin-Townsend nonlinear electrodynamics
    Yang Shi    Qinyan Tan    Towe Wang
Department of Physics, East China Normal University, Shanghai 200241, China
Shanghai Institute of Spaceflight Control Technology, Shanghai 201109, China
Electronic address: [email protected]
January 14, 2024
======================================================================================

In the generalized Born-Infeld electrodynamics discovered by Bandos, Lechner, Sorokin and Townsend, we study transverse electric waves propagating perpendicular to a constant magnetic field background in a parallel-plate waveguide. The directions of propagation and polarization of the waves are perpendicular to each other, and both of them are parallel to the perfectly conducting plates. Two specific configurations are studied, in which the background magnetic field is either normal to the plates or along the polarization direction. The dispersion relation, the velocity and the cutoff frequency of the lowest-order lowest-frequency mode are calculated in both configurations. This paves the way for a promising test of the generalized Born-Infeld electrodynamics.

§ INTRODUCTION

As is well known, Maxwell electrodynamics is invariant under both conformal transformations and SO(2) electromagnetic duality transformations <cit.>. It is less known that Bialynicki-Birula electrodynamics <cit.>, which was first found as a strong-field limit of Born-Infeld electrodynamics <cit.>, is invariant under both conformal transformations and SL(2,R) electromagnetic duality transformations <cit.>. Until three years ago, it was unknown that there is a nonlinear duality-invariant conformal extension of Maxwell electrodynamics, since then dubbed Modified Maxwell (ModMax) electrodynamics <cit.>. ModMax electrodynamics was discovered by Bandos, Lechner, Sorokin and Townsend as a weak-field limit of generalized Born-Infeld electrodynamics in Ref. <cit.>, where they also proved that the only conformal and duality invariant electrodynamics theories are Bialynicki-Birula electrodynamics and ModMax electrodynamics. From then on, considerable efforts have been made to study ModMax electrodynamics, but less attention has been paid to the generalized Born-Infeld electrodynamics, which we will refer to as Bandos-Lechner-Sorokin-Townsend (BLST or BLeST) electrodynamics. As will be explained below, BLST electrodynamics is a remarkable theory unifying Maxwell, Born-Infeld, Bialynicki-Birula and ModMax theories. Like Born-Infeld electrodynamics, BLST electrodynamics preserves the SO(2) electromagnetic duality invariance but breaks the conformal invariance. Its Lagrangian density <cit.> can be written as ℒ_BLST=T-√(T^2-2Tℒ_ModMax-1/16(F_μνF̃^μν)^2) in terms of the electromagnetic field strength F_μν and its dual F̃^μν, where the Lagrangian density of ModMax electrodynamics is ℒ_ModMax=-coshγ/4 F_μνF^μν+sinhγ/4√((F_μνF^μν)^2+(F_μνF̃^μν)^2). Here T and γ are constant parameters, and they will be referred to as the Born-Infeld constant and the ModMax constant, respectively. It is easy to see that the BLST Lagrangian reduces to the Born-Infeld Lagrangian in the special case γ=0, and to the ModMax Lagrangian in the weak-field limit T→∞. In addition, the ModMax Lagrangian reduces to the Maxwell Lagrangian in the limit γ→0.
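These limits are easy to probe numerically. The sketch below evaluates the BLST Lagrangian density at given Lorentz invariants S=(1/4)F_μνF^μν and P=(1/4)F_μνF̃^μν, and checks the γ→0 (Born-Infeld) and T→∞ (ModMax) limits at an arbitrary test point; the numerical values are illustrative assumptions.

import numpy as np

def L_modmax(S, P, gamma):
    # ModMax density with S = (1/4) F_{mu nu} F^{mu nu}, P = (1/4) F_{mu nu} Ftilde^{mu nu}.
    return -np.cosh(gamma) * S + np.sinh(gamma) * np.hypot(S, P)

def L_blst(S, P, gamma, T):
    # Note (1/16)(F Ftilde)^2 = P^2 in these variables.
    return T - np.sqrt(T**2 - 2 * T * L_modmax(S, P, gamma) - P**2)

S, P = 0.3, 0.1                       # illustrative field invariants
# gamma -> 0 recovers the Born-Infeld density T - sqrt(T^2 + 2TS - P^2):
print(L_blst(S, P, 0.0, 2.0), 2.0 - np.sqrt(2.0**2 + 2 * 2.0 * S - P**2))
# T -> infinity recovers the ModMax density:
print(L_blst(S, P, 0.4, 1e8), L_modmax(S, P, 0.4))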
Similar to Born-Infeld theory, the strong-field limit T→0 of BLST theory is Bialynicki-Birula electrodynamics, a fact that should be manifest in the Hamiltonian formulation <cit.>. In this article, we are interested in a promising test of BLST electrodynamics with transverse electric (TE) waves propagating perpendicular to a constant magnetic field background in a parallel-plate waveguide, inspired by similar ideas in Born-Infeld electrodynamics <cit.>. By definition, the directions of propagation and electric field of a TE wave are perpendicular to each other, and both of them are parallel to the perfectly conducting plates. Therefore, it is convenient to choose a Cartesian coordinate system where the TE wave is propagating in the z-axis and polarized along the y-axis, and the plates are normal to the x-axis. For concreteness, the two plates are located at x=0 and x=L. Based on BLST electrodynamics, we will work out the dispersion relation of the lowest-order lowest-frequency TE mode (TE_1 mode) in two specific configurations. In the first configuration, the background magnetic field is in the x-direction, for which our calculation will be carried out in Sec. <ref>. In the second configuration, the background magnetic field is along the y-direction, of which the details will be given in Sec. <ref>. Sec. <ref> presents concluding remarks. In both configurations, the same scheme <cit.> will be employed for calculations. We first study a standing-wave solution of BLST electrodynamics between two parallel conducting plates in the presence of constant external electric and magnetic fields. The external electric field 𝐄_0 is perpendicular to both the z-axis and the external magnetic field 𝐁_0. In a convenient gauge <cit.>, the standing waves are oscillating electric and magnetic fields along the y and z axes, respectively. Performing a suitable Lorentz boost in the z-direction, we can then transform the constant external electric and magnetic fields into a constant magnetic field background 𝐁'_0 parallel to 𝐁_0. Meanwhile, the standing waves are transformed into propagating TE waves <cit.>. We work in natural units with c=ħ=1 and adopt the metric convention (+,-,-,-). Following Ref. <cit.>, we use sign conventions for field components as defined in Ref. <cit.>, Section 11.9 and denote derivatives of u as u_t=∂_t u, u_x=∂_x u, etc. Following Refs. <cit.>, we assume T>0 and γ≥0 throughout this article.

§ TE WAVES POLARIZED PERPENDICULAR TO A CONSTANT MAGNETIC FIELD BACKGROUND

§.§ Standing-wave solution

In this section, we study TE waves polarized perpendicular to a constant transverse magnetic field in a parallel-plate waveguide. For this purpose, we start with an external magnetic field in the x-direction, in addition to which there is an external electric field and an oscillating electric field along the y-direction. The waveguide is taken as two perfectly conducting plates parallel to the yz-plane, located at x=0 and x=L. For this configuration, the simplest solution for the electromagnetic field in a convenient gauge <cit.> has the form A_y(x,y,z,t)=-E_0t-B_0z+u(x,t), Φ=A_x=A_z=0 with u to be determined by field equations. We then have 𝐄=-∂_t𝐀-∇Φ=(0,E_0-u_t,0), 𝐁=∇×𝐀=(B_0,0,u_x), and in our sign conventions, 1/4F_μνF^μν=1/2(|𝐁|^2-|𝐄|^2)=1/2[B_0^2+u_x^2-(E_0-u_t)^2], 1/4F_μνF̃^μν=-𝐄·𝐁=0. It is noteworthy that, as long as F_μνF̃^μν=0, the Lagrangian density (<ref>) reduces to ℒ= T-√(T^2+T/2e^-σγF_μνF^μν)= e^-σγ(𝒯-√(𝒯^2+𝒯/2F_μνF^μν)), where we introduced the notation 𝒯=e^σγT, with σ=+1 if F_μνF^μν≥0 and σ=-1 if F_μνF^μν<0.
This result shows that the BLST Lagrangian has the same form as the Born-Infeld Lagrangian when 𝐄·𝐁=0, with the replacement T→𝒯= e^σγT. Inserting Eq. (<ref>) into Eq. (<ref>) leads to the Lagrangian density ℒ=e^-σγ{𝒯-√(𝒯^2+𝒯[B_0^2+u_x^2-(E_0-u_t)^2])}. The corresponding Euler-Lagrange equation is u_xx-u_tt+1/𝒯{[B_0^2-(E_0-u_t)^2]u_xx-(B_0^2+u_x^2)u_tt-2(E_0-u_t)u_x u_xt}=0. In the large 𝒯 limit, this equation can be solved iteratively using the Poincaré-Lindstedt method. The method has been reviewed in Ref. <cit.>. Imposing the boundary conditions u(0,t)=u(L,t)=0 and introducing the orthogonal functions s_nm(x,t) = sin(nkx+mω t)+sin(nkx-mω t)=2sin(nkx)cos(mω t), c_nm(x,t) = cos(nkx-mω t)-cos(nkx+mω t)=2sin(nkx)sin(mω t) with k=π/L, one can formally write down the solution order by order as u = u^(0)+u^(1)+⋯+u^(N)+𝒪(𝒯^-(N+1)), ω^2 = ω^2_(0)+ω^2_(1)+⋯+ω^2_(N)+𝒪(𝒯^-(N+1)), where u^(i)(x,t) (i≥0) are expected to be linear combinations of the functions s_nm(x,t) with coefficients of order 𝒯^-i. In accordance with Ref. <cit.>, we assume there are no s_nn(x,t) modes in any of the higher-order terms u^(i)(x,t) (i≥1). The lowest-order solution of Eq. (<ref>) is u^(0)(x,t)=Asin(kx)cos(ω t)=A/2s_11(x,t), ω^2_(0)=k^2, which serves as the seed solution. This guarantees that the standard results of Maxwell electrodynamics are reproduced in the limit 𝒯→∞. Substituting u=u^(0)+u^(1), ω^2=ω^2_(0)+ω^2_(1) into Eq. (<ref>) and keeping the 𝒪(1/𝒯) terms, one obtains u^(1)_xx-u^(1)_tt+A/2ω^2_(1)s_11+1/𝒯[A/2k^2(E_0^2+A^2k^2/2)s_11+A^3/8k^4(s_31-s_13)+A^2k^3E_0sin(2ω t)]+𝒪(𝒯^-2)=0. To satisfy this equation, the 𝒪(1/𝒯) terms should cancel out. It therefore follows that u^(1)(x,t)=A^3k^2/64𝒯(s_31+s_13)-A^2k/4𝒯E_0sin(2ω t), ω^2_(1)=-k^2/𝒯(E_0^2+A^2k^2/2). Altogether, we have the 𝒪(1/𝒯) solution of Eq. (<ref>), u = A/2s_11+A^3k^2/64𝒯(s_31+s_13)-A^2k/4𝒯E_0sin(2ω t)+𝒪(𝒯^-2), ω^2 = k^2-k^2/𝒯(E_0^2+A^2k^2/2)+𝒪(𝒯^-2). This procedure can be continued to higher orders as demonstrated in Ref. <cit.>. Here are some remarks about the boundary conditions u(0,t)=u(L,t)=0. First, these conditions are equivalent to the statement that any nonsingular solution for u(x,t) (0≤ x≤ L) can be expanded in the orthogonal functions s_nm(x,t) and c_nm(x,t) (n≥1). Second, although the boundary conditions are satisfied by u^(0)(x,t), they are violated by the term sin(2ω t) in u^(1)(x,t). This term is suppressed by 1/𝒯 and can be interpreted as a time-periodic uniform electric field. It is in agreement with the fact that the boundary conditions are violated by the last term in the Euler-Lagrange equation (<ref>), which is also suppressed by 1/𝒯. One can check this by inserting u(x,t)∝sin(nkx) into Eq. (<ref>). Third, in practice, electric fields induced by the waveguide walls will partially cancel the external electric field 𝐄_0 in the neighborhoods of the surfaces, therefore Eqs. (<ref>) and (<ref>) are invalid at x/L≪1 or (L-x)/L≪1. Far away from the boundaries, these equations are nevertheless valid. Since we will be mainly interested in the dispersion relation of the TE_1 mode in the waveguide, we can safely disregard the mode of frequency 2ω in our analysis.
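As a numerical cross-check of the 𝒪(1/𝒯) solution above, the residual of the field equation evaluated on the truncated solution should shrink roughly like 𝒯^-2. The sketch below does this symbolically; the amplitude, field strengths and sample point are illustrative assumptions (𝒯 is written Tc).

import sympy as sp

x, t, Tc = sp.symbols('x t Tc', positive=True)     # Tc plays the role of script-T
A, k, E0, B0 = 0.1, sp.pi, 0.05, 0.3               # L = 1, illustrative values
w = sp.sqrt(k**2 - k**2 / Tc * (E0**2 + A**2 * k**2 / 2))
u = (A * sp.sin(k*x) * sp.cos(w*t)
     + A**3 * k**2 / (32*Tc) * (sp.sin(3*k*x)*sp.cos(w*t) + sp.sin(k*x)*sp.cos(3*w*t))
     - A**2 * k * E0 / (4*Tc) * sp.sin(2*w*t))
ux, ut = sp.diff(u, x), sp.diff(u, t)
res = (sp.diff(u, x, 2) - sp.diff(u, t, 2)
       + ((B0**2 - (E0 - ut)**2) * sp.diff(u, x, 2)
          - (B0**2 + ux**2) * sp.diff(u, t, 2)
          - 2 * (E0 - ut) * ux * sp.diff(u, x, t)) / Tc)
pt = {x: sp.Rational(1, 3), t: sp.Rational(2, 7)}
for Tval in (1e2, 1e3, 1e4):                        # residual should drop ~ Tc**-2
    print(Tval, abs(float(res.subs({Tc: Tval, **pt}))))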
§.§ Propagating TE_1 mode

In the previous subsection, for a configuration with orthogonal electric and magnetic background fields, we explored the standing-wave solution of BLST electrodynamics. Since we are interested in TE waves polarized perpendicular to a constant transverse magnetic field in a parallel-plate waveguide, let us apply a Lorentz transformation along the z-axis to the standing-wave solution. The Lorentz boost along the z-axis can be realized by a coordinate transformation as follows: x→ x', y→ y', z→(z'-Vt')/√(1-V^2), t→(t'-Vz')/√(1-V^2). Here (t,x,y,z) and (t',x',y',z') can be regarded as coordinate systems attached to two reference frames S and S', respectively. The reference frame S is the one in which we were working in the previous subsection. The reference frame S' is moving with velocity V relative to S in the negative direction of the z-axis. By such a Lorentz boost, the electromagnetic field (<ref>) is transformed into A_y'=-(E_0-VB_0)/√(1-V^2) t'-(B_0-VE_0)/√(1-V^2) z'+u(x',(t'-Vz')/√(1-V^2)), Φ=A_x'=A_z'=0. Remember that we are interested in TE waves polarized perpendicular to a constant external magnetic field. In order to get rid of an external electric field in the new frame S', we preset E_0=VB_0 <cit.> and thus have 𝐄'=(0,-u_t',0), 𝐁'=(B'_0-u_z',0,u_x'), where the external magnetic field is B'_0=B_0√(1-V^2) in the x'-direction. In the reference frame S', Eqs. (<ref>) and (<ref>) take the form u(x',(t'-Vz')/√(1-V^2)) = Asin(k_x x')cos(ω't'-k_z z')-A^2k_z/4𝒯B'_0sin(2ω't'-2k_z z') +A^3k_x^2/32𝒯[sin(3k_x x')cos(ω't'-k_z z')+sin(k_x x')cos(3ω't'-3k_z z')]+𝒪(𝒯^-2), ω'^2-k_z^2 = k_x^2-1/𝒯(k_z^2B'^2_0+A^2k_x^4/2)+𝒪(𝒯^-2) with the wave four-vector (ω',k_x,0,k_z) dictated by ω'=ω/√(1-V^2), k_x=π/L, k_z=ω'V. Clearly, by the Lorentz boost, the solution (<ref>) is turned into propagating waves along the z-axis. In particular, the s_11 mode of the standing wave is turned into the TE_1 mode of the propagating wave, while the time-periodic uniform mode is transformed into a uniform plane wave of frequency 2ω' which can be identified as the TE_0 mode. By contrast, in Maxwell electrodynamics, the lowest-order mode of the transverse magnetic field is the TM_0 mode, whereas the lowest-order TE mode is TE_1 in a parallel-plate waveguide. Because we started with the s_11 mode as the seed solution in Sec. <ref>, here the obtained dominant mode is the TE_1 mode in Eq. (<ref>). As can be derived from the dispersion relation (<ref>), its phase velocity v_p=ω'/k_z and group velocity v_g=dω'/dk_z are v_p=1/V, v_g=(1-B'^2_0/𝒯)V+𝒪(𝒯^-2), where V=k_z/ω' is given by V^2=(1-B'^2_0/𝒯)^-1[1-k_x^2/ω'^2(1-A^2k_x^2/2𝒯)]+𝒪(𝒯^-2). This expression is consistent with Ref. <cit.>, Equation (54) in the limit L→∞, and with Ref. <cit.>, Equation (14) after turning off the external field 𝐁'_0. The above are our results for the TE_1 wave polarized perpendicular to a constant transverse magnetic field in a parallel-plate waveguide in BLST electrodynamics. Eq. (<ref>) shows that the cutoff frequency of the TE_1 mode ω'_1≃ k_x(1-A^2k_x^2/4𝒯) is independent of the external field 𝐁'_0 but decreases with the amplitude A of the TE_1 wave. The amplitude of the TE_0 mode is 𝒯^-1Ak_zB'_0/4 times the amplitude of the TE_1 mode. As implied by Eqs. (<ref>) and (<ref>), the phase velocity decreases with the strength of the external magnetic field and the amplitude of the TE_1 wave, while the group velocity decreases with the strength of the external magnetic field but increases with the amplitude of the TE_1 wave. When deriving Eq. (<ref>), we have implicitly taken 𝒯 to be constant. This is true at least if min(F_μνF^μν)≥0, or B_0^2-(E_0+Aω)^2≥0 for the s_11 mode. In the reference frame S', it amounts to B'_0/A≥ω'+√(ω'^2-k_x^2) for the TE_1 mode, which can be satisfied by low-frequency waves. Once this condition is satisfied, we can set σ=1 and thus 𝒯=e^γT.
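The TE_1 kinematics above are straightforward to evaluate numerically; a small helper computing ω', v_p, v_g and the cutoff from k_x and k_z follows (parameter values are illustrative, and Tc stands for 𝒯=e^γT).

import numpy as np

def te1_mode(kx, kz, A, Bp, Tc):
    """TE_1 kinematics from the O(1/Tc) dispersion relation above."""
    w2 = kx**2 + kz**2 - (kz**2 * Bp**2 + A**2 * kx**4 / 2) / Tc
    w = np.sqrt(w2)
    v_p = w / kz                          # phase velocity (= 1/V)
    v_g = (1 - Bp**2 / Tc) * kz / w       # group velocity, to O(1/Tc)
    cutoff = kx * (1 - A**2 * kx**2 / (4 * Tc))
    return w, v_p, v_g, cutoff

L = 1.0
print(te1_mode(kx=np.pi / L, kz=2.0, A=0.05, Bp=0.3, Tc=1e3))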
§.§ Consistency check

One may wonder whether the mode sin(2ω't'-2k_z z') is physical. To confirm it, we directly solve the field equations in the frame S'. We take the ansatz A_y'(x',y',z',t')=-B'_0z'+u(x',z',t'), Φ=A_x'=A_z'=0 with u to be determined by the field equations and the boundary conditions u(0,z',t')=u(L,z',t')=0. We then have 𝐄'=-∂_t'𝐀'-∇Φ=(0,-u_t',0), 𝐁'=∇×𝐀'=(B'_0-u_z',0,u_x'), and in our sign conventions, 1/4F_μνF^μν=1/2(|𝐁'|^2-|𝐄'|^2)=1/2[(B'_0-u_z')^2+u_x'^2-u_t'^2], 1/4F_μνF̃^μν=-𝐄'·𝐁'=0. Substitution into Eq. (<ref>) leads to the Lagrangian density ℒ=e^-σγ{𝒯-√(𝒯^2+𝒯[(B'_0-u_z')^2+u_x'^2-u_t'^2])}. The corresponding Euler-Lagrange equation is u_x'x'+u_z'z'-u_t't'+1/𝒯{[(B'_0-u_z')^2-u_t'^2]u_x'x'+(u_x'^2-u_t'^2)u_z'z'-[(B'_0-u_z')^2+u_x'^2]u_t't'+2(B'_0-u_z')u_x'u_x'z'+2u_t' u_x'u_t'x'-2(B'_0-u_z')u_t'u_t'z'} = 0. It is straightforward to confirm that Eqs. (<ref>) and (<ref>) satisfy this equation.
(<ref>) can be arranged in the formU√(U)/T^3(-∂/∂ x∂/∂ u_x-∂/∂ t∂/∂ u_t)=C_1u_xx-C_2u_tt+2C_3u_xu_tu_xt=0with C_1 = (coshγ-sinhγcosθ-2u_x^2sinhγsin^2θ/√(Ξ))(1-B_0^2u_t^2/T^2)+(B_0^2-E_0^2-u_t^2)(coshγ-sinhγcosθ)^2/T +2B_0u_tsinhγsinθ(coshγ-sinhγcosθ)/T-2u_x^2sinhγsin^2θ(coshγcosθ-sinhγ)/T= 1-γcosθ-2γ u_x^2sin^2θ/√(Ξ)+B_0^2-E_0^2-u_t^2/T+𝒪(γ^2,T^-2,γ T^-1), C_2 = (coshγ-sinhγcosθ)[1+B_0^2(B_0^2-E_0^2+u_x^2)/T^2]+(2B_0^2-E_0^2+u_x^2)(coshγ-sinhγcosθ)^2/T +4B_0^2sinhγcosθ(coshγ-sinhγcosθ)/T+2(u_x^2-E_0^2)sinhγsin^2θ(coshγcosθ-sinhγ)/T -2B_0^2sinh^2γsin^2θ/T+2sinhγ[B_0^2+(u_x^2-E_0^2)sin^2θ]/√(Ξ)(1-B_0^2u_t^2/T^2)= 1-γcosθ+2B_0^2-E_0^2+u_x^2/T+2γ[B_0^2+(u_x^2-E_0^2)sin^2θ]/√(Ξ)+𝒪(γ^2,T^-2,γ T^-1),C_3 = B_0^2(coshγ-sinhγcosθ)/T^2+cosh^2γ-2sinhγcoshγcos^3θ+sinh^2γ(3cos^2θ-2)/T +2B_0^2sinhγ[coshγ(2cos^2θ+1)-3sinhγcosθ]/T√(Ξ)+2sinhγ(2B_0^2cosθ/Ξ+sin^2θ/√(Ξ))(1-B_0^2u_t^2/T^2)= 1/T+2γ(2B_0^2cosθ/Ξ+sin^2θ/√(Ξ))+𝒪(γ^2,T^-2,γ T^-1)andcosθ = B_0^2-E_0^2+u_x^2-u_t^2/√(Ξ), sinθ=-2B_0u_t/√(Ξ), Ξ = (B_0^2-E_0^2+u_x^2-u_t^2)^2+4B_0^2u_t^2.C_1= + (coshγ-sinhγcosθ ) (-Ξsin ^2θ +4 T^2+4 √(Ξ) T (coshγcosθ - sinhγ)) /4 T^2 -u_x^2 ( 2 sinhγsin ^2θ(1-Ξsin ^2θ/4 T^2)/√(Ξ) +(coshγ-sinhγcosθ )^2/T +2 sinhγsin ^2θ(coshγcosθ -sinhγ)/T) = +coshγ-sinhγcosθ -2 /√(Ξ)sinhγsin ^2θ u_x^2 +1/2T[ 2√(Ξ) (coshγ - sinhγcosθ) (coshγcosθ-sinhγ) +( (1-3 cos 2θ )sinh ^2γ+4 sinhγcoshγcos ^3θ-2 cosh ^2γ) u_x^2 ] √(Ξ)sin ^2θ/4T^2[ - √(Ξ) (coshγ-sinhγcosθ) + 2sinhγsin ^2θ u_x^2 ] C_2= C_3= 4 B_0^2/Ξsinhγcosθ +2 sinhγsin ^2θ/√(Ξ) +cosh ^2γ -2 sinhγcoshγcos ^3θ +sinh ^2γ(cos ^2θ -2 sin ^2θ) /T +2 B_0^2 sinhγ(2 coshγcos ^2θ -3 sinhγcosθ +coshγ)/T√(Ξ) +B_0^2 (coshγ-sinhγ(sin ^2θ +1) cosθ)-1/2√(Ξ)sinhγsin ^4θ/T^2When u_t=0, we find √(Ξ)=B_0^2-E_0^2+u_x^2 andC_1 = (coshγ-sinhγ)+(B_0^2-E_0^2)(coshγ-sinhγ)^2/T,C_2 = (coshγ-sinhγ)(1+B_0^2√(Ξ)/T^2)+(B_0^2+√(Ξ))(coshγ-sinhγ)^2/T +4B_0^2sinhγ(coshγ-sinhγ)/T+2B_0^2sinhγ/√(Ξ), We assume that natural boundary conditions remain for we consider the boundary of waveguide is perfect conductor, Todo which means that the electromagnetic field must vanish as the boundary x=0 and x=L. We neglect γ and 1/T terms and wave equation left, then consider the boundary conditions additionally, select the seed solution We consider the boundary of the waveguide to be a perfect conductor, which implies natural boundary conditions. According to Ohm's law, any finite electromagnetic field would lead to infinite current density in a perfect conductor, which requires that the electromagnetic field must vanish in the perfect conductor, especially at the boundary x=0 and x=L. Neglecting γ and 1/T terms, we obtain the wave equation. Adding the boundary conditions, we select its solution as the seed solutionu^(0) = A sin kx cosω t = A/2 s_11(t, x),where we give the notations_nm = sin(nkx + mω t) + sin(nkx - mω t).Substitute the seed solution u = u^(0) into equation(<ref>), and express the result obtained by s_nmA/4 s_11[ 2 ω ^2 (B_0^2/T+1) (B_0^2 - E_0^2/T+1) +k^2 ( A^2 ω ^2 B_0^2/T^2 -2B_0^2-E_0^2/T + A^2 ω ^2/T-2 ) ]+A^3 k^2 ω ^2/8T(B_0^2/T+1) (s_31 - s_13) +s_11∫_-π/ω^π/ω∫_-π/k^π/k f(γ,...)sin(kx)cos(ω t)/π^2/kωx̣ṭ + others We are only interested in the first-order corrections of γ and 1/T to the seed solution. To obtain the first-order correction of the solution, we need to ignore the higher-order terms of γ and 1/T, and let the coefficient of s_11 in the above formula vanish to get the correction of the seed solution. 
The field equation (<ref>) is too complicated to solve, even approximately with the Poincaré-Lindstedt method <cit.>. However, it is not hard to see that the boundary conditions u(0,t)=u(L,t)=0 are compatible with Eq. (<ref>) by decomposing its standing-wave solution u(x,t) (0≤ x≤ L) into series of s_nm(x,t), c_nm(x,t) (n≥1) defined by Eqs. (<ref>), (<ref>). Formally the solution of Eq. (<ref>) can be arranged order by order asu = u^(0,0)+u^(1,0)+u^(0,1)+u^(2,0)+u^(1,1)+u^(0,2)+⋯, ω^2 = ω^2_(0,0)+ω^2_(1,0)+ω^2_(0,1)+ω^2_(2,0)+ω^2_(1,1)+ω^2_(0,2)+⋯,where ω^2_(i,j) (i,j≥0) is of order γ^iT^-j. Under the boundary conditions u(0,t)=u(L,t)=0, each of u^(i,j)(x,t) (i,j≥0) is a linear combination of functions s_nm(x,t), c_nm(x,t) (n≥1) with coefficients of order γ^iT^-j, and it is reasonable to assume that the s_11(x,t) mode does not appear in u^(i,j)(x,t) (i+j≥1). We will not work out u^(i,j)(x,t) (i+j≥1), because they are more complicated but less interesting than the dispersion relation of the TE_1 mode. The 𝒪(γ,1/T) dispersion relation can be derived by inserting the seed solutionu^(0,0)(x,t)=Asin(kx)cos(ω t)=A/2s_11(x,t), ω^2_(0,0)=k^2into Eq. (<ref>) multiplied by Ξ√(Ξ) and then extracting the coefficient of s_11,ω^2-k^2-ω^2_(1,0)+k^2/T(B_0^2+A^2k^2/2)+𝒪(γ^2,T^-2,γ T^-1),in which ω^2_(1,0) = -γ/L^2∫_-L^L∫_-L^L2B_0^2k^2sin^2(kx)cos^2(kt)/{[A^2k^2cos^2(kt)-A^2k^2sin^2(kx)+B_0^2-E_0^2]^2+4A^2k^2B_0^2sin^2(kx)sin^2(kt)}^3/2 ×{A^2k^2[1-sin^2(kx)-cos^2(kt)][2B_0^2-2E_0^2-A^2k^2(1-3sin^2(kx)-3cos^2(kt)+4sin^2(kx)cos^2(kt))] .+(A^2k^2+B_0^2-E_0^2)^2}dxdt. ω^2_(1,0) = -γ/L^2∫_-L^L∫_-L^L2B_0^2k^2sin^2(kx)cos^2(kt)/Ξ_(0,0)^(0,0)3/2×{A^2k^2[1-sin^2(kx)-cos^2(kt)][2B_0^2-2E_0^2 .-A^2k^2(1-3sin^2(kx)-3cos^2(kt)+4sin^2(kx)cos^2(kt))]+(A^2k^2+B_0^2-E_0^2)^2}dxdt, Ξ_(0,0)^(0,0) = [A^2k^2cos^2(kt)-A^2k^2sin^2(kx)+B_0^2-E_0^2]^2+4A^2k^2B_0^2sin^2(kx)sin^2(kt).ω^2_(1,0) = -γ L^2B_0^2k^2[3A^4k^4+8A^2k^2(B_0^2-E_0^2)+8(B_0^2-E_0^2)^2]/4∫_-L^L∫_-L^Lsin^2(kx)cos^2(kt)Ξ_(0,0)^(0,0)3/2dxdt, Ξ_(0,0)^(0,0) = [A^2k^2cos^2(kt)-A^2k^2sin^2(kx)+B_0^2-E_0^2]^2+4A^2k^2B_0^2sin^2(kx)sin^2(kt). Strictly speaking, we ought to put u=u^(0,0)+u^(1,0)+u^(0,1), ω^2=ω^2_(0,0)+ω^2_(1,0)+ω^2_(0,1) into Eq. (<ref>) to extract the coefficient of s_11. But u^(1,0) and u^(0,1) do not contain s_11 by assumption, so they have no influence on 𝒪(γ,1/T) terms in Eq. (<ref>). Particularly, we haveΞ√(Ξ)(u_xx-u_tt)=Ξ^(0,0)3/2_(0,0)(ω^2-k^2)u^(0,0)+𝒪(γ^2,T^-2,γ T^-1) as long as ω^2_(0,0)=k^2. To satisfy Eq. (<ref>), the coefficient of s_11 should be equal to zero. As a result, the dispersion relation of the s_11 mode of standing wave isω^2=k^2+ω^2_(1,0)-k^2/T(B_0^2+A^2k^2/2)+𝒪(γ^2,T^-2,γ T^-1).§.§ Propagating TE_1 modeParallel to Sec. <ref>, we can do a Lorentz boost through the coordinate transformation (<ref>) to turn the standing-wave solution (<ref>) into propagating waves. Then the phase of the temporal factor changes to ω't'-k_z' but the phase of the transversal sector remains invariant. The frequency and the wave vector are given by Eq. (<ref>).Under the Lorentz boost (<ref>), the electromagnetic field (<ref>) is transformed intoA_x'=-E_0+VB_0/√(1-V^2)t'+B_0+VE_0/√(1-V^2)z', A_y'=u(x',t'-Vz'/√(1-V^2)), Φ=A_z'=0.In this section, we are interested in TE waves polarized parallel to a constant external magnetic field. For this reason, we set E_0=-VB_0 <cit.> to get rid of an external electric field in the new reference frame. 
Then we have 𝐄'=(0,-u_t',0), 𝐁'=(-u_z',B'_0,u_x'), where the external magnetic field is B'_0=B_0√(1-V^2) in the y'-direction as expected. Again the s_11 mode of the standing wave is turned into the TE_1 mode of the propagating wave, whose dispersion relation can be obtained by reforming Eq. (<ref>) as ω'^2-k_z^2=k_x^2+ω^2_(1,0)-1/T(ω'^2B'^2_0+A^2k_x^4/2)+𝒪(γ^2,T^-2,γ T^-1), with the wave four-vector defined in Eq. (<ref>). From this dispersion relation, one can get the phase velocity and group velocity of the TE_1 mode, v_p=1/V, v_g=(1-dω^2_(1,0)/d(ω'^2)+B'^2_0/T)^-1V+𝒪(γ^2,T^-2,γ T^-1), respectively, where V^2=1-k_x^2/ω'^2-ω^2_(1,0)/ω'^2+1/T(B'^2_0+A^2k_x^4/2ω'^2)+𝒪(γ^2,T^-2,γ T^-1). In the above, ω^2_(1,0) should be understood as a function of B'_0, ω' and k_x. It can be derived from Eq. (<ref>) as ω^2_(1,0) = -γ/π^2∫_-π^π∫_-π^π2ω'^2b^2sin^2ξcos^2τ/[(cos^2τ-sin^2ξ+b^2)^2+4ω'^2k_x^-2b^2sin^2ξsin^2τ]^3/2 ×{(1-sin^2ξ-cos^2τ)[2b^2-(1-3sin^2ξ-3cos^2τ+4sin^2ξcos^2τ)]+(1+b^2)^2}dξ dτ= γ/π^2∫_0^π∫_0^π{(1-sin^2ξ-cos^2τ)[2b^2-(1-3sin^2ξ-3cos^2τ+4sin^2ξcos^2τ)]+(1+b^2)^2} ×2ω'^2b^2sinξcosτ/cosξsinτ[(cos^2τ-sin^2ξ+b^2)^2+4ω'^2k_x^-2b^2sin^2ξsin^2τ]^3/2dsin^2ξ dcos^2τ= -γ/π^2∫_0^1∫_0^18ω'^2b^2(αβ)^1/2{(1-α-β)[2b^2-(1-3α-3β+4αβ)]+(1+b^2)^2}/[(1-α)(1-β)]^1/2[(β-α+b^2)^2+4ω'^2k_x^-2b^2α(1-β)]^3/2dα dβ, where α=sin^2ξ, β=cos^2τ, and with the notation b=B'_0/(Ak_x). Setting γ=0, Eq. (<ref>) is consistent with Ref. <cit.>, Equation (54) in the limit L→∞, and with Ref. <cit.>, Equation (14) after turning off the external field 𝐁'_0. The above are our results for the TE_1 wave polarized parallel to a constant transverse magnetic field in a parallel-plate waveguide in BLST electrodynamics. Eq. (<ref>) means that the cutoff frequency of the TE_1 mode is ω'_1≃ k_x(1+ω^2_(1,0)/2ω'^2|_ω'=k_x-(2B'^2_0+A^2k_x^2)/4T). Influenced by the 𝒪(T^-1) correction, the cutoff frequency decreases with the strength of the external magnetic field B'_0 and the amplitude A of the TE_1 wave. Fixing ω'=k_x, we plot -ω^2_(1,0)/(γω'^2) in the region 0≤ b^2≤5 in Fig. <ref>, which shows that the cutoff frequency decreases with the ratio of external field strength to wave amplitude B'_0/A as a result of the 𝒪(γ) correction. In Fig. <ref>, we depict -ω^2_(1,0)/(γω'^2) in the contour 0<k_x^2ω'^-2≤1, 0≤ b^2≤5. The 𝒪(T^-1) correction in Eq. (<ref>) suggests that the phase velocity decreases with the strength of the external magnetic field and the amplitude of the TE_1 wave, while the 𝒪(γ) correction indicates that the phase velocity decreases with the ratio B'_0/A. Making use of Eqs. (<ref>), (<ref>), we can obtain v_g^2=1-k_x^2/ω'^2+2(1-k_x^2/ω'^2)dω^2_(1,0)/d(ω'^2)-ω^2_(1,0)/ω'^2+1/T[A^2k_x^4/2ω'^2-B'^2_0(1-2k_x^2/ω'^2)]+𝒪(γ^2,T^-2,γ T^-1). The 𝒪(T^-1) correction in this equation suggests that the group velocity increases with the amplitude of the TE_1 wave, but its dependence on the strength of the external magnetic field is sensitive to the value of 1-2k_x^2/ω'^2. In Fig. <ref>, we illustrate -γ^-1dω^2_(1,0)/d(ω'^2) in the contour 0<k_x^2ω'^-2≤1, 0≤ b^2≤5. Similar plots are displayed in Fig. <ref> for the 𝒪(γ) correction in v_g^2, v^2_g(1,0)=2(1-k_x^2/ω'^2)dω^2_(1,0)/d(ω'^2)-ω^2_(1,0)/ω'^2. This correction does not have a monotonic dependence on the ratio B'_0/A. Near the cutoff frequency, the 𝒪(γ) correction is approximately given by the last term in Eq. (<ref>).
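The double integral defining ω^2_(1,0) above is well suited to numerical quadrature; the sketch below evaluates -ω^2_(1,0)/(γω'^2) on a simple midpoint grid, and for small b it should probe the limiting behaviour discussed next (grid resolution and parameter values are illustrative, and the accuracy near the singular lines |cos^2τ-sin^2ξ|→0 degrades as b→0).

import numpy as np

def ratio(b, kx_over_w, n=1200):
    """-omega^2_(1,0) / (gamma * omega'^2) by midpoint quadrature on [-pi, pi]^2."""
    xi = (np.arange(n) + 0.5) / n * 2 * np.pi - np.pi
    XI, TAU = np.meshgrid(xi, xi, indexing="ij")
    s2, c2 = np.sin(XI) ** 2, np.cos(TAU) ** 2
    Xi = (c2 - s2 + b**2) ** 2 + 4 * b**2 * s2 * np.sin(TAU) ** 2 / kx_over_w**2
    brace = ((1 - s2 - c2) * (2 * b**2 - (1 - 3 * s2 - 3 * c2 + 4 * s2 * c2))
             + (1 + b**2) ** 2)
    f = 2 * b**2 * s2 * c2 * brace / Xi**1.5
    return f.mean() * (2 * np.pi) ** 2 / np.pi**2

# Trend as b decreases, at the cutoff where kx = omega':
for b in (0.5, 0.3, 0.1):
    print(b, ratio(b, kx_over_w=1.0))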
In the limit b→0, Eq. (<ref>) tends to -ω^2_(1,0)/γω'^2 ≃ 1/π^2∫_-π^π∫_-π^π4b^2sin^2ξcos^2τ[1-(1-sin^2ξ-cos^2τ)(1-3sin^2ξ-3cos^2τ+4sin^2ξcos^2τ)]/{|cos^2τ-sin^2ξ|^3+3b^2|cos^2τ-sin^2ξ|(cos^2τ-sin^2ξ+2ω'^2k_x^-2sin^2ξsin^2τ)}dξ dτ≃ 2k_x^2/3π^2ω'^2∫_-π^π∫_-π^π[1-(1-sin^2ξ-cos^2τ)(1-3sin^2ξ-3cos^2τ+4sin^2ξcos^2τ)]/|cos^2τ-sin^2ξ|dξ dτ, which evaluates numerically to -ω^2_(1,0)/(γω'^2)≃37.6k_x^2ω'^-2.

§ CONCLUSION

BLST electrodynamics is a unified framework to study Maxwell, Born-Infeld, Bialynicki-Birula and ModMax theories not only theoretically but also experimentally. In this framework, we have investigated the TE_1 waves in a parallel-plate waveguide, including their phase velocity v_p, group velocity v_g and cutoff frequency ω'_1. All of them receive corrections from terms characterized by the Born-Infeld constant T and the ModMax constant γ. Hunting for such corrections in experiments would help us to test BLST electrodynamics. In our investigation, we have imposed a constant external magnetic field normal to the propagation direction of the TE waves. In Sec. <ref>, the external magnetic field is perpendicular to the waveguide plates. In practice, this may be built by attaching opposite magnetic poles to the parallel plates, which are made of a paramagnetic or ferromagnetic good conductor of electricity. Under the condition B'_0/A≥ω'+√(ω'^2-k_x^2), we found that the BLST corrections to v_p, v_g and ω'_1 of TE_1 waves have the same forms as the corrections in Born-Infeld electrodynamics, except that T is replaced by 𝒯=e^γT. Consequently, in this case, the experiment cannot distinguish between BLST theory and Born-Infeld theory, but it can put a lower or upper limit on e^γT, where the constraints on γ and T are correlated. In Sec. <ref>, the external field is parallel to the plates. This configuration is more accessible and has richer phenomena. We found that the Born-Infeld corrections to v_p, v_g and ω'_1 of TE_1 waves are different from the ModMax corrections. The ModMax corrections depend simply on the ratio of the external magnetic field strength B'_0 to the transverse electric wave amplitude A, while the Born-Infeld corrections are linear combinations of B'^2_0 and A^2. Specifically, if the external magnetic field is turned off, then the 𝒪(T^-1) corrections will vanish but the 𝒪(γ) corrections will survive. Therefore, in this configuration, the experiment can disentangle the constraints on γ and T, making it well suited to test BLST electrodynamics.

§ ACKNOWLEDGMENTS

This work is supported by the National Natural Science Foundation of China (Grant Nos. 11834003 and 91836103).

Jackson:1999ny J. D. Jackson, “Classical Electrodynamics,” Wiley, New York, 1999. Sorokin:2021tge D. P. Sorokin, Fortsch. Phys. 70, no.7-8, 2200092 (2022) [arXiv:2112.12118 [hep-th]]. BialynickiBirula:1984tx I.
Bialynicki-Birula, “Nonlinear Electrodynamics: Variations on a theme by Born and Infeld”, in Quantum Theory Of Particles and Fields: birthday volume dedicated toJan Łopuszański, eds B. Jancewicz and J.Lukierski, pp 31-48, (World Scientific, 1983).Born:1934gh M. Born and L. Infeld,Proc. Roy. Soc. Lond. A 144, no.852, 425-451 (1934) Bialynicki-Birula:1992rcm I. Bialynicki-Birula,Acta Phys. Polon. B 23, 553-559 (1992)Bandos:2020jsw I. Bandos, K. Lechner, D. Sorokin and P. K. Townsend,Phys. Rev. D 102, 121703 (2020)[arXiv:2007.09092 [hep-th]].Russo:2022qvz J. G. Russo and P. K. Townsend,JHEP 01, 039 (2023)[arXiv:2211.10689 [hep-th]].Ferraro:2007ec R. Ferraro,Phys. Rev. Lett. 99, 230401 (2007)[arXiv:0710.3552 [hep-th]].Ferraro:2003ia R. Ferraro,Phys. Lett. A 325, 134-138 (2004)[arXiv:hep-th/0309185 [hep-th]].Manojlovic:2020ndn N. Manojlovic, V. Perlick and R. Potting,Annals Phys. 422, 168303 (2020)[arXiv:2006.09053 [math-ph]]. | http://arxiv.org/abs/2312.16031v1 | {
"authors": [
"Yang Shi",
"Qinyan Tan",
"Towe Wang"
],
"categories": [
"physics.class-ph"
],
"primary_category": "physics.class-ph",
"published": "20231226124329",
"title": "Transverse electric waves in Bandos-Lechner-Sorokin-Townsend nonlinear electrodynamics"
} |
Near-Optimal Fault Tolerance for Efficient Batch Matrix Multiplication via an Additive Combinatorics Lens
Keren Censor-Hillel, Technion, [email protected]    Yuka, [email protected]    Pedro Soto, Oxford University, [email protected]
=====================================================================================================================================================================================

Fault tolerance is a major concern in distributed computational settings. In the classic master-worker setting, a server (the master) needs to perform some heavy computation which it may distribute to m other machines (workers) in order to speed up the time complexity. In this setting, it is crucial that the computation is made robust to failed workers, in order for the master to be able to retrieve the result of the joint computation despite failures. A prime complexity measure is thus the recovery threshold, which is the number of workers that the master needs to wait for in order to derive the output. This is the counterpart to the number of failed workers that it can tolerate. In this paper, we address the fundamental and well-studied task of matrix multiplication. Specifically, our focus is on when the master needs to multiply a batch of n pairs of matrices. Several coding techniques have been proven successful in reducing the recovery threshold for this task, and one approach that is also very efficient in terms of computation time is called Rook Codes. The previously best known recovery threshold for batch matrix multiplication using Rook Codes is O(n^log_23)=O(n^1.585). Our main contribution is a lower bound proof that says that any Rook Code for batch matrix multiplication must have a recovery threshold that is at least ω(n). Notably, we employ techniques from Additive Combinatorics in order to prove this, which may be of further interest. Moreover, we show a Rook Code that achieves a recovery threshold of n^1+o(1), establishing a near-optimal answer to the fault tolerance of this coding scheme.

§ INTRODUCTION

The master-worker computing paradigm has been extensively studied due to its ability to process large data-sets whose processing time is too expensive for a single machine. To address this, the main machine (master) may split the computation among m workers. Various tasks have been studied, such as matrix multiplication, gradient descent, tensors/multilinear computation, and more <cit.>. A common issue in such settings is the need to cope with failures, so that faulty workers do not prohibit the completion of the computational task at hand. Indeed, faults are a major concern in many distributed and parallel settings, and coping with them has been studied for several decades <cit.>. In the master-worker setting, the main complexity measure that addresses the robustness of an algorithm is its recovery threshold, defined as follows.

Definition of Recovery Threshold <cit.>. The recovery threshold of an algorithm in the master-worker setting is the number of workers the master must wait for in order to be able to compute its output.

A low recovery threshold means that the algorithm can tolerate a higher number of faults. For example, if the master splits the computation into m disjoint pieces and sends one to each worker, then it cannot tolerate even a single failure and needs to wait for all m workers. If it splits the computation into m/2 pieces and sends each piece to 2 workers, then it only needs to wait for m-1 machines, and thus can tolerate one failure.
Notice that with 2 failures this approach does not work, as the 2 failed workers could be responsible for the same piece of computation. Indeed, such simple replication approaches are inefficient in terms of their robustness. An efficient way for handling such faults is to use coding techniques. Coding has been an extremely successful technique in various distributed computations (see, e.g., <cit.>, which mostly address communication noise). In particular, coding schemes have been successfully proposed for the task of matrix multiplication in the master-worker setting, which is the focus of our work. Multiplying matrices is a vital computational task, which has been widely studied in distributed settings <cit.> and in parallel settings (see, e.g., <cit.> and many references therein). Specifically, we address batch matrix multiplication in the master-worker setting, in which the master has n pairs of matrices, {(A_i,B_i)}_0≤ i≤ n-1, whose n products {A_i· B_i}_0≤ i≤ n-1 it needs to compute. All matrices A_i have the same dimensions, as well as all matrices B_i. To see why coding is useful, consider the above naïve approach for coping with faults by replication as applied for batch matrix multiplication. For simplicity, suppose that m is a multiple of n, i.e., m = λ n for some integer λ. In the replication approach, the master sends each sub-task A_i · B_i to λ workers. For any given family ℱ of algorithms for computing the n products, denote by RT_ℱ(n) the best recovery threshold of an algorithm in ℱ. It is straightforward to see that the recovery threshold here is RT_replication(n) ≥ (n-1)λ +1 = m-λ+1, as λ failures could be of workers that are responsible for the same product. In particular, this means that with simple replication, the recovery threshold is linear in the number m of machines. However, it is well-known that with coding it is possible to do better. Consider the case n=2. With replication and m = 2λ workers, we have that RT_replication(2) ≥ λ +1. In contrast, consider the fault tolerance provided by the following simple coding scheme, in which we define Ã(x) := A_0 + A_1 x and B̃(x) := B_0 + B_1 x. The master sends to each worker w the distinct values Ã(x_w), B̃(x_w), and each worker w then returns the product Ã(x_w)·B̃(x_w) to the master. For any 3 workers u,v,w, it holds that [ x_u^0 x_u^1 x_u^2; x_v^0 x_v^1 x_v^2; x_w^0 x_w^1 x_w^2 ][ A_0 B_0; A_0 B_1 + A_1 B_0; A_1 B_1 ] = [ Ã(x_u)·B̃(x_u); Ã(x_v)·B̃(x_v); Ã(x_w)·B̃(x_w) ], and since x_u, x_v, x_w are pairwise distinct, the first matrix is invertible. Thus, the master can retrieve A_0· B_0 and A_1· B_1 using the results of any 3 workers, giving that RT_coded(2)≤ 3. In particular, the coding scheme achieves a fault tolerance of (m-3)/m = 1 - 3/m, which tends to 100% as m grows. However, the replication scheme can only achieve a fault tolerance of (2λ - (λ + 1))/2λ = 1/2 - 1/m, which is bounded by 50% even as m grows. Rook Codes for batch matrix multiplication. We consider the Rook Codes given by the following form, proposed in <cit.>. Denote by Ã(x) = ∑_i ∈ [n] A_i x^p_i, B̃(x) = ∑_j ∈ [n] B_j x^q_j two matrix polynomials that are generated by the master, for some sequences of non-negative integers P=(p_0,…,p_n-1) and Q=(q_0,…,q_n-1). We denote: L_P,Q := |P+Q|. For 0≤ w≤ m-1, the master sends to worker w the coded matrices Ã(x_w) and B̃(x_w), where x_w≠ x_w' for every pair of different workers w≠ w'. Worker w then multiplies the two coded matrices and sends their product back to the master.
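Before analyzing the general form, here is a quick numerical check of the simple n=2 coding scheme described earlier (our own Python illustration, not part of the paper; all sizes and evaluation points are arbitrary): it recovers both products from the results of any 3 workers by solving the 3×3 Vandermonde system above.

import numpy as np

rng = np.random.default_rng(0)
A = [rng.standard_normal((2, 3)) for _ in range(2)]   # A_0, A_1 (chi x zeta)
B = [rng.standard_normal((3, 2)) for _ in range(2)]   # B_0, B_1 (zeta x upsilon)

xs = np.array([1.0, 2.0, 3.0])        # evaluation points of any 3 surviving workers
prods = [(A[0] + A[1] * x) @ (B[0] + B[1] * x) for x in xs]   # worker outputs

V = np.vander(xs, 3, increasing=True)                 # rows [1, x_w, x_w^2]
coeffs = np.linalg.solve(V, np.stack([p.ravel() for p in prods]))
C0, C2 = coeffs[0].reshape(2, 2), coeffs[2].reshape(2, 2)
assert np.allclose(C0, A[0] @ B[0]) and np.allclose(C2, A[1] @ B[1])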
Note that Ã(x)·B̃(x) = (∑_i∈[n] A_i x^p_i)(∑_j∈[n] B_j x^q_j) = ∑_i,j∈[n] A_i B_j x^p_i+q_j. Note that while the degree of the above polynomial may be max{P+Q}, the number of its non-zero coefficients is bounded by L_P,Q = |P+Q|, which may be significantly smaller. Thus, when the master receives back L_P,Q point evaluations of Ã(x)·B̃(x), it can interpolate the polynomial. To be able to extract the coefficients A_k· B_k for all k∈ [n] out of the polynomial, we need A_k· B_k to be the only coefficient of x^p_k+q_k in the polynomial. That is, the following property is necessary and sufficient. [The decodability property] For all i,j,k ∈ [n], p_k+q_k = p_i+q_j if and only if i=k, j=k. Throughout the paper, we refer to codes given by Equation <ref> that satisfy Property <ref> as Rook Codes for batch matrix multiplication in the master-worker setting. To summarize, given two sets of integers P,Q of size |P|=|Q|=n for which Property <ref> holds, we have that RT_Rook-Codes(n) = min_P,Q{L_P,Q} (the minimum is taken here over all P,Q that satisfy Property <ref>). In order to derive the recovery threshold of Rook Codes for batch matrix multiplication, our goal is then to bound L_P,Q for all such P,Q, or in other words, to bound |P+Q|. How robust can Rook Codes be? A simple Rook Code can be derived from the Polynomial Codes of <cit.>, which are designed for multiplying a single pair of matrices. This would give Ã(x) = ∑_i ∈ [n] A_i x^i and B̃(x) = ∑_j ∈ [n] B_j x^nj, i.e., we have p_i=i and q_j=nj for every i,j∈ [n]. This would immediately bound the recovery threshold of Rook Codes by RT_Rook-Codes(n) ≤ ((n-1)+n(n-1))+1 = n^2, which is already a significant improvement compared to the recovery threshold of replication, since it does not depend on m but rather only on the size of the input. In <cit.>, an intuition was given for why the task is non-trivial, in the form of a lower bound that says that with P = (0,1,2,...,n-1) (i.e., with Ã(x) = ∑_i ∈ [n] A_i x^i), any choice of Q has L_P,Q ≥ n^2/2, which implies that in order to obtain a recovery threshold that is sub-quadratic in n, one cannot use such simple values for P. But for other values of P,Q, Rook Codes can indeed do better, as shown by <cit.>, who provided P and Q with L_P,Q = O(n^log_23), implying that RT_Rook-Codes(n) = O(n^log_23) = O(n^1.585). The parameters chosen for this are P = Q = { v_0 + v_1·3 + v_2·3^2 + ... + v_ℓ-1·3^ℓ-1 | v_i∈{0,1}}, where ℓ=log_2 n, assuming that ℓ is an integer (an assumption that can easily be removed by standard padding). One can verify that this choice yields a Rook Code (i.e., that Property <ref> holds), and that the recovery threshold is RT_Rook-Codes(n) = O(3^ℓ) = O(3^log_2 n) = O(2^log_23 · log_2 n) = O(n^log_23). This significant progress still leaves the following question open: Question: Are there Rook Codes for batch matrix multiplication in the master-worker setting with a recovery threshold that is linear in n?
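To make the two constructions above concrete, the following Python sketch (ours; the helper names and the parameter ell are arbitrary illustration choices) checks Property <ref> directly and compares L_P,Q = |P+Q| for the Polynomial-Code-style choice p_i=i, q_j=nj against the base-3 construction.

from itertools import product

def sumset_size(P, Q):
    return len({p + q for p in P for q in Q})      # L_{P,Q} = |P+Q|

def decodable(P, Q):
    # Property 1: the diagonal sums p_k + q_k are pairwise distinct and are
    # never hit by an off-diagonal pair (i, j) with i != j.
    n = len(P)
    diag = [P[k] + Q[k] for k in range(n)]
    if len(set(diag)) != n:
        return False
    dset = set(diag)
    return all(P[i] + Q[j] not in dset
               for i in range(n) for j in range(n) if i != j)

ell = 4
n = 2 ** ell
P1, Q1 = list(range(n)), [n * j for j in range(n)]             # p_i = i, q_j = nj
P2 = [sum(v[i] * 3 ** i for i in range(ell)) for v in product((0, 1), repeat=ell)]
assert decodable(P1, Q1) and decodable(P2, P2)
print(sumset_size(P1, Q1), sumset_size(P2, P2))                # n**2 = 256 vs 3**ell = 81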
§.§ Our Contribution In this paper, we show that no Rook Code for batch matrix multiplication in the master-worker setting can get a strictly linear in n recovery threshold, but that we can get very close to such a value. Lower Bound. Our main contribution is showing a super-linear lower bound on the recovery threshold of Rook Codes for batch matrix multiplication in the master-worker setting. Theorem (Lower Bound). Every Rook Code for batch matrix multiplication in the master-worker setting has a super-linear in n recovery threshold. That is, RT_Rook-Codes(n) = ω(n). Our main tool for proving Theorem <ref> is the following theorem in additive combinatorics, whose proof is the key technical ingredient of our contribution (Section <ref>). Theorem. Let P={p_0,p_1,…, p_n-1} and Q={q_0,q_1,…, q_n-1} be finite subsets of the integers, such that 0∈ P and 0∈ Q. Suppose that for all 0≤ i,j,k ≤ n-1, p_k+q_k = p_i+q_j if and only if i=k, j=k. Then |P+Q|=ω(n). It is easy to see that Theorem <ref> is sufficient for proving Theorem <ref>, as follows. Let P={p_0,p_1,…, p_n-1} and Q={q_0,q_1,…, q_n-1} be sets of integers that define a Rook Code for batch matrix multiplication in the master-worker setting. In particular, Property <ref> holds, and hence by Theorem <ref>, we have that |P+Q|=ω(n). This implies that L_P,Q = |P+Q| = ω(n) and thus RT_Rook-Codes(n) = min_P,Q{L_P,Q} = ω(n), as claimed. Upper Bound. Our second contribution is that there exist Rook Codes for batch matrix multiplication in the master-worker setting that asymptotically get very close to the bound in <Ref>. We prove the following (Section <ref>). Theorem (Upper Bound). There exist Rook Codes for batch matrix multiplication in the master-worker setting with a recovery threshold of ne^O(√(log n)). That is, RT_Rook-Codes(n) = n^1+o(1). §.§ Related Work The use of coding for matrix-vector and vector-vector multiplication in the master-worker setting was first established concurrently in <cit.>. Shortly after, efficient constructions were given for general matrix-matrix multiplication <cit.>. In particular, <cit.> established the equivalence of the recovery threshold for general partitions to the tensor rank of matrix multiplication. Further, <cit.> considered coding for gradient descent in a master-worker setting for machine learning applications. Batch matrix multiplication was studied using Rook Codes <cit.>, Lagrange Coded Computation (LCC) <cit.>, and Cross Subspace Alignment (CSA) codes <cit.>. Follow-up works about LCC consider numerically stable extensions <cit.> and private polynomial computation <cit.>. There have also been follow-up works on CSA codes which include extending them to obtain security guarantees <cit.>. It is important to mention that LCC and CSA codes have recovery thresholds of 2n-1, and so, at a first glance, it may seem that LCC or CSA Codes already achieve a linear recovery threshold and thus outperform Rook Codes. However, an important aspect in which the Rook Codes that are considered in our work are powerful is their computational complexity. The work in <cit.> supplies experimental evidence that the encoding step for Rook Codes is computationally more efficient. In Section <ref>, we give detailed descriptions of LCC and CSA codes, and provide some mathematical intuition as to why this may be the case. In <cit.>, additive combinatorics is used to construct and analyze coded matrix multiplication algorithms. However, there are two key differences between these works and ours. The first is that they consider the special case of the outer product of two block matrix vectors, which corresponds to batch matrix multiplication but with dependencies between the pairs of matrices which can be exploited; in particular, their constraint is simpler than the decodability property we need (Property <ref>). The second difference is that they consider security of the computation. Hence, our results are incomparable. §.§ Roadmap The following section briefly contains the notation for the additive combinatorics setup.
Section <ref> contains the proof of <Ref>, which, as shown above, implies the lower bound of <Ref>. Section <ref> proves the upper bound of <Ref>. We conclude in Section <ref> with a discussion of the computational efficiency that motivates our study of the recovery threshold of Rook Codes. § NOTATION For two sets of integers A and B, the set A+B is defined as A+B := {a+b | a∈ A, b∈ B}, and the set A-B is defined as A-B := {a-b | a∈ A, b∈ B}. Note that when A=B we still have the same definitions, so for example A-A = {a-b | a∈ A, b∈ A}, which is the set of all integers that are a subtraction of one element of A from another. For a set of integers A and an integer k, the set kA is defined as kA := A + A + … + A = {a_0+a_1+…+a_k-1 | ∀ 0≤ i ≤ k-1, a_i ∈ A}. § AN ω(n) LOWER BOUND ON THE RECOVERY THRESHOLD In this section we prove that no Rook Code for batch matrix multiplication can achieve a strictly linear recovery threshold. As proven in the introduction, the following theorem (restated from our contribution) implies <Ref>, and the remainder of this section is dedicated to its proof. <Ref> is closely related to Freiman's Theorem in <cit.> (see also <cit.>), a deep result in Additive Combinatorics which essentially states that a set P has a small doubling constant (i.e., the size of |P+P| is only a constant multiple of |P|) if and only if P is similar to an arithmetic progression, for a well-defined notion of similarity. Our theorem is comparable to Freiman's Theorem in the following sense. The contrapositive version of <Ref> states that if |P+Q| is a constant multiple of |P|, then P+Q looks like an arithmetic progression (in a different way compared to Freiman's similarity notion), in the sense that there exist i,j,k with (i,j)≠ (k,k) such that p_k+q_k = p_i+q_j (note that when P=Q, this implies that P contains a 3-term arithmetic progression, or 3-AP for short). In order to prove Theorem <ref>, we will invoke the following theorems from Additive Combinatorics. [Ruzsa's Triangle Inequality] If A,B,C are finite subsets of integers, then |A||B-C| ≤ |A-B||A-C|. [Plünnecke-Ruzsa Inequality] Let A and B be finite subsets of integers and let K be a constant satisfying the following inequality: |A+B| ≤ K|A|. Then for all integers r,ℓ≥ 0, |rB-ℓB| ≤ K^r+ℓ|A|. We will also need the following lemma. Let P,Q⊆ℤ such that |P|=|Q|=n and |P+Q| ≤ K|P|=Kn for some constant K. Then |P+Q-P-Q| ≤ K^7 n. First, using Ruzsa's Triangle Inequality from <Ref> with A=P, B=P-P, and C=Q-Q, we get that |P+Q-P-Q| ≤ |P-P+P||P+Q-Q|/|P|. Next, using the Plünnecke-Ruzsa Inequality from <Ref>, where A=Q, B=P, r=2, ℓ=1 and the condition |P+Q| ≤ K|Q| (recall that |P|=|Q|), we get that |P-P+P|=|2P-P| ≤ K^2+1|Q| = K^3|P|. Using the symmetry of P and Q, we can similarly prove that |Q-Q+Q| ≤ K^3|Q|. Hence, combining Equation (<ref>) and Equation (<ref>) together, we get that |P+Q-P-Q| is bounded by |P+Q-P-Q| ≤ |P-P+P||P+Q-Q|/|P| ≤ K^3|P||P+Q-Q|/|P| = K^3|P+Q-Q|. Using Ruzsa's Triangle Inequality again, this time with A=Q, B=-P and C=Q-Q, and then using the fact that |Q+Q-Q| ≤ K^3|Q|, we get that: |P+Q-Q| = |-P+Q-Q| ≤ |P+Q||Q+Q-Q|/|Q| ≤ |P+Q|K^3 ≤ K^4 n. Plugging this into the bound on |P+Q-P-Q| gives that |P+Q-P-Q| ≤ K^3|P+Q-Q| ≤ K^7 n, as claimed. Now we are ready to prove Theorem <ref>, for which we will also use the following result: [Triangle Removal Lemma] Let G=G(V,E) be a graph which contains at most o(|V|^3) triangles.
Then it is possible to remove o(|V|^2) edges from G to obtain a graph which is triangle-free. Overview: First we will assume for contradiction that there exists some constant K such that for all n, there exist P,Q with |P|=|Q|=n such that |P+Q| ≤ Kn and P,Q satisfy <Ref>. From this assumption, we will construct a tripartite graph G=(V,E) with V=(X,Y,Z) such that every edge is in exactly one triangle in G. Each of the three vertex sets will have size N=|P+Q-P-Q| and the number of edges between each pair of sets is at least Nn/4. Because every edge is in exactly one triangle, we have that the number of triangles is exactly |E|/3, and thus it is o(|V|^3). By using the Triangle Removal Lemma from <Ref>, it follows that the number of edges that need to be removed to make the graph triangle-free is at most o(|V|^2)=o(N^2). Since this naturally also bounds the total number of edges |E|, which is at least 3Nn/4, we obtain that 3Nn/4 ≤ o(|V|^2)=o(N^2) and hence n=o(N). But from <Ref>, N is bounded by a constant multiple of n, hence we get that n=o(N)=o(n), which is a contradiction. The main technical challenge of this proof is in constructing a map between the set |P+Q| and vertices of the graph in such a way that the condition of Property <ref>, that p_k+q_k = p_i+q_j if and only if i=k, j=k, translates into the condition that every edge is in exactly one triangle. Therefore, the first part of this proof will be about defining a map ψ with the desired properties, as presented in the proof of <cit.>. Assume for contradiction that |P+Q| ≤ K|P|. Let us call N=|P+Q-P-Q|. First, we construct ϕ:ℤ→ [q] for a prime q > max(P+Q-P-Q), such that if x∈ P+Q-P-Q and ϕ(x)=0, then x=0 (we do not simply define ϕ(x)=x mod q, since we need ϕ to satisfy additional properties). We do this in the following way. For each λ∈[q], consider the map ϕ_λ: ℤ → ℤ/qℤ → ℤ/qℤ → [q], given by first reducing modulo q, then multiplying by λ, and finally taking the canonical representative in [q]. It is important to note that (mod q) and ×λ are group homomorphisms. Therefore, as the operation mod q restricted to [q] is the identity map, ϕ_λ has the following property <cit.>: If a_i,x_i∈ℤ and 0 ≤ ∑_i=1^n a_i ϕ_λ(x_i) < q, then ∑_i=1^n a_i ϕ_λ(x_i) = ϕ_λ(∑_i=1^n a_i x_i). For each x≠ 0, ϕ_λ(x) takes on the values of [q] with equal probability over all λ∈ [q]. Therefore, for each x∈ (P+Q-P-Q)∖{0}, the probability over all λ∈ [q] that N divides ϕ_λ(x) is at most 1/N. As there are N-1 elements in (P+Q-P-Q)∖{0}, by the union bound, we get: Pr_λ∈ [q](∀ x∈ (P+Q-P-Q)∖{0}, N does not divide ϕ_λ(x)) ≥ 1 - (∑_x∈ (P+Q-P-Q)∖{0} Pr_λ∈ [q](N divides ϕ_λ(x))) ≥ 1 - (N-1)/N = 1/N > 0. This implies that there exists λ∈ [q] such that for all x∈ (P+Q-P-Q)∖{0} it holds that N does not divide ϕ_λ(x). We fix λ to be such a constant, and set ϕ=ϕ_λ. Now, we construct a set T'⊆ [n] so that we have the following property: For all i,j,k∈ T', we have: |ϕ(p_k)-ϕ(p_i)+ϕ(q_k)-ϕ(q_j)| ≤ q. In order to construct T', we first pick one of the two intervals [0,(q-1)/2] and [(q+1)/2,q-1] to be I so that T={ k∈{1,…,n}: ϕ(p_k)∈ I} and |T| ≥ n/2. We then pick an interval J to be one of [0,(q-1)/2] and [(q+1)/2,q-1] so that T'={k∈ T: ϕ(q_k)∈ J} and |T'| ≥ |T|/2 ≥ n/4. We now compose ϕ with (mod N) to obtain a mapping ψ, as follows: ψ: ℤ → {0,1,…,q-1} → ℤ/Nℤ, where the first arrow is ϕ and the second is reduction mod N. Now, we construct a tripartite graph G whose vertex set is V=X∪̇ Y∪̇ Z, where X,Y,Z are all copies of ℤ/Nℤ. The edges of the graph are defined as follows.
* (x,y)∈ X× Y is in E(G) if and only if there exists i∈ T' such that y-x = ψ(p_i).* (y,z)∈ Y× Z is in E(G) if and only if there exists i∈ T' such that z-y = ψ(q_i).* (x,z)∈ X× Z is in E(G) if and only if there exists i∈ T' such that z-x = ψ(p_i)+ψ(q_i) mod N. Note that |V(G)|=|X|+|Y|+|Z|=3N, and |E(G)|=3N|T'| ≥ (3/4)Nn. If x,y,z form a triangle, then there exist i,j,k such that y-x=ψ(p_i), z-y=ψ(q_j), and z-x=ψ(p_k)+ψ(q_k). We claim that in this case it must hold that i=j=k, which we prove as follows. Since (z-y)+(y-x)=z-x, we have that ψ(p_k)+ψ(q_k)=ψ(p_i)+ψ(q_j), which means that ψ(p_k)+ψ(q_k)-ψ(p_i)-ψ(q_j)=0. Since ψ applies a (mod N) function to ϕ, this means that N divides ϕ(p_k)+ϕ(q_k)-ϕ(p_i)-ϕ(q_j). By <Ref> and our choice of T', it holds that |ϕ(p_k)+ϕ(q_k)-ϕ(p_i)-ϕ(q_j)| ≤ q, and hence by <Ref>, we have that ϕ(p_k)+ϕ(q_k)-ϕ(p_i)-ϕ(q_j) = ϕ(p_k+q_k-p_i-q_j). This implies that N divides ϕ(p_k+q_k-p_i-q_j), which only holds if p_k+q_k-p_i-q_j = 0 due to our choice of λ. Moreover, every edge lies in at least one triangle: for instance, an edge (x,y)∈ X× Y with y-x=ψ(p_i) closes into a triangle with the vertex z=y+ψ(q_i)∈ Z, and the analogous choice works for the other two types of edges. Since a triangle forces i=j=k, each edge is in exactly one triangle, and so at least Nn/4 edges need to be removed to make G triangle-free. The number of triangles is |E(G)|/3 = N|T'| = O(N^2) = o(N^3) and therefore, by the Triangle Removal Lemma from <Ref>, one can remove at most o(N^2) edges to make G triangle-free. This implies that Nn/4 = o(N^2), and hence n=o(N). However, we proved in <Ref> that N ≤ K^7 n, and hence n ≠ o(N), which is a contradiction. Therefore, our initial assumption does not hold, i.e., there is no constant K such that |P+Q| ≤ K|P| for all n=|P|. This implies that |P+Q|=ω(n), as claimed. § AN ne^O(√(log n)) UPPER BOUND FOR THE RECOVERY THRESHOLD In this section, we show how to obtain Rook Codes for batch matrix multiplication in the master-worker setting with a near-optimal recovery threshold. To prove <Ref>, we show that there exist sets P,Q with |P+Q|=ne^O(√(log n)) such that the decodability property (<Ref>) holds. In order to show this, we use the following known construction of a set which does not contain any 3-term arithmetic progression, defined as follows. A set A={a_1,…,a_ℓ} is an ℓ-term arithmetic progression (ℓ-AP) if there is a value d such that a_i = a_1 + (i-1)d, for all 2≤ i ≤ ℓ. Behrend <cit.> showed a construction of a set which does not contain any 3-AP, whose maximum element is bounded by a value which will be useful for us. For all n, there exists a set A of distinct positive integers of size |A|=n which does not contain any 3-term arithmetic progression (3-AP), and such that max_a∈ A{a} = n e^O(√(log n)). Note that the proof of <Ref> shows that there exists a 3-AP-free subset A of {1,2,…,N} of size at least N^1-(2√(2log 2)+ϵ)/√(log N) elements. By choosing n = N^1-(2√(2log 2)+ϵ)/√(log N), we have that log n = Θ(log N). Therefore, expressing N in terms of n we get that N = ne^O(√(log n)), hence for any n, we can construct a 3-AP-free set with n elements such that max_a∈ A{a} = n e^O(√(log n)), which is the form of the theorem presented here. With this construction, we can easily prove our upper bound on the recovery threshold of Rook Codes, as follows. Let c be a suitable constant such that for each n, there is a set A_n ⊆ [ne^c√(log n)] of size n which contains no 3-AP, as obtained by Behrend's construction in <Ref>. Denote A_n={a_1,…,a_n}. Let P=Q=A_n, where p_i=q_i=a_i. This choice of P and Q satisfies the decodability property (<Ref>), because if 2p_k = p_i+q_j then p_i,p_k,q_j is a 3-AP, hence p_i,p_k,q_j must all be equal. Furthermore, P+Q ⊆ {1,2,…,2ne^c√(log n)}, with a size of ne^O(√(log n)). Therefore, RT_Rook-Codes(n) = ne^O(√(log n)).
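The construction in the proof is easy to experiment with. The Python sketch below (our illustration; the digit parameters d and ell are small arbitrary choices) builds a Behrend-style set — digit vectors of a fixed squared norm, read in base 2d−1 so that sums incur no carries — and verifies that it contains no 3-term AP, so that P = Q = A satisfies the decodability property.

from itertools import product

def behrend_like(d, ell):
    # Group digit vectors a in {0,...,d-1}^ell by squared norm k = sum a_i^2.
    # Base-(2d-1) encoding adds digitwise (no carries), and a sphere contains
    # no three collinear points, so each group is a 3-AP-free set of integers.
    spheres = {}
    for a in product(range(d), repeat=ell):
        k = sum(x * x for x in a)
        val = sum(x * (2 * d - 1) ** i for i, x in enumerate(a))
        spheres.setdefault(k, []).append(val)
    return max(spheres.values(), key=len)   # the densest sphere

def has_3ap(S):
    s = set(S)
    return any(a != c and (a + c) % 2 == 0 and (a + c) // 2 in s
               for a in s for c in s)

A = behrend_like(d=3, ell=4)
assert not has_3ap(A)       # no 3-AP, so P = Q = A satisfies Property 1
print(len(A), max(A))       # set size versus largest exponent used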
§ THE COMPUTATIONAL EFFICIENCY OF ROOK CODES We provide here the mathematical background that could explain the experimental results of <cit.>, which suggest that Rook Codes may have faster computational times. Coding for batch matrix multiplication. In <cit.>, the method of Lagrange Coded Computation (LCC) has been suggested for batch matrix multiplication. In this approach, the master defines two polynomials Ã(x) and B̃(x), by invoking point evaluation over the input matrices. In more detail, Ã(x) is defined to be the unique polynomial of degree at most n-1 which is interpolated from n points z_0,…,z_n-1 whose values are A_0,…,A_n-1, respectively. That is, Ã(z_i)=A_i for all 0≤ i ≤ n-1. Similarly, B̃(x) is interpolated so that B̃(z_i)=B_i for all 0≤ i ≤ n-1. Then, for 0≤ w ≤ m-1, the master sends to worker w the coded matrices Ã(x_w) and B̃(x_w), for some value x_w. Each worker w multiplies its two matrices and sends their product back to the master. Note that (Ã·B̃)(x)=Ã(x)·B̃(x) is a polynomial of degree at most 2n-2, for which (Ã·B̃)(z_i)=A_i· B_i for all 0≤ i ≤ n-1. Thus, once the master receives back Ã(x_w)·B̃(x_w) from 2n-1 workers, it can extract A_i· B_i for all 0≤ i ≤ n-1 by interpolating the polynomial (Ã·B̃)(x) and evaluating it at z_0,…,z_n-1. This yields RT_Lagrange-Codes(n) ≤ 2n-1.
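For comparison with Rook Codes later on, here is a rough numerical sketch of the LCC scheme just described (ours, not from the paper; the nodes z_i, the worker points x_w, and the matrix sizes are arbitrary choices): the master interpolates the degree-(2n−2) product polynomial from any 2n−1 worker results and reads it off at the z_i.

import numpy as np

rng = np.random.default_rng(1)
n = 2
A = rng.standard_normal((n, 2, 3))
B = rng.standard_normal((n, 3, 2))
z = np.array([0.0, 1.0])              # nodes with A~(z_i) = A_i, B~(z_i) = B_i

def lagrange_eval(vals, z, x):
    # Evaluate at x the degree-(n-1) interpolant through the points (z_i, vals[i]).
    out = np.zeros_like(vals[0])
    for i in range(len(z)):
        li = np.prod([(x - z[j]) / (z[i] - z[j]) for j in range(len(z)) if j != i])
        out = out + li * vals[i]
    return out

xs = np.array([2.0, 3.0, 4.0])        # any 2n-1 = 3 surviving workers
prods = [lagrange_eval(A, z, x) @ lagrange_eval(B, z, x) for x in xs]

V = np.vander(xs, 2 * n - 1, increasing=True)
coef = np.linalg.solve(V, np.stack([p.ravel() for p in prods]))
for i, zi in enumerate(z):            # read (A~ B~)(z_i) = A_i B_i off the interpolant
    rec = sum(coef[k] * zi ** k for k in range(2 * n - 1)).reshape(2, 2)
    assert np.allclose(rec, A[i] @ B[i])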
In <cit.>, a method of Cross Subspace Alignment (CSA) Coding for batch matrix multiplication is suggested, in which rational functions are used. More precisely, for a vector z = (z_0,...,z_n-1) of n distinct values in a field, let f_z(x) = ∏_i ∈[n] (z_i - x) = (z_0 - x)· ... ·(z_n-1 - x), and define the rational encoding functions as Ã(x) = f_z(x)(∑_i ∈ [n] 1/(z_i - x) A_i), and B̃(x) = ∑_i ∈ [n] 1/(z_i - x) B_i. Then it holds that (Ã·B̃)(x) = Ã(x)B̃(x) = ∑_i ∈ [n] c_i(z⃗)/(z_i - x) A_i B_i + ∑_i ∈ [n-1] x^i N_i, where each c_i(z⃗) does not depend on x and each N_i denotes a noise (matrix) coefficient that can be disregarded. The master computes Ã(x_w) and B̃(x_w) for some distinct x_w that satisfy {z_i}_i ∈ [n] ∩ {x_w}_w∈ [m] = ∅ and sends them to worker w, for each 0≤ w≤ m-1. Each worker w then multiplies its two matrices and sends their product back to the master. Note that (Ã·B̃)(x)=Ã(x)B̃(x) is a rational function with at most 2n-1 zeros and n poles[For any p=2n-1 points {(x_i,y_i)}_i∈[p] and n numbers {z_i}_i∈[n] such that {z_i}_i ∈ [n] ∩ {x_k}_k∈ [p] = ∅, there exists a unique rational function h(x)= f(x)/g(x) such that h(x_i) = y_i, g(z_i) = 0, deg(f) = 2n-1, and deg(g) = n. Notice that this is a natural generalization of polynomial interpolation (see, e.g., <cit.>).], and so the master can perform rational interpolation from the values sent to it from any 2n-1 workers. Then the master retrieves A_i· B_i for all 0≤ i ≤ n-1, by taking the coefficient of c_i(z⃗)/(z_i-x). This yields RT_CSA-Codes(n) ≤ 2n-1. A comparison with Rook Codes. As mentioned, at a first glance, it may seem that LCC or CSA Codes already achieve the desired linear recovery threshold. However, Rook Codes are powerful in their computational complexity, which motivates our work on bounding their recovery threshold. To see this, let us be more specific about the parameters of the setting, as follows. For all 0≤ i≤ n-1, denote by χ×ζ the dimensions of the matrix A_i and by ζ×υ the dimensions of the matrix B_i. Encoding. For the encoding time, all three methods (Rook Codes, LCC, and CSA Codes) must use some fast multipoint evaluation algorithm. In particular, Rook Codes and LCC use multipoint evaluation of a polynomial, and CSA Codes use a multipoint evaluation of rational functions, i.e., Cauchy matrix multiplication <cit.>. All three methods can compute the m values {Ã(x_w), B̃(x_w)}_w ∈ [m] in time O((χζ + ζυ) max{m,d} log^c max{m,d}) = O((χ+υ)ζ m log^c m), where c is some constant and d is the degree of the polynomials Ã(x),B̃(x) in Rook Codes and LCC, or the total number of their roots and poles in CSA <cit.>. Decoding in Rook Codes. For the decoding time, in Rook Codes, the master can decode in time O(χυ R log^c(R)), for interpolating a polynomial over χ×υ matrices out of R = RT_Rook-Codes(n) points. This gives that the total computational time (for encoding and decoding) at the master is O((χ+υ)ζ m log^c(m) + χυ R log^c(R)). This decoding time is much smaller than the encoding time in some cases, for example when χ=υ and ζ = ω(χ R log^c R/(m log^c m)). Since in such cases the encoding is the more expensive procedure, this motivates delegating the encoding to the workers in scenarios in which the master should do as little computation as possible (for example, when it is responsible for more than a single task). This is done by sending all matrices A_i and B_i for 0≤ i ≤ n-1 to all workers, and having each worker w compute Ã(x_w), B̃(x_w) (the values in P,Q can be either also sent by the master or be agreed upon as part of the algorithm, depending on the setup). While this clearly has an overhead in terms of communication, in some cases this overhead can be very small compared to the original schemes in which encoding takes place at the master — as an example, if all workers are fully connected then the master can send A_i, B_i to a different worker w_i for each i, and each w_i can forward these two matrices to all other workers concurrently. The benefit of delegating the encoding to the workers is that it allows the master to avoid any computation for encoding, and each worker w only has to evaluate the polynomial at its own x_w value, which needs at most (χ+υ)ζ n + O(n log^1/2(n)) multiplications (see Section <ref>). The O(n log^1/2(n)) term does not depend on χ, υ, ζ and thus is negligible when the matrices have large dimensions. Decoding in LCC and CSA. The crucial difference between Rook Codes and LCC or CSA Codes is that in Rook Codes, worker w can locally encode Ã(x_w),B̃(x_w) using a division-free algorithm (i.e., an algorithm using only + or ·, but not /); this stands in contrast to LCC and CSA Codes, which must perform divisions (by definition). Furthermore, it is not known how to efficiently delegate the encoding procedure to the workers in LCC, since the encoding functions are Ã(x) = ∑_i ∈ [n] A_i ∏_j ∈ [n]∖{i} (x - z_j)/(z_i - z_j) and B̃(x) = ∑_i ∈ [n] B_i ∏_j ∈ [n]∖{i} (x - z_j)/(z_i - z_j) and, in particular, since there are O(n^2) many products of the form 1/(z_i - z_j), any naïve algorithm must take quadratic time, and converting it to coefficient form for using Horner's scheme (see Section <ref>) incurs an O(n log^c n) overhead. Thus, the encoding step for LCC has complexity O((χ+υ)ζ n log^c n), by <cit.>. Moreover, divisions are known to be more expensive (for both LCC and CSA) <cit.>, thus they increase the overhead at each worker. In particular, the current theoretical bound (using Newton iterations) on division in terms of multiplication for number fields is 3 <cit.>.
Notice that evaluating the encoding function B̃(x) = ∑_i∈[n] B_i/(z_i - x) at a point x_w is very unlikely to use fewer than n(χ + υ)ζ divisions, since even if we give it a common denominator, i.e., B̃(x) = (∏_j∈[n] 1/(z_j - x)) ∑_i∈[n] B_i ∏_j∈[n]∖{i} (z_j - x), we would now have a super-linear increase in multiplications in the numerator. Furthermore, the evaluation of B̃(x_w) is equivalent to computations involving Cauchy matrices, an old computational problem that has been studied extensively and conjectured to have a super-linear lower bound in the number of points plus poles (see <cit.>). Furthermore, division-free algorithms are often preferred because they tend to avoid issues of division-by-0 and false-equality-with-0 <cit.>. To summarize, there are settings in which Rook Codes are preferable over LCC and CSA codes, which motivates bounding their recovery threshold to obtain high robustness to faults. §.§ Bounding the Encoding Time of Our Construction of Rook Codes We provide here the promised analysis of the encoding time of our construction. Let p_0<p_1<...< p_n-1 and q_0<q_1<...< q_n-1 be increasing orderings of the elements in P and Q, respectively, and let δ(P,Q) equal the number of multiplications needed to compute the values {x^p_i - p_i-1}_i ∈ [n] ∪ {x^q_i - q_i-1}_i ∈ [n], where p_-1=q_-1=0. Then, given A_i,B_i for all i, each worker w can locally encode its values Ã(x_w) and B̃(x_w) using at most δ(P,Q) + (χ+υ)ζ n multiplications. Each worker w uses Horner's rule <cit.> to compute Ã(x_w), B̃(x_w). There are at most n non-zero coefficients in Ã(x), B̃(x), and thus Horner's rule takes the form Ã(x) = x^p_0(A_0 + x^p_1-p_0(A_1 + x^p_2-p_1(A_2+...))) for Ã(x) (and similarly for B̃(x)), where the values c_w,p,k = x_w^p_k - p_k-1 and c_w,q,k = x_w^q_k - q_k-1 can all be computed using successive squaring for at most a δ(P,Q) number of multiplications. More precisely, Ã(x_w) can be recursively computed as f_i+1(x_w) := c_w,p,n-2-i(A_n-2-i + f_i(x_w)) for 0≤ i ≤ n-2, where f_0 = x_w^p_n-1-p_n-2 A_n-1 and Ã(x_w) = f_n-1(x_w) (recall that p_-1=0, so the last step multiplies by c_w,p,0 = x_w^p_0). Therefore, computing Ã(x_w) can be done using at most δ(P,Q) + (χ+υ)ζ n multiplications and (χ+υ)ζ n additions. A similar argument applies for computing B̃(x_w).
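The gap-exponent Horner factorization in the proof can be checked numerically; the sketch below (ours) evaluates Ã(x) = x^p_0(A_0 + x^p_1−p_0(A_1 + ⋯)) and compares against direct evaluation. (The successive-squaring precomputation of the gap powers, which accounts for the δ(P,Q) term, is elided here and replaced by plain exponentiation.)

import numpy as np

def gap_horner(As, P, x):
    # Evaluate sum_i As[i] * x**P[i] using Horner's rule over the exponent gaps.
    gaps = [P[0]] + [P[i] - P[i - 1] for i in range(1, len(P))]
    acc = np.zeros_like(As[0])
    for k in range(len(P) - 1, -1, -1):       # innermost coefficient first
        acc = (x ** gaps[k]) * (As[k] + acc)
    return acc

rng = np.random.default_rng(2)
P = [1, 4, 9, 30]                             # increasing exponents
As = [rng.standard_normal((2, 2)) for _ in P]
x = 0.7
direct = sum(Ai * x ** pi for Ai, pi in zip(As, P))
assert np.allclose(gap_horner(As, P, x), direct)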
The following lemma bounds δ(P,Q), so that plugging this bound in Lemma <ref> gives our claimed time for encoding of our Rook Codes. δ(P,Q) = O(n log^1/2(n)). Consider our Rook codes of Theorem <ref> that use Behrend's construction <cit.>. For some constants C_0,…,C_4, define d, ℓ and k as C_0 log(n)/2ℓ - 1/2 < d ≤ C_1 log(n)/2ℓ + 1/2, C_2 log^1/2(n) ≤ ℓ = √(2 log(N)/log(2)) < C_3 log^1/2(n), and k ≤ ℓ(d-1)^2 < C_4 log^3/2(n). Then we have that P = Q = { a_0(2d-1)^0 + ... + a_ℓ-1(2d-1)^ℓ-1 | ∑_i ∈ [ℓ] a_i^2 = k, a_i ≤ d }. It takes at most O(log(d)) many steps to compute y_1 = x^2d-1 for a given value of x and, setting y_i := y_i-1^2d-1 = x^(2d-1)^i, at most O(ℓ log(d)) multiplications to compute y_1,…,y_ℓ-1. Therefore, by the definition of P, the number of multiplications can be counted in terms of multiplying powers of the y_i. In particular, if p_j - p_j-1 = ∑_i ∈ [ℓ] b_i^(j) (2d-1)^i, then we have that x^p_j - p_j-1 = x^b_0^(j) ∏_i ∈ [1,ℓ-1] (y_i)^b_i^(j). It takes at most O(ℓ d) multiplications to compute all of the possible values (y_i)^b, since the b^(j)_i values are bounded by O(d); therefore, each value in Equation (<ref>) takes at most ℓ many multiplications to compute (after these preprocessing steps). We conclude that the number of multiplications needed is bounded by O(n ℓ) = O(n log^1/2(n)), since Equation (<ref>) is computed at most n times. § DISCUSSION This paper analyzes batch matrix multiplication in the master-worker setting, showing nearly matching upper and lower bounds for the recovery threshold of Rook Codes, a coding scheme that is both efficient and robust. Our lower bound in Theorem <ref> is surprising since, in the classical case of polynomial codes <cit.> (or Reed-Solomon codes <cit.>), the communication rate is the same whether or not one defines the codewords as evaluations or as the coefficients of a polynomial. However, in the case of Rook codes and LCC we have that the recovery threshold (the measure of complexity that we care about here for batch matrix multiplication) is different between the coefficient codewords (Rook coding) and evaluation codewords (LCC). We state here some further research directions. Constructing 3-AP free sets. In the above setting, it is assumed that the number of pairs of matrices n is fixed. One may consider the non-uniform case, in which n is given as a parameter. In such a setting, the master would have to construct the appropriate 3-AP free sets according to n as part of its computation. For this, one would need a construction that is also computationally efficient. It may be an interesting direction for further research and, in particular, the construction of <cit.> could turn out useful. The latter construction is also better in the lower-order terms of its size. Possible improvements of the lower bound. In additive combinatorics, problems such as finding 3-AP free sets are known as tri-colored sum-free problems <cit.>. Although these are similar to 3-AP free sets, the bound achieved is stronger for 3-AP free sets than for tri-colored sum-free sets. If A⊂ [N] is a 3-AP free set, it is proven that |A| = O(max(A)exp(-Ω((log N)^β))) for some constant β>0 <cit.>. However, such strong qualitative bounds are not yet found for tri-colored sum-free sets. In the language of <cit.>, one reason for this is that the equality a+b=2c, which is characteristic of the 3-AP problem, is preserved even after a,b,c are translated, whereas the equality p+q=r, which is characteristic of the tri-colored sum-free set, is not preserved after translation (i.e., (p+t)+(q+t) ≠ (r+t)). The proof for 3-AP free sets exploits this property by iteratively selecting sub-sequences of A, then scaling and translating this set to increase the density |A|/max{A} of the set. It is an open question whether the lower bound can indeed be increased to exactly match our upper bound. § ACKNOWLEDGEMENTS This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement no. 755839. We thank Yufei Zhao for his advice for an alternate and more concise proof of <Ref>.
"authors": [
"Keren Censor-Hillel",
"Yuka Machino",
"Pedro Soto"
],
"categories": [
"cs.DC",
"cs.DS"
],
"primary_category": "cs.DC",
"published": "20231227081547",
"title": "Near-Optimal Fault Tolerance for Efficient Batch Matrix Multiplication via an Additive Combinatorics Lens"
} |
Ercília Sousa (Department of Mathematics, University of Coimbra, Coimbra, Portugal; email: [email protected]) · Cristina Tablino-Possio (Department of Mathematics, University of Milano Bicocca, Milano, Italy; email: [email protected]) · Rolf Krause (Center for Computational Medicine in Cardiology, University of Italian Switzerland, Lugano, Switzerland; email: [email protected]) · Stefano Serra-Capizzano (Department of Science and High Technology, University of Insubria, Como, Italy, email: [email protected]; and Department of Information Technology, Uppsala University, Uppsala, Sweden, email: [email protected])

Nonlocal problems with anti-symmetric and anti-reflective boundary conditions: a computational analysis and numerical comparisons

January 14, 2024

In recent literature, for modeling reasons, fractional differential problems have been considered equipped with anti-symmetric boundary conditions. Twenty years ago the anti-reflective boundary conditions were introduced in a context of signal processing and imaging for increasing the quality of the reconstruction of a blurred signal/image contaminated by noise and for reducing the overall complexity to that of few fast sine transforms i.e. to O(Nlog N) real arithmetic operations, where N is the number of pixels. Here we consider the anti-symmetric boundary conditions and we introduce the anti-reflective boundary conditions in the context of nonlocal problems of fractional differential type. In the latter context, we study both types of boundary conditions, which in reality are similar in the essentials, from the perspective of computational efficiency, by considering nontruncated and truncated versions. Several numerical tests, tables, and visualizations are provided and critically discussed.

§ INTRODUCTION As a starting point, we refer to the following class of nonlinear fractional problems appearing in several works <cit.>, usually in d-dimensional domains. The precise formulation can be described by the following two equations (-Δ)^α/2 u(x) = f(u), x ∈ℝ^d, u(x',-x_d) = -u(x',x_d), x=(x',x_d)∈ℝ^d, with x'=(x_1,…,x_d-1), (-Δ)^α/2 being the fractional Laplacian with fractional order α∈ (1,2), and f(u)=-c(x)u^p, c,p≥ 0. When considering d=1, equation (<ref>) reduces to u(-x)=-u(x) with x_d≡ x and x' not present, while the operator in (<ref>) is defined as (-Δ)^α/2 u(x) = 1/(2cos(πα/2)) [ _-∞^RLD_x^α u(x) + _x^RLD_∞^α u(x) ], according to <cit.>. Furthermore, the left Riemann-Liouville fractional derivative of order α, with 1<α<2 and x∈ℝ, is given by _-∞^RLD_x^α u(x) = 1/Γ(2-α) ∂^2/∂ x^2 ∫_-∞^x u(ξ)(x-ξ)^1-α dξ, while the right Riemann-Liouville fractional derivative of order α, with 1<α<2 and x∈ℝ, is given by _x^RLD_∞^α u(x) = 1/Γ(2-α) ∂^2/∂ x^2 ∫_x^∞ u(ξ)(ξ-x)^1-α dξ. In a context of more general modeling, setting κ_α = (cos(πα/2))^-1, a class of more general weighted operators is considered, Δ_β^α/2 u(x) = κ_α (1+β)/2 _-∞^RLD_x^α u(x) + κ_α (1-β)/2 _x^RLD_∞^α u(x), with α∈ (1,2), β∈ (-1,1), so that Δ_β^α/2 becomes a linear convex combination of the given left and right derivatives, with Δ_0^α/2 ≡ Δ^α/2 being their arithmetic mean for β=0. Therefore, inspired by the previous problem, in the current work we consider a similar problem in one dimension that is time-dependent with anti-symmetric boundary conditions on x=a and x=b, ∂ u/∂ t(x,t) = Δ_β^α/2 u(x,t), x ∈ℝ, t>0, u(-x+a,t) = -u(x+a,t), x <a, u(x+b,t) = -u(-x+b,t), x >b.
The present work is organized as follows. In Section <ref> we describe the problem in an open domain, while in Sections <ref> and <ref> we enforce the two types of BCs connected with the presence of walls, and we discuss the implications of the infinitely many coefficients coming from the nonlocal nature of the underlying operators. Section <ref> deals with numerical experiments, which are of particular interest given the very low complexity of O(Nlog N) real arithmetic operations, with N being the number of space grid points. Section <ref> is devoted to the study of the proper truncations for imposing that the resulting matrices lie in a proper matrix algebra, so further diminishing the computational cost of O(Nlog N) real arithmetic operations, where the hidden constant is substantially lower with respect to the previous nontruncated case. Section <ref> contains final remarks and a mention of a list of open problems. § OPEN DOMAIN Suppose first that we have the fractional diffusion equation in the open domain, ∂ u/∂ t(x,t) = Δ_β^α/2 u(x,t), with α∈ (1,2), β∈ (-1,1). For all α>0, the subsequent recurrence formula defines the Grünwald-Letnikov coefficients g_0^α=1, g_k+1^α = -(α-k)/(k+1) g_k^α, k≥ 0. The Grünwald-Letnikov approximations at (x_j,t_n), for the left and right Riemann-Liouville fractional derivatives respectively, are obtained as _-∞^RLD_x^α u(x_j,t_n) ≈ 1/(Δ x)^α ∑_k=0^∞ g_k^α u(x_j+1-k,t_n), _x^RLD_∞^α u(x_j,t_n) ≈ 1/(Δ x)^α ∑_k=0^∞ g_k^α u(x_j-1+k,t_n). We can consider other types of approximations that would end up having different values for the coefficients. However, the resulting matrix structure would remain unchanged, since it is essentially decided by the nonlocal nature of the underlying continuous operator. Hence, independently of the approximation scheme, as long as the grid points are equispaced, the resulting matrix structure is the same, and the latter represents the main target of our present study, since the computational cost of the related algorithms strongly depends on matrix structural features. Let U_j^n represent the approximate solution of u(x_j,t_n) in the discrete domain and define μ_α = κ_α Δ t/(Δ x)^α. The Euler explicit method to approximate the fractional diffusion equation is given by U_j^n+1 = U_j^n + μ_α ( (1+β)/2 ∑_k=0^∞ g_k^α U_j+1-k^n + (1-β)/2 ∑_k=0^∞ g_k^α U_j-1+k^n ), j ∈ℤ. The matrix form of the numerical method in the open domain takes into consideration that the function goes to zero as we go to infinity, and we have U^n+1 = ( I + μ_α A_β) U^n, with U^n=[ U_-N^n, …, U_N^n]^T, I being the identity matrix and the matrix A_β expressed as A_β = (1+β)/2 A_L + (1-β)/2 A_R, where A_L = [ [ g_1^α g_0^α 0 … 0 0; g_2^α g_1^α g_0^α … 0 0; g_3^α g_2^α g_1^α … 0 0; ⋮ ⋮ ⋮ ⋮ ⋮; g_2N+1^α g_2N^α g_2N-1^α … g_2^α g_1^α ] ] and A_R = A_L^T, that is A_R = [ [ g_1^α g_2^α g_3^α … g_2N^α g_2N+1^α; g_0^α g_1^α g_2^α … g_2N-1^α g_2N^α; 0 g_0^α g_1^α … g_2N-2^α g_2N-1^α; ⋮ ⋮ ⋮ ⋮ ⋮; 0 0 0 … g_0^α g_1^α ] ].
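For illustration, the following Python sketch (ours, not from the paper; all discretization parameters are arbitrary) computes the Grünwald-Letnikov weights via the recurrence above, assembles A_L and A_R as Toeplitz matrices, and performs one step of the explicit Euler scheme.

import numpy as np
from scipy.linalg import toeplitz

def gl_weights(alpha, K):
    g = np.empty(K)
    g[0] = 1.0
    for k in range(K - 1):
        g[k + 1] = -(alpha - k) / (k + 1) * g[k]   # g_{k+1} = -((alpha-k)/(k+1)) g_k
    return g

alpha, beta, N = 1.5, 0.0, 200
g = gl_weights(alpha, 2 * N + 2)
col = g[1:2 * N + 2]                               # first column: g_1, ..., g_{2N+1}
row = np.concatenate(([g[1], g[0]], np.zeros(2 * N - 1)))
A_L = toeplitz(col, row)                           # lower Hessenberg Toeplitz
A_beta = 0.5 * (1 + beta) * A_L + 0.5 * (1 - beta) * A_L.T

dx, dt = 0.05, 1e-3
mu = dt / (dx ** alpha * np.cos(np.pi * alpha / 2))   # kappa_alpha * dt / dx^alpha
x = np.linspace(-5, 5, 2 * N + 1)
U = np.exp(-x ** 2)                                # rapidly decaying initial data
U = U + mu * (A_beta @ U)                          # one step of explicit Euler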
As already mentioned, the choice of β=0 leads to the Riesz operator (fractional Laplacian in 1D) and A_0 = (1/2) A_L + (1/2) A_R. Therefore, in accordance with the continuous operator, we find a symmetric matrix of the form A_0 = 1/2 [ [ 2g_1^α g_0^α+g_2^α g_3^α … g_2N^α g_2N+1^α; g_2^α+g_0^α 2g_1^α g_0^α+g_2^α … g_2N-1^α g_2N^α; g_3^α g_2^α+g_0^α 2g_1^α … g_2N-2^α g_2N-1^α; ⋮ ⋮ ⋮ ⋮ ⋮; g_2N+1^α g_2N^α g_2N-1^α … g_2^α+g_0^α 2g_1^α ] ]. § ANTI-SYMMETRIC BOUNDARIES We move now to the problem with anti-symmetric boundaries ∂ u/∂ t(x,t) = Δ_β^α/2 u(x,t), x∈ℝ, u(-x+a,t) = -u(x+a,t), x < a, u(x+b,t) = -u(-x+b,t), x > b, with an initial condition u(x,0)=u_0(x), x∈ℝ, and an equispaced discrete domain x_j = a+jΔ x, j=0,1,…,N, x_0=a, x_N=b. As a consequence we find u(-x_j+a,t) = -u(x_j+a,t), x < a, u(x_j+b,t) = -u(-x_j+b,t), x > b. When an anti-symmetric boundary condition is imposed at x=a, from (<ref>) we deduce U_j+1-k^n = -U_-j-1+k^n. The approximation of the left fractional Riemann-Liouville derivative becomes _-∞^RLD_x^α,ref u(x_j,t_n) ≈ 1/(Δ x)^α ( ∑_k=0^j+1 g_k^α U_j+1-k^n + ∑_k=j+2^∞ g_k^α U_j+1-k^n ) = 1/(Δ x)^α ( ∑_k=0^j+1 g_k^α U_j+1-k^n - ∑_k=j+2^∞ g_k^α U_k-j-1^n ). However, the fact that the second sum goes until infinity is problematic, since the anti-symmetric condition would send points to the right boundary wall and not inside the interior domain (a,b). Notice that the interior domain (a,b) is called field of values in imaging and signal processing terminology. Suppose that we decide to stop the second sum as described in what follows, that is, _-∞^RLD_x^α,ref u(x_j,t_n) ≈ 1/(Δ x)^α ( ∑_k=0^j+1 g_k^α U_j+1-k^n - ∑_k=j+2^N+j+1 g_k^α U_k-j-1^n ). This decision implies that we consider a bounded wall, for the anti-symmetric boundary, of the same size as the interior domain. The truncation gives an approximation that makes sense if we take into consideration the fact that the coefficient g_k^α goes to zero as k tends to infinity. If the boundary were only at one side, the related jumping problem between boundaries would not exist. With boundaries at both sides, we can suppose that we have this reflecting boundary as an intermediate boundary, where at the final boundary wall the function could be set to zero. For the anti-symmetric boundary condition at x=b, by following a similar approach, we infer _x^RLD_∞^α,ref u(x_j,t_n) ≈ 1/(Δ x)^α ( ∑_k=0^N-j+1 g_k^α U_j-1+k^n + ∑_k=N-j+2^∞ g_k^α U_j-1+k^n ). We also need to stop at a finite point as previously, that is _x^RLD_∞^α,ref u(x_j,t_n) ≈ 1/(Δ x)^α ( ∑_k=0^N-j+1 g_k^α U_j-1+k^n + ∑_k=N-j+2^2N-j+1 g_k^α U_j-1+k^n ). Owing to u(x+b) = -u(-x+b), for k≥ N-j+2 we have U_j-1+k = U_N-(N-j+1-k) = -U_N+(N-j+1-k) = -U_2N-j+1-k. Therefore _x^RLD_∞^α,ref u(x_j,t_n) ≈ 1/(Δ x)^α ( ∑_k=0^N-j+1 g_k^α U_j-1+k^n - ∑_k=N-j+2^2N-j+1 g_k^α U_2N-j+1-k^n ). Consider the explicit Euler scheme to approximate equation (<ref>) given by U_j^n+1 = U_j^n + μ_α δ^anti_α,β U_j^n, with μ_α = κ_α Δ t/(Δ x)^α and δ^anti_α,β U_j^n = (1+β)/2 ( ∑_k=0^j+1 g_k^α U_j+1-k^n - ∑_k=j+2^N+j+1 g_k^α U_k-j-1^n ) + (1-β)/2 ( ∑_k=0^N-j+1 g_k^α U_j-1+k^n - ∑_k=N-j+2^2N-j+1 g_k^α U_2N-j+1-k^n ).
In the current case the matrix form of the problem is U^n+1 = ( I + μ_α A_β^anti) U^n, with U^n=[ U_0^n, …, U_N^n]^T, I being the identity matrix and with the matrix A_β^anti being an (N+1)×(N+1) matrix given by A_β^anti = (1+β)/2 A_L^anti + (1-β)/2 A_R^anti with A_L^anti = [ [ g_1^α g_0^α-g_2^α -g_3^α … -g_N^α -g_N+1^α; g_2^α g_1^α-g_3^α g_0^α-g_4^α … -g_N+1^α -g_N+2^α; g_3^α g_2^α-g_4^α g_1^α-g_5^α … -g_N+2^α -g_N+3^α; ⋮ ⋮ ⋮ ⋮ ⋮; g_N+1^α g_N^α-g_N+2^α g_N-1^α-g_N+3^α … g_2^α-g_2N^α-g_0^α g_1^α-g_2N+1^α ] ] and A_R^anti = [ [ g_1^α-g_2N+1^α g_2^α-g_2N^α-g_0^α g_3^α-g_2N-1^α … g_N^α-g_N+2^α g_N+1^α; g_0^α-g_2N^α g_1^α-g_2N-1^α g_2^α-g_2N-2^α … g_N-1^α-g_N+1^α g_N^α; 0 -g_2N-1^α g_0^α-g_2N-2^α g_1^α-g_2N-3^α … g_N-1^α; ⋮ ⋮ ⋮ ⋮ ⋮; 0 -g_N+1^α 0 -g_N^α 0 -g_N-1^α … g_0^α-g_2^α g_1^α ] ]. Notice that the additional terms appearing because of the presence of the boundary are the subtracted reflected coefficients. We also notice that the presence of the correction term g_0^α in the first and last equations directly originates from the fact that the approximations of (<ref>) and (<ref>) start from k=0, that is, the approximation across the considered point becomes an approximation across the boundary, so requiring a proper reflection inside. When β=0 we find A_0^anti = (1/2) A_L^anti + (1/2) A_R^anti, so that A_0^anti = S + B, where S is a symmetric matrix and B only has non-zero elements on the first and last rows and columns in the following manner: B = 1/2 [ [ 0 -g_0^α 0 … 0 0; g_2^α 0 0 … 0 g_N^α; g_3^α 0 0 … g_N-1^α; ⋮ ⋮ ⋮ ⋮ ⋮; g_N-1^α 0 0 0 g_3^α; g_N^α 0 0 0 g_2^α; 0 0 0 … -g_0^α 0 ] ]. To sum up, the matrix structure is made evident in the following way: A_L^anti = T_L^anti - H̃_L^anti - R̃_L^anti, A_R^anti = T_R^anti - H̃_R^anti - R̃_R^anti, where T_L^anti = [ [ g_1^α g_0^α 0 … 0 0; g_2^α g_1^α g_0^α … 0 0; g_3^α g_2^α g_1^α … 0 0; ⋮ ⋮ ⋮ ⋮ ⋮; g_N+1^α g_N^α g_N-1^α … g_2^α g_1^α ] ], T_R^anti = (T_L^anti)^T are Toeplitz matrices in lower and upper Hessenberg form, respectively, H̃_L^anti = [ [ 0 g_2^α g_3^α … g_N^α g_N+1^α; 0 g_3^α g_4^α … g_N+1^α g_N+2^α; 0 g_4^α g_5^α … g_N+2^α g_N+3^α; ⋮ ⋮ ⋮ ⋮ ⋮; 0 g_N+2^α g_N+3^α … g_2N^α g_2N+1^α ] ], H̃_R^anti = JH̃_L^anti J, J being the flip matrix, are Hankel matrices apart from the first and last zero columns, respectively, and R̃_L^anti = [ [ 0 0 … 0 0 0; ⋮ ⋮ ⋮ ⋮ ⋮; 0 0 … 0 0 0; 0 0 … 0 g_0^α 0 ] ], R̃_R^anti = JR̃_L^anti J = [ [ 0 g_0^α 0 … 0 0; 0 0 0 … 0 0; ⋮ ⋮ ⋮ ⋮ ⋮; 0 0 0 … 0 0 ] ] are just rank-one correction matrices. Thus, the matrix A_0^anti can be written as A_0^anti = 1/2( T_L^anti + T_R^anti - H_L^anti - H_R^anti ) + R_0^anti, where H_L^anti and H_R^anti denote the full Hankel matrices linked to H̃_L^anti and H̃_R^anti, respectively (obtained by filling in the zero column according to the Hankel structure), and where R_0^anti = 1/2 [ [ g_1^α -g_0^α 0 … 0 g_N+1^α; g_2^α 0 0 … 0 g_N^α; g_3^α 0 0 … 0 g_N-1^α; ⋮ ⋮ ⋮ ⋮ ⋮ ⋮; g_N^α 0 0 … 0 g_2^α; g_N+1^α 0 0 … -g_0^α g_1^α ] ]. Clearly, S_0^anti = 1/2( T_L^anti + T_R^anti - H_L^anti - H_R^anti ) is a symmetric matrix and, in particular, the matrix T_0^anti = 1/2 (T_L^anti + T_R^anti) is the real symmetric Toeplitz matrix whose Fourier coefficients are defined as t_0 = g_1^α, t_1 = (g_0^α+g_2^α)/2, t_i = g_i+1^α/2, i=2, …, N.
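As a numerical consistency check of the decomposition above, the following Python sketch (ours, with arbitrary small parameters) assembles A_0^anti directly from the truncated sums of the scheme, folding the out-of-range indices back in by anti-symmetry, and verifies the split A_0^anti = S_0^anti + R_0^anti.

import numpy as np
from scipy.linalg import toeplitz, hankel

def gl_weights(alpha, K):
    g = np.empty(K)
    g[0] = 1.0
    for k in range(K - 1):
        g[k + 1] = -(alpha - k) / (k + 1) * g[k]
    return g

alpha, N = 1.3, 6
g = gl_weights(alpha, 2 * N + 2)                 # g_0 .. g_{2N+1}

A = np.zeros((N + 1, N + 1))
def add(j, idx, w):
    # Fold indices just outside [0, N] back in via U_{-1} = -U_1, U_{N+1} = -U_{N-1}.
    if idx == -1:
        A[j, 1] -= w
    elif idx == N + 1:
        A[j, N - 1] -= w
    else:
        A[j, idx] += w

for j in range(N + 1):
    for k in range(j + 2):
        add(j, j + 1 - k, 0.5 * g[k])            # left sum, interior part
    for k in range(j + 2, N + j + 2):
        A[j, k - j - 1] -= 0.5 * g[k]            # left sum, reflected part
    for k in range(N - j + 2):
        add(j, j - 1 + k, 0.5 * g[k])            # right sum, interior part
    for k in range(N - j + 2, 2 * N - j + 2):
        A[j, 2 * N + 1 - j - k] -= 0.5 * g[k]    # right sum, reflected part

# Toeplitz-minus-Hankel-plus-correction split: A_0^anti = T - H + R.
t = np.concatenate(([g[1], 0.5 * (g[0] + g[2])], 0.5 * g[3:N + 2]))
T = toeplitz(t)
h = g[1:2 * N + 2] + g[2 * N + 1:0:-1]           # h_s = g_{s+1} + g_{2N+1-s}
H = 0.5 * hankel(h[:N + 1], h[N:])
R = np.zeros((N + 1, N + 1))
R[:, 0] += 0.5 * g[1:N + 2]
R[:, N] += 0.5 * g[1:N + 2][::-1]
R[0, 1] -= 0.5 * g[0]
R[N, N - 1] -= 0.5 * g[0]
assert np.allclose(A, T - H + R)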
§ ANTI-REFLECTIVE BOUNDARIES A possible proposal to restore the continuity of the function, and not only of its derivative, is to consider anti-reflective boundaries, as first introduced in the seminal paper <cit.> in a context of signal/image deblurring and restoration: see also <cit.> for applications and further results. More precisely, we set u(-x+a,t) - u(a,t) = u(a,t) - u(x+a,t), x < a, u(x+b,t) - u(b,t) = u(b,t) - u(-x+b,t), x > b. Thus, by considering the same arguments as before with respect to the boundaries, the approximation of the left fractional Riemann-Liouville derivative becomes _-∞^RLD_x^α,ref u(x_j,t_n) ≈ 1/(Δ x)^α ( ∑_k=0^j+1 g_k^α U_j+1-k^n + ∑_k=j+2^∞ g_k^α U_j+1-k^n ) = 1/(Δ x)^α ( ∑_k=0^j+1 g_k^α U_j+1-k^n + ∑_k=j+2^∞ g_k^α ( 2U_0^n - U_k-(j+1)^n ) ) ≈ 1/(Δ x)^α ( ∑_k=0^j+1 g_k^α U_j+1-k^n + ∑_k=j+2^N+j+1 g_k^α ( 2U_0^n - U_k-(j+1)^n ) ), and the approximation of the right fractional Riemann-Liouville derivative becomes _x^RLD_∞^α,ref u(x_j,t_n) ≈ 1/(Δ x)^α ( ∑_k=0^N-j+1 g_k^α U_j-1+k^n + ∑_k=N-j+2^∞ g_k^α U_j-1+k^n ) = 1/(Δ x)^α ( ∑_k=0^N-j+1 g_k^α U_j-1+k^n + ∑_k=N-j+2^∞ g_k^α ( 2 U_N^n - U_2N-j+1-k^n ) ) ≈ 1/(Δ x)^α ( ∑_k=0^N-j+1 g_k^α U_j-1+k^n + ∑_k=N-j+2^2N-j+1 g_k^α ( 2 U_N^n - U_2N-j+1-k^n ) ). By considering again the explicit Euler scheme, the matrix form of the problem is U^n+1 = (I + μ_α A_β^antiR) U^n, with U^n=[ U_0^n, …, U_N^n]^T, where I is the identity matrix and the matrix A_β^antiR is an (N+1)×(N+1) matrix expressed as A_β^antiR = (1+β)/2 A_L^antiR + (1-β)/2 A_R^antiR. More in detail, the obtained structure takes the more explicit form A_L^antiR = T_L^antiR - H̃_L^antiR - R̂_L^antiR + Ẑ_L^antiR, A_R^antiR = T_R^antiR - H̃_R^antiR - R̂_R^antiR + Ẑ_R^antiR, where T_L^antiR = T_L^anti, T_R^antiR = T_R^anti, H̃_L^antiR = H̃_L^anti, H̃_R^antiR = H̃_R^anti, R̂_L^antiR = [ [ 0 0 … 0 0 0; ⋮ ⋮ ⋮ ⋮ ⋮; 0 0 … 0 0 0; 0 0 … 0 g_0^α -2g_0^α ] ], R̂_R^antiR = JR̂_L^antiR J, and Ẑ_L^antiR = [ 2H̃_L^antiR 𝐞 | O^(N+1)×N ], Ẑ_R^antiR = [ O^(N+1)×N | 2H̃_R^antiR 𝐞 ], with 𝐞=[1, …, 1]^T and O^(N+1)×N the zero matrix of dimension (N+1)×N. Thus, as in the case of anti-symmetric boundaries, for β=0 the matrix A_0^antiR can be written as A_0^antiR = 1/2( T_L^antiR + T_R^antiR - H_L^antiR - H_R^antiR ) + R_0^antiR = S_0^antiR + R_0^antiR, where again H_L^antiR and H_R^antiR denote the full Hankel matrices linked to H̃_L^antiR and H̃_R^antiR, respectively, and where R_0^antiR = 1/2 [ [ 2g_0^α+g_1^α+z_1 -g_0^α 0 … 0 g_N+1^α+z_N+1; g_2^α+z_2 0 0 … 0 g_N^α+z_N; g_3^α+z_3 0 0 … 0 g_N-1^α+z_N-1; ⋮ ⋮ ⋮ ⋮ ⋮ ⋮; g_N^α+z_N 0 0 … 0 g_2^α+z_2; g_N+1^α+z_N+1 0 0 … -g_0^α 2g_0^α+g_1^α+z_1 ] ], with z_r = 2 ∑_k=r+1^N+r g_k^α, r=1, …, N+1. Notice that the matrix S_0^antiR equals the corresponding matrix S_0^anti obtained in the case of anti-symmetric boundaries. § NUMERICAL EXPERIMENTS In the following we consider numerical experiments related to the more relevant case of application of the Implicit Euler scheme or the Crank-Nicolson scheme to the solution of (<ref>)-(<ref>). Indeed, in such cases the matrix form of the problem is (I - μ_α A_β )U^n+1 = U^n, and (I - μ_α/2 A_β )U^n+1 = ( I + μ_α/2 A_β )U^n, respectively, where A_β is as in (<ref>) or as in (<ref>). Since the implicit schemes involve the solution of a linear system, in the following we will focus on the application of Krylov methods and related effective preconditioning techniques in the case β=0. From that point of view, we start by giving numerical evidence of the spectral analysis of the involved structured matrices in the case β=0, namely A_0^anti, T_0^anti, and A_0^antiR. First of all, we highlight that the minimal eigenvalue goes to zero asymptotically as m^-α, m being the matrix dimension, according to the order of the zero of the Toeplitz generating function (see <cit.> and references therein) f_α,T_0^anti(θ) = - (f_α,T_L^anti(θ) + f_α,T_R^anti(θ))/2, where f_α,T_L^anti(θ) = ∑_k=-1^∞ g_k+1^α e^ikθ = e^-iθ(1+ e^i(θ+π))^α and f_α,T_R^anti(θ) is the complex conjugate of f_α,T_L^anti(θ), since T_R^anti = (T_L^anti)^T.
Indeed, in Table <ref> we report the minimal eigenvalue of the matrix X_m∈ℝ^m×m, with X ∈ {A_0^anti, T_0^anti, A_0^antiR}, for increasing dimension, together with the corresponding quantity γ(X_m) = log_2 ( λ_min(X_m)/λ_min(X_2m) ), whose limit for the dimension m tending to infinity is exactly α. In fact, the latter is expected since the related Toeplitz matrix admits a generating function in the Wiener class which is nonnegative and with a unique zero of order α at θ=0: hence, in the light of the results in <cit.>, we know that the minimal eigenvalue of T_m(f_α,T_0^anti) is asymptotic to m^-α. In addition, Figure <ref>.a highlights how the whole eigenvalue distribution of the matrix A_0^anti mimics in quite good measure the quoted generating function, even in the case of moderate matrix dimension (m=4000), while for the sake of completeness in Figure <ref>.b the absolute error with respect to the generating function is plotted for different α values. Then, we consider the spectral analysis of the whole matrix 𝒜_0^anti = I - μ_α A_0^anti and of the Toeplitz counterpart 𝒯_0^anti = I - μ_α T_0^anti. In Table <ref> the minimal and maximal eigenvalues are reported, together with the spectral condition number K_2, for increasing dimensions in the case k=1 and Δ t=Δ x, for the Implicit Euler scheme and for different values of the parameter α. The same analysis is considered in Table <ref> with respect to the Crank-Nicolson scheme. The worsening of the condition number for increasing dimension as the parameter α increases, approaching the standard second-order differential problem case, is evident, and is due to a faster growth and decrease of the maximal and minimal eigenvalues, respectively. Then, we consider the GMRES method, alone or with suitable preconditioners taken in the algebra of the circulant matrices and in the algebra of sine transforms (also called τ matrices), for the solution of a linear system with matrix 𝒜_0^anti. More precisely, we compare the case of no preconditioning, Strang circulant preconditioning, Frobenius-optimal circulant preconditioning, natural τ preconditioning, and Frobenius-optimal τ preconditioning (see <cit.> and references therein). In Table <ref> the number of iterations required to reach convergence within a tolerance of 10^-6 is reported in the case k=1 and Δ t=Δ x, for both the Implicit Euler and Crank-Nicolson schemes and for different values of the parameter α. The constant number of iterations, independent of the dimension, testifies to the effectiveness and robustness of the proposed preconditioning strategies. Negligible dependence on the chosen scheme is also observed. Finally, we test the robustness also when increasing the value k: in Table <ref> the very same analysis is reported, giving evidence of the strong robustness of the Strang circulant and τ preconditioners. Tables <ref>, <ref>, and <ref> collect the same tests in the case of anti-reflective BCs.
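The τ preconditioning used above can be sketched in a few lines. The following Python snippet (ours, not from the paper; it preconditions only the Toeplitz counterpart 𝒯_0^anti = I − μ_α T_0^anti, with arbitrary μ_α and dimension) applies the natural τ preconditioner by sampling the symbol at the DST-I grid and performing two DST-I calls per GMRES iteration, i.e., O(N log N) work per application.

import numpy as np
from scipy.fft import dst
from scipy.linalg import toeplitz
from scipy.sparse.linalg import LinearOperator, gmres

def gl_weights(alpha, K):
    g = np.empty(K)
    g[0] = 1.0
    for k in range(K - 1):
        g[k + 1] = -(alpha - k) / (k + 1) * g[k]
    return g

alpha, mu, n = 1.5, 1.0, 511                     # system size n + 1 = 512
g = gl_weights(alpha, n + 2)
t = np.concatenate(([g[1], 0.5 * (g[0] + g[2])], 0.5 * g[3:n + 2]))
Amat = np.eye(n + 1) - mu * toeplitz(t)          # the Toeplitz part I - mu*T_0^anti

# Natural tau preconditioner: same symbol sampled at the DST-I grid points.
theta = np.arange(1, n + 2) * np.pi / (n + 2)
lam = 1.0 - mu * (t[0] + 2.0 * np.cos(np.outer(theta, np.arange(1, n + 1))) @ t[1:])
apply_prec = lambda v: dst(dst(v, type=1, norm='ortho') / lam, type=1, norm='ortho')
prec = LinearOperator(Amat.shape, matvec=apply_prec, dtype=float)

b = np.ones(n + 1)
x, info = gmres(Amat, b, M=prec)
assert info == 0 and np.linalg.norm(Amat @ x - b) <= 1e-5 * np.linalg.norm(b)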
§ TRUNCATED APPROXIMATIONS AND THE ANTI-REFLECTIVE TRANSFORM In order to increase the computational efficiency, we may consider a differently truncated version of the previous approximations of the left and right fractional Riemann-Liouville derivatives, suitably tailored so that the arising matrix belongs to the anti-reflective matrix algebra <cit.> and the solution of the linear systems can be achieved within O(Nlog N) real arithmetic operations via a few fast discrete sine transforms of type I (for other fast transforms and their use see <cit.> and the references there reported). As emphasized in <cit.>, the cost of one fast discrete transform of type I is around one half of the cost of the celebrated fast Fourier transform. Furthermore, the related solver is of direct type and we do not need any preconditioned Krylov iterative solver, so that the overall cost is much lower when compared with the techniques proposed for the nontruncated versions. More precisely, we will consider the approximations as follows: _-∞^RLD_x^α,ref u(x_j,t_n) ≈ 1/(Δ x)^α ∑_k=0^j+1 g_k^α U_j+1-k^n + 1/(Δ x)^α ∑_k=j+2^N g_k^α U_j+1-k^n, _x^RLD_∞^α,ref u(x_j,t_n) ≈ 1/(Δ x)^α ∑_k=0^N-j+1 g_k^α U_j-1+k^n + 1/(Δ x)^α ∑_k=N-j+2^N g_k^α U_j-1+k^n, where the number of terms approximating the derivatives at each point is constant. Thus, before imposing the boundary conditions, the banded matrix structure of the linear system is as follows: A_0^anti,full U^n,full = b with U^n,full = [ U_-N^n, …, U_-1^n | U_0^n, …, U_N^n | U_N+1^n, …, U_2N^n ]^T, and 2A_0^anti,full takes the form [ banded matrix with rows (0, …, 0, g_N^α, g_N-1^α, …, g_2^α, g_1^α, g_0^α, 0, …, 0) ] + [ banded matrix with rows (0, …, 0, g_0^α, g_1^α, g_2^α, …, g_N-1^α, g_N^α, 0, …, 0) ], where, in row j (j=0,…,N), the first band (g_N^α, g_N-1^α, …, g_1^α, g_0^α) occupies columns j+1,…,N+j+1 and the second band (g_0^α, g_1^α, …, g_N^α) occupies columns N+j-1,…,2N+j-1 (columns being indexed from 0 to 3N). Thus, by imposing the anti-symmetric conditions, we square the system and the matrix becomes 1/2 [ [ 2g_1^α (g_0^α+g_2^α)-(g_0^α+g_2^α) g_3^α-g_3^α … g_N-1^α-g_N-1^α g_N^α-g_N^α 0; g_0^α+g_2^α 2g_1^α-g_3^α g_0^α+g_2^α-g_4^α g_3^α-g_5^α … g_N-1^α g_N^α; g_3^α g_0^α+g_2^α-g_4^α ⋮ ⋮ ⋮ g_N-2^α-g_N^α g_N-1^α; ⋮ ⋮ ⋮ ⋮ ⋮ ⋮; g_N^α g_N-1^α g_N-2^α-g_N^α … 2g_1^α-g_3^α g_0^α+g_2^α; 0 g_N^α-g_N^α g_N-1^α-g_N-1^α … g_3^α-g_3^α (g_0^α+g_2^α)-(g_0^α+g_2^α) 2g_1^α ] ], that is, the anti-reflective matrix 1/2 [ [ 2g_1^α 0 0 … … 0 0 0; g_0^α+g_2^α g_N^α; g_3^α g_N-1^α; ⋮ τ(t_0,t_1,t_2, …,t_N) ⋮; g_N-1^α g_3^α; g_N^α g_0^α+g_2^α; 0 0 0 … … 0 0 2g_1^α ] ], where τ(t_0,t_1,t_2, …,t_N) is the matrix belonging to the τ algebra (associated to the discrete sine transform of type I) with coefficients {t_i}_i=0,…,N given by the Toeplitz coefficients defined in (<ref>). In the same way, the matrix obtained by imposing the anti-reflective conditions shows the expression 1/2 [ [ 2g_1^α + z̃_1 0 0 … … 0 0 0; g_0^α+g_2^α + z̃_2 g_N^α + z̃_N; g_3^α + z̃_3 g_N-1^α + z̃_N-1; ⋮ τ(t_0,t_1,t_2, …,t_N) ⋮; g_N-1^α + z̃_N-1 g_3^α + z̃_3; g_N^α + z̃_N g_0^α+g_2^α + z̃_2; 0 0 0 … … 0 0 2g_1^α + z̃_1 ] ], where z̃_r = 2 ∑_k=r+1^N g_k^α, r=1, …, N, are just the truncated versions of the previously defined quantities z_r. Therefore, the choice between the two different types of conditions pertains to the quality of the approximation and does not influence the computational efficiency. In Figures <ref> and <ref> the differences in applying the boundary conditions in the full and truncated versions on A_0^anti,full are highlighted. Notice that the external yellow and blue full wings in Figure <ref> are closed inside the matrix A_0^anti with a purple overlap in the central part in the first case, while no overlap originates in the truncated one. § CONCLUSIONS In the present work we have combined the idea of fractional differential equations with anti-symmetric and anti-reflective boundary conditions, where the latter were introduced in a context of signal processing and imaging for increasing the quality of the reconstruction of a blurred signal/image contaminated by noise and for reducing the overall complexity to that of a few fast sine transforms, i.e., to O(Nlog N) real arithmetic operations, where N is the number of pixels. The idea of ending up with matrix structures belonging to the anti-reflective algebra, or well approximated by sine transform matrices, seems very good also in the considered setting of nonlocal fractional problems.
In fact, we should emphasize that, from a matrix viewpoint, this is not surprising, since both operators, the fractional one and those related to blurring in imaging, are of nonlocal type. The only relevant difference lies in the subspace of ill-conditioning, which corresponds to low frequencies in the fractional differential case and to high frequencies in the case of blurring. Several numerical tests, tables, and visualizations have been provided and critically discussed, also in connection with the truncation of the two types of boundary conditions.

More work remains to be done, especially in connection with the following items:

* multidimensional domains and more involved nonlocal operators, which could be treated since the τ algebra and the related sine I and anti-reflective transforms admit multilevel versions via tensorizations <cit.>;
* non-equispaced grids or variable coefficients, which could be treated thanks to the theory of generalized locally Toeplitz matrix-sequences <cit.>;
* a numerical and theoretical comparison in terms of precision between the truncated and nontruncated versions of the considered boundary conditions, even if a few numerical tests suggest that the difference is not relevant.

paperAR A. Aricò, M. Donatelli, S. Serra-Capizzano, The anti-reflective algebra: structural and computational analysis with application to image deblurring and denoising. Calcolo 45-3 (2008), 149–175.
paperART A. Aricò, M. Donatelli, J. Nagy, S. Serra-Capizzano, The anti-reflective transform and regularization by filtering. Numerical linear algebra in signals, systems and control, 1–21, Lect. Notes Electr. Eng., 80, Springer, Dordrecht, 2011.
BC D. Bini, M. Capovani, Spectral and computational properties of band symmetric Toeplitz matrices, Linear Algebra Appl. 52/53 (1983), 99–126.
BG-extr A. Böttcher, S. Grudsky, On the condition numbers of large semi-definite Toeplitz matrices, Linear Algebra Appl. 279 (1998), no. 1-3, 285–301.
CN R. Chan, M. Ng, Conjugate gradient methods for Toeplitz systems, SIAM Rev. 38 (1996), no. 3, 427–482.
DiB-S F. Di Benedetto, S. Serra-Capizzano, Optimal multilevel matrix algebra operators, Linear and Multilinear Algebra 48 (2000), no. 1, 35–66.
dip2023 S. Dipierro, J. Thompson, E. Valdinoci, On the Harnack inequality for antisymmetric s-harmonic functions. J. Funct. Anal. 285 (2023), paper 109917.
dip2023p2 S. Dipierro, G. Poggesi, J. Thompson, E. Valdinoci, On the Harnack inequality for antisymmetric s-harmonic functions. arXiv preprint, 2023.
imaging-AR-2 M. Donatelli, C. Estatico, A. Martinelli, S. Serra-Capizzano, Improved image deblurring with anti-reflective boundary conditions and re-blurring. Inverse Problems 22-6 (2006), 2035–2053.
DMS M. Donatelli, M. Mazza, S. Serra-Capizzano, Spectral analysis and structure preserving preconditioners for fractional diffusion equations, J. Comput. Phys. 307 (2016), 262–279.
imaging-AR-1 M. Donatelli, S. Serra-Capizzano, Anti-reflective boundary conditions and re-blurring. Inverse Problems 21-1 (2005), 169–182.
GLT-BookII C. Garoni, S. Serra-Capizzano, Generalized locally Toeplitz sequences: theory and applications. Vol. II, Springer, Cham, 2018.
GLT-exposition-eng C. Garoni, H. Speleers, S.-E. Ekström, S. Serra-Capizzano, T.J.R. Hughes, Symbol-based analysis of finite element and isogeometric B-spline discretizations of eigenvalue problems: exposition and review, Arch. Comput. Methods Eng. 26 (2019), no. 5, 1639–1690.
KO T. Kailath, V. Olshevsky, Displacement structure approach to discrete-trigonometric-transform based preconditioners of G.
Strang type and of T. Chan type, SIAM J. Matrix Anal. Appl. 26 (2005), no. 3, 706–734.
lis2020 A. Lischke, G. Pang, M. Gulian, F. Song, C. Glusa, X. Zheng, Z. Mao, W. Cai, M.M. Meerschaert, M. Ainsworth, G.E. Karniadakis, What is the fractional Laplacian? A comparative review with new results. J. Comput. Phys. 404 (2020), paper 109009.
S-extr2 S. Serra-Capizzano, On the extreme spectral properties of Toeplitz matrices generated by L^1 functions with several minima/maxima, BIT 36 (1996), no. 1, 135–142.
S-extr1 S. Serra-Capizzano, On the extreme eigenvalues of Hermitian (block) Toeplitz matrices, Linear Algebra Appl. 270 (1998), 109–129.
tau-theory1 S. Serra-Capizzano, Superlinear PCG methods for symmetric Toeplitz systems, Math. Comp. 68 (1999), no. 226, 793–803.
tau-theory2 S. Serra-Capizzano, Toeplitz preconditioners constructed from linear approximation processes. SIAM J. Matrix Anal. Appl. 20 (1999), no. 2, 446–465.
S-AR-proposal S. Serra-Capizzano, A note on antireflective boundary conditions and fast deblurring models, SIAM J. Sci. Comput. 25-4 (2004), 1307–1325.
Van C. Van Loan, Computational frameworks for the fast Fourier transform, Frontiers in Applied Mathematics, 10. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1992.
zhu2022 R. Zhuo, C. Li, Classification of anti-symmetric solutions to nonlinear fractional Laplace equations. Calc. Var. 61 (2022), paper 17. | http://arxiv.org/abs/2312.16485v1 | {
"authors": [
"Ercília Sousa",
"Cristina Tablino-Possio",
"Rolf Krause",
"Stefano Serra-Capizzano"
],
"categories": [
"math.NA",
"cs.NA"
],
"primary_category": "math.NA",
"published": "20231227091900",
"title": "Nonlocal problems with anti-symmetric and anti-reflective boundary conditions: a computational analysis and numerical comparisons"
} |
Generalizable Task Representation Learning for Offline Meta-Reinforcement Learning with Data Limitations

National Key Laboratory for Novel Software Technology, Nanjing University, China
School of Artificial Intelligence, Nanjing University, China
{zhourz, gaocx}@lamda.nju.edu.cn, {zzzhang, yuy}@nju.edu.cn

Generalization and sample efficiency have been long-standing issues concerning reinforcement learning, and thus the field of Offline Meta-Reinforcement Learning (OMRL) has gained increasing attention due to its potential for solving a wide range of problems with static and limited offline data. Existing OMRL methods often assume sufficient training tasks and data coverage to apply contrastive learning to extract task representations. However, such assumptions are not applicable in several real-world applications and thus undermine the generalization ability of the representations. In this paper, we consider OMRL with two types of data limitations: limited training tasks and limited behavior diversity, and propose a novel algorithm called GENTLE for learning generalizable task representations in the face of data limitations. GENTLE employs a Task Auto-Encoder (TAE), an encoder-decoder architecture that extracts the characteristics of the tasks. Unlike existing methods, TAE is optimized solely by reconstruction of the state transition and reward, which captures the generative structure of the task models and produces generalizable representations when training tasks are limited. To alleviate the effect of limited behavior diversity, we consistently construct pseudo-transitions to align the data distribution used to train TAE with the data distribution encountered during testing. Empirically, GENTLE significantly outperforms existing OMRL methods on both in-distribution tasks and out-of-distribution tasks across both the given-context protocol and the one-shot protocol.

§ INTRODUCTION

Despite the success of Reinforcement Learning (RL) in scenarios where online interaction is consistently available, RL is hindered in real-world applications such as healthcare and robot control due to its sample complexity <cit.> and inferior generalization ability <cit.>. The past decade witnessed tremendous effort from researchers to pave the path for RL toward real-world applications. For example, offline RL <cit.>, which optimizes policies with a pre-collected and static dataset, provides a solution that relieves RL from costly online interactions, whereas meta-RL <cit.>, which involves training policies over a wide range of tasks, significantly enhances the generalization ability of the learned policies. Offline Meta-Reinforcement Learning (OMRL) <cit.>, as an intersection of offline RL and meta-RL, promises to combine the best of both worlds.
In OMRL, we are provided with datasets collected in various tasks that share similar underlying structures in their dynamics or reward mechanisms, and we aim to optimize the meta-policy. The meta-policy is later tested on tasks drawn from the same task distribution. Previous related methods <cit.> often interpret the OMRL challenge as task representation learning plus meta-policy optimization: the former step aims to obtain indicative task representations from the dataset, while the latter optimizes a meta-policy on top of the learned representations. However, existing methods often assume a sufficient number of training tasks as well as sufficient diversity of the behavior policies that collect the datasets, which is not realistic in real-world applications. We find that when these assumptions are not satisfied, the representations tend to overfit and fail to generalize to unseen testing tasks.

In light of this, we propose a new approach to Generalizable Task representations Learning (GENTLE) to enable effective task recognition in the face of limitations in training-task quantity and behavior diversity. GENTLE follows the existing paradigm of OMRL and consists of two interleaving optimization stages: (1) task representation learning and (2) offline meta-policy optimization on top of the learned representations. For (1), we introduce a novel structure, the Task Auto-Encoder (TAE), to extract representations from the context information. TAE is optimized to reconstruct the state transitions and rewards on the probing data rather than by a contrastive loss, which models the generative structure of the environment and prevents the encoder from overfitting to miscellaneous features when the number of training tasks is limited. To prevent TAE's training from overfitting to the behavior policy distribution, we augment the training data via policy, dynamics, and reward relabeling, forcing TAE to learn to exploit differences in dynamics and rewards rather than in input data distributions. For (2), we adopt TD3+BC for its simplicity to optimize a meta-policy with the task representations predicted by the TAE.

For evaluation, we compare GENTLE against other baseline algorithms in a set of continuous control tasks with two types of evaluation protocols: the given-context protocol, where the context is collected by an ad-hoc expert policy in the target environment, and the one-shot protocol, where the context is collected by the meta-policy. Experimental results demonstrate the superiority of GENTLE over the baseline methods, and the ablation study also discloses the necessity of each component of GENTLE.

§ RELATED WORK

§.§.§ Offline Meta-Reinforcement Learning. Generalization is a known issue for RL agents <cit.>, and meta-RL has been proposed to enhance their generalization ability. Current meta-RL research can be categorized into two types: gradient-based approaches <cit.>, which focus on fast adaptation to new tasks via few-shot gradient descent, and context-based approaches <cit.>, which formalize meta-RL tasks as contextual Markov Decision Processes (MDPs) and learn to encode task representations from histories. The combination of meta-RL and the offline setting leads to Offline Meta-Reinforcement Learning (OMRL), a framework where only static task datasets are available for learning a meta-policy. Most previous OMRL methods follow the context-based approach <cit.>. Overall, the workflow of these methods can be broken down into two procedures.
The first is to learn a task representation encoder with the offline dataset and augment the states with the learned representations, while the second is to optimize the meta-policy with offline RL algorithms. MACAW <cit.>, on the other hand, follows the gradient-based approach and extends MAML <cit.> to the offline setting. Our method falls in the first category, with additional considerations for the data limitations. Similar to our motivations, MBML <cit.>, BOReL <cit.>, and CORRO <cit.> also identify the impact of the behavior policy on task identification, and alleviate this issue either by reward relabeling or by generative relabeling on offline datasets. MIER <cit.> and SMAC <cit.>, on the other hand, focus on the meta-testing stage and mitigate this issue by relabeling the context collected online.

§.§.§ Task Representation Learning. A successful meta-RL agent relies on the learned task representations to make adaptive decisions in different tasks. Various methods differ in their learning objectives for encoding the context and deriving task representations. Earlier methods, such as RL^2 <cit.> and PEARL <cit.>, apply the same RL objective to the representation encoder. Specifically, RL^2 passes the gradient of the encoder through the representation and optimizes the encoder in an end-to-end fashion, while PEARL trains the encoder via the critic loss combined with an additional information bottleneck term. Alternative approaches, like ESCP <cit.>, employ the objective of maximizing the determinant of the relational matrix of latent representations. In offline RL, a predominant approach to task representation learning is contrastive-style training <cit.>. These methods take advantage of the static datasets, construct positive and negative pairs via relabeling or generative augmentation, and afterward apply contrastive objectives to optimize the encoder. In this paper, we propose to extract representations via reconstruction, thus utilizing the generative structure of the underlying dynamics and facilitating the generalization of the representations. This idea has also been explored in past literature <cit.>. However, we apply this idea in the offline setting with off-policy data and characterize it theoretically.

§ PRELIMINARIES

§.§ Problem Formulation

The RL problem can be formulated as a Markov Decision Process (MDP), characterized by a tuple M=(𝒮,𝒜,T,R,μ_0,γ), where 𝒮 is the state space, 𝒜 is the action space, T(s'|s, a) is the transition function, R(s, a) is the reward function, μ_0(s) is the initial state distribution, and γ∈[0,1] is the discount factor. The policy π(a|s) is a distribution over actions. The agent's goal is to find the optimal policy that maximizes the expected cumulative reward (a.k.a. return) max_π η(M,π) = 𝔼_π, M[∑_t=0^∞ γ^t R(s_t,a_t)], where the expectation is taken over the trajectory distribution induced by π in M. The Q-function is defined as the expected return starting from state s, taking action a, and thereafter following policy π: Q_π(s,a)=𝔼_π, M[∑_t'=t^∞ γ^t'-t R(s_t',a_t') | s_t=s, a_t=a].

In Offline Meta-Reinforcement Learning (OMRL), we consider a set of tasks where each task is an MDP M_i=(𝒮,𝒜,T_i,R_i,μ_0,γ) sampled from a task distribution M_i∼ P(ℳ). We assume the tasks differ only in transition functions and reward functions, and abbreviate them as M=(T, R). We will use the term model to refer to M hereafter.
During offline meta-training, we are given N training tasks {M_i}_i=1^N sampled from P(ℳ) and the corresponding offline datasets {D_i}_i=1^N generated by behavior policies. Using the fixed offline datasets, the algorithm needs to train a meta-policy π_meta. During meta-testing, given a testing task M∼ P(ℳ), the agent first needs to identify the environment with context information B^c before evaluation. Finally, the goal of meta-RL is to find the optimal meta-policy that maximizes the expected return over the task distribution:

max_π_meta η(π_meta) = 𝔼_M∼ P(ℳ)[η(M,π_meta)].

§.§ TD3+BC

Offline RL trains a policy solely on a fixed dataset D. Due to the mismatch between the distributions of the dataset and the current policy, the agent tends to misevaluate the value of out-of-distribution actions and thus mislead the policy optimization <cit.>. To address the problem, TD3+BC <cit.> adds a regularization term to the objective of TD3 <cit.> to constrain the policy around the dataset:

max_π 𝔼_(s,a)∼ D[λ Q(s, π(s)) - (π(s)-a)^2],

where D is the offline dataset and λ is a coefficient balancing TD3's objective and the regularization term. We choose TD3+BC as our backbone offline RL algorithm due to its simplicity and effectiveness. For generality, we represent the meta-policy in its stochastic form π(·|s,z) hereafter.

§ METHOD

In this section, we elaborate on the core ingredients of our GENTLE method, designed to learn generalizable task representations from offline datasets.

§.§ Task Auto-Encoder

We begin with our TAE, which applies an encoder-decoder architecture to learn task representations. First, we define x∈𝒮×𝒜 as the probing data and ρ(x) as its distribution. We additionally assume that the distribution of probing data is task-agnostic:

(Independency of Probing Data) The distribution of probing data is independent of the model, i.e., p(x)=p(x|M) and p(M|x)=p(M).

For each entry of the probing data x=(s, a), we first sample a model M=(T_M, R_M) from the task distribution, and continue to sample a state transition s'∼ T_M(·|s, a) and the reward r=R_M(s, a) to assign the label of x as y=(s', r). We use x_1:n to represent n i.i.d. samples from ρ(x) and y_1:n to represent the corresponding labels of x_1:n. Throughout this paper, we make the further assumption that the model of the environment is deterministic:

(Deterministic Model) For input x=(s, a)∈𝒮×𝒜 and model M=(T_M, R_M)∈ℳ, p(y|x, M)=δ(y=y^M). Here y^M=M(x), where M(x) := (T_M(s, a), R_M(s, a)).

With all these prepared, we now describe TAE's architecture. In TAE, the encoder q_θ(z|x_1:n, y_1:n) takes a batch of probing data x_1:n and their labels y_1:n as inputs, and outputs a distribution over the representation vector. The decoder q_ψ(ŷ_k|x_k, z), on the other hand, takes the predicted representation and one of the probing data x_k as input, and outputs the distribution of the predicted label. We train our TAE by maximizing the log-likelihood of the ground-truth label. Specifically, we maximize the following objective in terms of the parameters θ and ψ:

𝒥(θ, ψ) = 𝔼_x_1:n, M, z[∑_k=1^n log q_ψ(y^M_k|z, x_k)],

where x_1:n∼ρ(x), M∼ P(ℳ), z∼ q_θ(z|x_1:n, y^M_1:n). Intuitively, the encoder takes in the labeled data and predicts the task representation z, while the decoder seeks to reconstruct the ground-truth label via maximum log-likelihood over the batch of probing data. We now provide a theoretical characterization of the training process.
Let x_1:n denote i.i.d. probing data sampled from distribution ρ(x), and let the probing data be labeled with models sampled from P(ℳ) to construct y_1:n. With Assumptions <ref> and <ref>, optimizing 𝒥(θ, ψ) in terms of the representation encoder q_θ and the decoder q_ψ corresponds to optimizing a lower bound of the mutual information between the task representation and the model I(M; z):

I(M; z) ≥ I(M; y_1:n|x_1:n) + 𝒥(θ, ψ).

The proof is deferred to the appendix [<https://www.lamda.nju.edu.cn/zhourz/AAAI24-supp.pdf>]. In Theorem <ref>, we show that we can lower-bound the original mutual information I(M; z) via two terms. The first term, I(M; y_1:n|x_1:n), measures how discriminative the probing data x_1:n is in terms of models, while the second term is precisely the reconstruction-oriented training objective of TAE. Thus, optimizing Equation (<ref>) corresponds to optimizing the lower bound of the mutual information between the extracted representation z and the tasks, which justifies the effectiveness of our objective. It is noteworthy that directly optimizing mutual information is intractable in practice. A prior method, CORRO <cit.>, derives a tractable lower bound via InfoNCE <cit.>, which employs a generative model to construct negative samples conditioned on the data within the task. Such a method focuses on all of the discriminative aspects of the input data, without consideration for the generative structure shared by different models. In contrast, our approach explicitly learns a decoder to account for this, thus enabling the encoder to closely approximate the intrinsic characteristics of the task.

§.§ Practical Implementation of TAE

Section <ref> presents a principled framework of TAE, while in this section we elaborate on the practical implementation of TAE, as illustrated in Figure <ref>. The encoder q_θ is implemented as a deterministic mapping from a batch of probing data to a representation vector z∈ℝ^m. This module consists of a feature transformation network q̂_θ and an average-pooling layer. For each pair of input probing data (x_k, y_k), q̂_θ processes and projects the input into an intermediate embedding vector z_k∈ℝ^m, and the average-pooling layer aggregates the embeddings of the pairs by taking the element-wise average to obtain the final embedding z∈ℝ^m. Considering that the models of the environment are deterministic, we also implement the decoder q_ψ as a deterministic mapping. However, when q_ψ is deterministic, the probability of the predicted label used in Equation (<ref>) is undefined. To tractably estimate the log-probabilities, we follow the common approach of using negative L2 distances as an approximation of the log-probability <cit.>. The original objective 𝒥(θ, ψ) can thus be equivalently transformed into

𝒥̂(θ, ψ) = 𝔼_x_1:n, M[∑_k=1^n (y_k^M - q_ψ(q_θ(x_1:n, y_1:n), x_k))^2],

where x_1:n∼ρ(x), M∼ P(ℳ). Finally, we use 𝒥̂(θ, ψ), which is to be minimized (maximizing the Gaussian log-likelihood corresponds to minimizing the squared error), as the training objective of the TAE.

§.§ Constructing the Probing Distribution

The training of TAE is also affected by the distribution of the probing data, whose distribution ρ(x) is desired to satisfy certain requirements. Thus, in this section, we investigate how to construct the probing distribution. Generally, we expect ρ(x) to satisfy the following properties:

1) Independent of models. This is required by Assumption <ref>, which states that the probing data should be sampled from an invariant distribution regardless of the model M.

2) Consistent for training and evaluation.
This property requires that ρ(x) should resemble the distribution that the meta-policy may encounter during evaluation. We can perceive the probing distribution ρ(x) as an attention over all possible aspects of the models, with the extracted representation z being most discriminative over ρ(x). Given the above two desiderata, we propose to construct the training data of TAE by policy-relabeling, dynamics-relabeling, and reward-relabeling jointly. Given the offline datasets {D_i}_i=1^N, we first pretrain an estimated model M_i for each task M_i via supervised learning. At the beginning of each iteration, for each task M_i, we randomly pick K_1 states from D_i and K_2 states from the other datasets ∪_j D_j ∖ D_i, respectively. For each state ŝ, we first label its action by sampling â from the up-to-date meta-policy parameterized by ϕ: â∼π_ϕ(a|s,z_i). The state transition ŝ' and the reward r̂ are further predicted by the pre-trained model M_i. The constructed tuple ⟨ŝ, â, ŝ', r̂⟩ is thus an augmentation sample used to train TAE. The pseudo-code for the augmentation process is listed in Algorithm <ref>. By randomly sampling states across all of the datasets and relabeling the actions with the same meta-policy, we align the probing distribution ρ(x) for each task so that the training procedure approximately satisfies property 1). Note that in the actual implementation, we sample from the ego dataset and the other datasets with a ratio of K_1:K_2. Theoretically, the ratio should be set precisely to 1:N-1. However, in practice we prefer a ratio slightly biased toward the ego dataset, because the estimated model M_i may produce erroneous predictions on states from other datasets; the biased ratio strikes a balance between the two effects. For property 2), this is ensured by relabeling actions with the meta-policy. More details about the experiments can be found in the appendix.

§.§ Overall Framework of GENTLE

We summarize the overall meta-training framework of GENTLE in Algorithm <ref>. At the beginning of each iteration, we first augment context data with the up-to-date meta-policy and the pre-trained models to construct the probing data. The probing data is then used to optimize the TAE as well as to compute the task representation. After detaching the gradient w.r.t. the encoder, the representation is concatenated to the raw observations for downstream offline policy optimization, which is implemented as TD3+BC. GENTLE iterates between the construction of probing data and the optimization of the meta-policy until convergence. At test time, we evaluate GENTLE with both the given-context protocol and the one-shot protocol. The former assumes that the meta-policy is given access to a dataset collected in the testing task to serve as context B^c. In the latter, we first collect a trajectory as context B^c with z_prior sampled from a prior distribution, and calculate the task representation z=q_θ(B^c). Then the meta-policy is evaluated conditioning on the calculated representation. In the practical implementation, we scale the range of z to (-1,1) and set z_prior to all zeros.

§ EXPERIMENTS

In this section, we carry out extensive experiments to compare GENTLE against other baseline algorithms and provide an ablation study on the design choices of GENTLE. We also illustrate the learned task representations as a qualitative assessment of the proposed method. We release our code at <https://github.com/LAMDA-RL/GENTLE>.
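To make the TAE architecture and its squared-error objective concrete, here is a minimal PyTorch sketch; the hidden sizes follow the appendix (3 hidden layers of 256 units), while the module names, dimensions, and the random batch below are illustrative assumptions rather than the released implementation.

```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=256, depth=3):
    layers, d = [], in_dim
    for _ in range(depth):
        layers += [nn.Linear(d, hidden), nn.ReLU()]
        d = hidden
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)

class TAE(nn.Module):
    def __init__(self, s_dim, a_dim, z_dim):
        super().__init__()
        # Encoder feature network: (x_k, y_k) = (s, a, s', r) -> embedding z_k
        self.enc = mlp(2 * s_dim + a_dim + 1, z_dim)
        # Decoder: (z, x_k) = (z, s, a) -> predicted label (s'_hat, r_hat)
        self.dec = mlp(z_dim + s_dim + a_dim, s_dim + 1)

    def encode(self, s, a, s_next, r):
        z_k = self.enc(torch.cat([s, a, s_next, r], dim=-1))
        z = z_k.mean(dim=0, keepdim=True)   # element-wise average pooling
        return torch.tanh(z)                 # scale z into (-1, 1)

    def loss(self, s, a, s_next, r):
        z = self.encode(s, a, s_next, r).expand(s.shape[0], -1)
        y_hat = self.dec(torch.cat([z, s, a], dim=-1))
        y = torch.cat([s_next, r], dim=-1)
        return ((y - y_hat) ** 2).sum(dim=-1).mean()  # squared-error objective

# Illustrative usage on a random probing batch (n = 64 transitions of one task).
tae = TAE(s_dim=17, a_dim=6, z_dim=8)
s, a = torch.randn(64, 17), torch.randn(64, 6)
s_next, r = torch.randn(64, 17), torch.randn(64, 1)
print(tae.loss(s, a, s_next, r))
```

Note that the pooled embedding is permutation-invariant with respect to the probing pairs, which matches the set-like nature of the context.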
§.§ Baselines and Benchmarks

Following the experimental setup in prior studies <cit.>, we construct a 2D navigation environment and several multi-task MuJoCo <cit.> environments to evaluate our algorithm. To comply with the data limitation on the number of training tasks, we sample 10 training tasks and 10 testing tasks for each environment (except for Cheetah-Dir, which only has two tasks). For offline dataset generation, we train a SAC <cit.> agent to expert level on each task and then collect trajectories as the offline datasets, simulating the data limitation on behavior diversity. To evaluate the performance of GENTLE, we compare it with the following OMRL methods: FOCAL <cit.> employs distance metric learning to train the context encoder. CORRO <cit.> employs contrastive learning to train the context encoder using the InfoNCE loss. BOReL <cit.> employs a variational autoencoder to learn task embeddings by maximizing the evidence lower bound. Note that in the original implementations, FOCAL uses BRAC <cit.>, while CORRO and BOReL use SAC as their backbone offline RL algorithms, whereas we use TD3+BC. To provide a fair comparison, we also re-implement them with TD3+BC, and conduct the experiments with both the original baselines and the re-implemented baselines. We place the results of the original baselines in the appendix. Finally, we use a variant of BOReL without oracle reward relabeling in the experiments for a fair comparison.

§.§ Main Results

We evaluate GENTLE alongside the baselines across the given-context protocol and the one-shot protocol. As shown in Table <ref>, GENTLE significantly outperforms the other baselines in almost all scenarios. When the context is given, all considered methods exhibit reasonable performance. However, it is noteworthy that GENTLE slightly surpasses the performance of the baseline methods, which indicates the efficacy of the representations extracted by GENTLE. We witness a sharp drop in baseline methods such as FOCAL when switching to the one-shot protocol, which is primarily attributed to the context distribution shift between training and online adaptation. In contrast, GENTLE remarkably sustains a high-performing policy under the one-shot protocol, thereby showcasing the strong generalization capability of GENTLE's representation encoder during online adaptation.

§.§ Illustration of the Representations

We dive deeper to examine the learned representations of each algorithm. To illustrate the quality of the learned representations, we use the meta-policy to collect context data in the testing tasks and employ the learned encoder to predict the task representations. For each task, we obtain a total of 400 representation vectors and employ t-SNE <cit.> to project them onto a two-dimensional plane. The results, depicted in Figure <ref>, reveal that the representations predicted by GENTLE naturally form distinct clusters based on their task IDs, signifying the proficiency of GENTLE's encoder in deriving effective task representations even for the testing tasks. On the contrary, FOCAL, CORRO, and BOReL fail to distinguish the representations with online context: their predicted task representations are intertwined and lack clear distinctions in the projected space.

§.§ Ablation Study

§.§.§ Ablation on algorithm components. The core ingredients and innovations of GENTLE are 1) the TAE structure trained by reconstruction of rewards and transitions, and 2) the construction of probing data used to train TAE.
We investigate the necessity of these components by introducing several variants of GENTLE. Specifically, we replace TAE's objective with the contrastive-style objective used in FOCAL, and term this variant GENTLE-Contrastive. We create another variant, GENTLE without Relabel, by skipping the relabeling process and training TAE directly with the offline datasets. The last variant, GENTLE without PolicyRelabel, skips policy-relabeling and uses the dataset actions for relabeling. The results under the one-shot protocol are listed in Table <ref>. By comparing GENTLE and GENTLE-Contrastive, we find that the reconstruction objective does offer benefits over the contrastive-style objective. The variant without relabeling shows severely degraded performance in certain tasks, particularly in Ant-Dir and Cheetah-Dir: without relabeling, the context encoder tends to overfit to the training data distribution. Finally, although the variant without policy-relabeling shows favorable results on all tasks, its performance still lags behind GENTLE. This exemplifies the importance of the consistency property of the probing data distribution.

§.§.§ Ablation on sampling ratio. To construct the probing data, we sample from the ego dataset and the other datasets with a ratio of K_1:K_2. We investigate the influence of this ratio by conducting experiments on Ant-Dir with a series of sampling ratios: 1:0, 1:1, 1:3, 1:6, 1:9, 1:12, and 1:15. Specifically, a ratio of 1:0 signifies exclusive sampling from the ego task dataset, while a ratio of 1:9 implies comprehensive sampling from all other task datasets. For ratios 1:12 and 1:15, we downsample the ego task dataset. The results are depicted in Figure <ref>. The ratio of 1:0 exhibits the poorest performance, attributed to its reliance solely on the ego dataset, which fails to ensure property 1) in Section <ref>. Increasing the number of samples from other task datasets leads to improved performance. Notably, larger ratios yield similar or slightly worse performance, and ratios 1:12 and 1:15 result in instability and performance drops, which, as stated before, can be attributed to the estimation error of the pre-trained dynamics models. Besides, a larger ratio also requires more computation. Based on the above considerations, we opt for a balanced ratio of 1:3 in all of the experiments.

§.§.§ Ablation on training tasks and behavior diversity. GENTLE is proposed to tackle data limitations on training tasks and behavior diversity. To inspect how GENTLE adapts to the former, we vary the number of training tasks between 4 and 10 while leaving the testing tasks unchanged. As shown in Figure <ref>, a small number of training tasks significantly diminishes GENTLE's generalization to testing tasks; as the number of training tasks increases, GENTLE exhibits enhanced generalization performance. Lastly, we illustrate the capability of handling limited behavior diversity by inspecting how the algorithms adapt to improved diversity. We use a medium-level policy to collect medium context data, and use 5 logged policy checkpoints to collect mixed context data. The medium and mixed data are only used to train the context encoder, while the policy is still optimized with expert datasets.
As shown in Table <ref>, FOCAL's performance improves significantly as the behavior diversity increases (from expert to mixed), while GENTLE's remains approximately the same, since the data-relabeling process already enriches the behavior diversity and aligns the distribution even with the expert context.

§ CONCLUSION AND FUTURE WORK

In this paper, we propose an innovative OMRL algorithm called GENTLE. We adopt a novel structure, the Task Auto-Encoder (TAE), which incorporates an encoder-decoder framework trained by reconstruction of rewards and transitions. We also employ relabeling to construct pseudo-transitions, which aligns the TAE's training data distribution with the testing data distribution during meta-adaptation. Our experimental results show GENTLE's superior performance in diverse environments and tasks, and the ablation studies emphasize the necessity of each component of GENTLE. Notwithstanding these achievements, our work leaves some aspects unaddressed: we lack provisions for sparse-reward settings, and we do not tackle the development of an exploration policy during meta-testing. We leave these for future work.

§ ACKNOWLEDGEMENTS

This work is supported by the National Key R&D Program of China (2022ZD0114804), the National Science Foundation of China (62276126, 62250069), and the Fundamental Research Funds for the Central Universities (14380010).

§ PROOF

In this section, we provide the proof of the main theorem. We first restate the following assumptions.

(Independency of Probing Data) The distribution of probing data is independent of the model, i.e., p(x)=p(x|M) and p(M|x)=p(M).

(Deterministic Model) For input x∈𝒮×𝒜 and model M∈ℳ, p(y|x, M)=δ(y=y^M), where y^M=M(x).

Before proving the theorem, we introduce the following lemma.

(Data Processing Inequality) Assume three random variables form the Markov chain X→ Y→ Z; then I(X; Z)=I(X; Y) - I(X; Y|Z).

Applying the chain rule of mutual information, we have

I(X;Z) = I(X; Y, Z) - I(X; Y|Z)
= I(X; Z|Y) + I(X; Y) - I(X; Y|Z)
= I(X; Y) - I(X; Y|Z),

where the last equation holds since I(X; Z|Y)=0 by the Markov property.

Then we move on to prove our main theorem.

(Theorem <ref> restated) Let x_1:n denote i.i.d. probing data sampled from distribution ρ(x), and let the probing data be labeled with models sampled from p(M) to construct y_1:n. With Assumptions (<ref>) and (<ref>), optimizing 𝒥(θ, ψ) in terms of the representation encoder q_θ(x_1:n, y_1:n) corresponds to optimizing the lower bound of I(M; z):

I(M; z) ≥ I(M; y_1:n|x_1:n) + 𝒥(θ, ψ).

To see this, we first note that probing the model M with data x_1:n can be deemed an implicit data-processing of the model, which implies the dependency graph in Figure <ref>. Then

I(M; z) = H(M) - H(M|z)
= H(M|x_1:n) - H(M|z, x_1:n)
= I(M; z|x_1:n)
= I(M; y_1:n|x_1:n) - I(M; y_1:n|z, x_1:n)
= I(M; y_1:n|x_1:n) - H(y_1:n|z, x_1:n) + H(y_1:n|M, z, x_1:n)
= I(M; y_1:n|x_1:n) - H(y_1:n|z, x_1:n),

where the second equation holds due to Assumption (<ref>), and the last equation holds due to Assumption (<ref>).
Expanding H(y_1:n|z, x_1:n) leads to

-H(y_1:n|z, x_1:n) = 𝔼_x_1:n 𝔼_p(z, y_1:n|x_1:n)[log p(y_1:n|z, x_1:n)].

However, it is intractable to estimate the posterior p(y^M_1:n|z, x_1:n), so we opt for a variational distribution q_ψ(y_1:n^M|z, x_1:n), and we have

-H(y_1:n|z, x_1:n) = 𝔼_x_1:n 𝔼_p(z, y_1:n|x_1:n)[log ( p(y_1:n|z, x_1:n)/q_ψ(y_1:n|z, x_1:n) ) + log q_ψ(y_1:n|z, x_1:n)]
= 𝔼_x_1:n, z[KL(p‖q_ψ)] + 𝔼_x_1:n 𝔼_p(z, y_1:n|x_1:n)[log q_ψ(y_1:n|z, x_1:n)]
≥ 𝔼_x_1:n 𝔼_p(z, y_1:n|x_1:n)[log q_ψ(y_1:n|z, x_1:n)],

where the inequality is due to the non-negativity of the KL divergence. Finally, we use Assumption <ref> again to avoid the integral over y_1:n, and arrive at

𝔼_x_1:n 𝔼_p(z, y_1:n|x_1:n)[log q_ψ(y_1:n|z, x_1:n)]
= 𝔼_x_1:n[∫ p(z, y_1:n) log q_ψ(y_1:n|z, x_1:n) dz dy_1:n]
= 𝔼_x_1:n 𝔼_M 𝔼_z∼ q_θ(z|x_1:n, y^M_1:n)[log q_ψ(y^M_1:n|z, x_1:n)]
= 𝔼_x_1:n 𝔼_M 𝔼_z∼ q_θ(z|x_1:n, y^M_1:n)[∑_k=1^n log q_ψ(y^M_k|z, x_k)]
= 𝒥(θ, ψ),

where the penultimate equation is due to the i.i.d. property of x_1:n. Summarizing the above, we have

I(M; z) ≥ I(M; y_1:n|x_1:n) + 𝒥(θ, ψ).

Thus, with a fixed probing distribution, maximizing 𝒥(θ, ψ) corresponds to maximizing the lower bound of I(M; z).

§ EXPERIMENTAL DETAILS

§.§ Meta-Environments

Here, we present the construction details of the meta-environments in the benchmark.

Point-Robot. Starting from an initial position randomly generated in the state space, the agent needs to navigate to the goal position, which is uniformly sampled from U[-1,1]× U[-1,1]. The reward depends on the distance from the current position to the goal position.

Cheetah-Dir. A Half-Cheetah environment with a target direction, either forward or backward. The reward depends on the walking direction and the walking velocity.

Cheetah-Vel. A Half-Cheetah environment with a target velocity uniformly sampled from U[0,6]. The reward depends on how close the current velocity is to the goal velocity.

Ant-Dir. An Ant environment with a target direction uniformly sampled from U[0, 2π]. The reward depends on the velocity along the target direction.

Hopper-Params. A Hopper environment with different parameters. The transition function varies in randomized task-specific parameters such as body mass, inertia, damping, and friction coefficients. Each parameter is sampled by multiplying the default value by 1.5^μ, μ∼ U[-3,3]. The reward depends on the forward velocity.

Walker-Params. A Walker environment with different parameters. The transition function varies in randomized task-specific parameters such as body mass, inertia, damping, and friction coefficients. Each parameter is sampled by multiplying the default value by 1.5^μ, μ∼ U[-3,3]. The reward depends on the forward velocity.

For all the environments except Cheetah-Dir, 10 training tasks and 10 testing tasks are randomly sampled. For Cheetah-Dir, with only 2 tasks, we test the algorithm only on the training tasks. For offline dataset generation, we train a SAC agent to the expert level on every single task, and roll out the expert-level agent to collect trajectories as the offline datasets. The configurations of the data collection phase are listed in Table <ref>.

§.§ Baseline Algorithms

We re-implement FOCAL, CORRO, and BOReL with TD3+BC. TD3+BC updates the policy with a behavior-cloning term:

max_π 𝔼_(s,a)∼ D[λ Q(s,π(s)) - (π(s)-a)^2], where λ = α / ((1/N) ∑_(s,a) |Q(s,a)|),

which balances TD3's objective and the regularization term.
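For concreteness, a minimal PyTorch sketch of this policy update is given below; the actor, critic, and batch tensors are placeholder objects, and only the loss computation mirrors the formula above.

```python
import torch

def td3bc_policy_loss(actor, critic, s, a, z, alpha=2.5):
    # The task representation z is concatenated to the raw observation.
    sz = torch.cat([s, z], dim=-1)
    pi = actor(sz)
    q = critic(sz, pi)
    lam = alpha / q.abs().mean().detach()  # lambda = alpha / ((1/N) * sum |Q|)
    # Maximize lam * Q minus the behavior-cloning term, i.e. minimize the negative.
    return -(lam * q).mean() + ((pi - a) ** 2).sum(dim=-1).mean()
```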
Following the original paper, we set the hyperparameter α to 2.5 in all environments, which shows reasonable performance for all baseline algorithms. We also conduct the main experiments with the original baselines, in which FOCAL uses BRAC, and CORRO and BOReL use SAC as their backbone offline RL algorithms. We run FOCAL, CORRO, and BOReL with their open-source implementations for a fair comparison, and the results are listed in Table <ref>. The performance of the original algorithms exhibits varying degrees of degradation compared to the TD3+BC versions, which demonstrates that TD3+BC is more suitable for offline policy optimization than BRAC and SAC.

§.§ Hyperparameter Settings

To perform data relabeling for our algorithm, we train dynamics models {M_i}_i=1^N on each offline dataset {D_i}_i=1^N separately. We represent the model as a probabilistic neural network, which consists of 4 fully connected layers with 256 hidden units (except for Point-Robot, which consists of 3 fully connected layers with 64 hidden units). We train 7 such networks with the same architecture but different initializations to form the ensemble dynamics models. The learning rate of the models is set to 0.001. We train each ensemble dynamics model M_i using maximum log-likelihood estimation on the corresponding offline dataset D_i:

max_M_i 𝔼_(s,a,s',r)∼ D_i[log M_i(s',r|s,a)].

We use a holdout ratio of 0.2 to divide the training set and the validation set, and stop training when the validation error has not decreased for 5 epochs. Specifically, for environments where tasks differ in reward functions (Point-Robot, Cheetah-Dir, Cheetah-Vel, and Ant-Dir), we construct the model as M_i(r|s,a) and only use the (s, a, r) triplets to form the training dataset. For environments where tasks differ in both dynamics and reward functions (Hopper-Params and Walker-Params), we construct the model as M_i(s',r|s,a) and use the 4-tuples (s, a, r, s') to form the training dataset. The dataset is also used to train TAE subsequently.

In GENTLE, TAE applies an encoder-decoder architecture to extract the representation of the tasks. The encoder and decoder are both represented by MLPs, which consist of 3 hidden layers with 256 hidden units. The details of the important configurations and hyperparameters used to produce the experimental results in the paper are presented in Table <ref>.

§.§ Time Complexity Analysis

To investigate the time complexity of the algorithms, we run GENTLE, FOCAL, and CORRO on the same dataset and machine; the results are shown in Table <ref>. GENTLE's pretraining (0.29h) and relabeling (0.02h) incur little time overhead, and its training time is slightly longer than that of FOCAL.

§.§ Additional Experiment Results

Suppose we have access to the transition function and the reward function of each task, which we term the oracle model. We can leverage this oracle model for data relabeling: we replace the dynamics model trained via supervised learning with the oracle model for dynamics-relabeling. We carry out the experiments on Cheetah-Vel, and the results are depicted in Figure <ref>. They indicate that the accuracy of the relabeled data has a certain influence on the performance: utilizing a precise oracle model for relabeling leads to more stable performance. | http://arxiv.org/abs/2312.15909v1 | {
"authors": [
"Renzhe Zhou",
"Chen-Xiao Gao",
"Zongzhang Zhang",
"Yang Yu"
],
"categories": [
"cs.LG",
"cs.AI"
],
"primary_category": "cs.LG",
"published": "20231226070212",
"title": "Generalizable Task Representation Learning for Offline Meta-Reinforcement Learning with Data Limitations"
} |
A higher-order generalization of an A_4^(1)-surface type q-Painlevé equation with W((A_2N⋊ A_1)^(1)× A_1^(1)) symmetry

Institute of Engineering, Tokyo University of Agriculture and Technology, 2-24-16 Nakacho Koganei, Tokyo 184-8588, Japan. [email protected]

Recently, a birational representation of an extended affine Weyl group of (A_2N⋊ A_1)^(1)-type, which gives a higher-order generalization of an A_4^(1)-surface type q-Painlevé equation, was obtained. In this paper, we extend it to a birational representation of an extended affine Weyl group of (A_2N⋊ A_1)^(1)× A_1^(1)-type. Moreover, we provide conjectures on periodic reductions from systems of PΔEs with the CAC property to Painlevé-type q-OΔEs.

2020 Mathematics Subject Classification: 33E17, 35Q53, 37K10, 39A13, 39A14, 39A23, 39A36, 39A45

Nobutaka Nakazono
January 14, 2024

§ INTRODUCTION

Fix an integer N>0. In this study, we focus on the symmetry of the following 2Nth-order q-OΔE <cit.>:

qP^(2N)(A_4^(1)):
a_i^2N+1(F̄_iF_i-1)/(a_i^2N+1-c^2(-1)^iF_i) = (F̄_i+1F_i+1-1)/(1-a_i+1^2N+1c^2(-1)^iF_i+1) if i=1,…,2N-1,
a_i^2N+1(F̄_iF_i-1)/(a_i^2N+1-c^2(-1)^iF_i) = (∏_k=1^2N a_k^k) bc/∏_k=1^N F_2k-1 if i=2N,

where b∈ℂ^* is an independent variable, F_i=F_i(b)∈ℂ, i=1,…,2N, are dependent variables, and a_1,…,a_2N,c,p∈ℂ^* are parameters. The overline  ̄  denotes discrete-time evolution:

b̄=pb, F̄_i=F_i(pb), i=1,…,2N, ā_j=a_j, j=1,…,2N, c̄=c^-1,

where p∈ℂ^* is a constant. When N=1, the system (<ref>) is equivalent to a q-Painlevé V equation of A_4^(1)-surface type <cit.>. (See <cit.> for further details.)

For a q-difference equation, the symbol "q" is commonly used for the shift parameter. In this paper, however, "q" is reserved for later arguments, where its relation to the parameter "p" of the system (<ref>) will be given; to avoid confusion, we therefore use "p" for the system (<ref>) instead of "q".

The following property of the system (<ref>) is known.

The system (<ref>) has an extended affine Weyl group symmetry of (A_2N⋊ A_1)^(1)-type; that is, it can be derived from a birational representation of an extended affine Weyl group of (A_2N⋊ A_1)^(1)-type, denoted by W((A_2N⋊ A_1)^(1)).

An affine Weyl group of (A_2N⋊ A_1)^(1)-type refers to the semi-direct product of an affine Weyl group of A_2N^(1)-type and one of A_1^(1)-type. The extended affine Weyl group of (A_2N⋊ A_1)^(1)-type in Theorem <ref> indicates an affine Weyl group of (A_2N⋊ A_1)^(1)-type extended by an automorphism of Dynkin diagrams. For more details, see <ref>. In particular, refer to (<ref>) for the fundamental relations of

W((A_2N⋊ A_1)^(1))=(⟨ s_0,…,s_2N⟩⋊⟨ w_0,w_1⟩)⋊⟨ r⟩.

It is known that Painlevé-type q-OΔEs are derived from the birational actions of infinite-order elements of (extended) affine Weyl groups. For instance, considering the transformation T∈W((A_2N⋊ A_1)^(1)) acting on the parameters {a_1,…,a_2N,b,c,p} as

T(a_i)=a_i, i=1,…,2N, T(b)=pb, T(c)=c^-1, T(p)=p,

as a time evolution, we obtain the system (<ref>). (See Remark <ref> for the transformation T.) In this context, the parameter b can be regarded as the independent variable of the system (<ref>). The group W((A_2N⋊ A_1)^(1)) also includes transformations that regard not only the parameter b but also the parameters a_i as independent variables. For instance, there exists a transformation T̂∈W((A_2N⋊ A_1)^(1)) that acts on the parameters as

T̂(a_1)=pa_1, T̂(a_i)=a_i, i=2,…,2N, T̂(b)=b, T̂(c)=c, T̂(p)=p.

(See Remark <ref> for the transformation T̂.)
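As a quick sanity check on these parameter actions, the following small SymPy sketch (a plain substitution model, which is an illustrative assumption rather than the paper's construction) composes T with itself and confirms that T^2 translates b by p^2 while fixing c — the behavior discussed next.

```python
import sympy as sp

a1, b, c, p = sp.symbols('a1 b c p')

# Actions of T and T-hat on the parameters (a_1, b, c, p), as listed above;
# parameters not mentioned are fixed.
def T(expr):
    return expr.subs({b: p * b, c: 1 / c}, simultaneous=True)

def That(expr):
    return expr.subs({a1: p * a1}, simultaneous=True)

print(T(T(b)))   # p**2*b : T^2 translates b ...
print(T(T(c)))   # c      : ... but leaves c fixed
print(That(a1))  # a1*p   : T-hat translates a_1
```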
However, focusing on the action on the parameter c, one observes that W((A_2N⋊ A_1)^(1)) contains only transformations acting as c→ c or c→ c^-1. This is because, in the study of <cit.>, the parameter c is not an essential parameter derived from the system of PΔEs considered in the reduction, but rather a parameter that can be set to 1 through a gauge transformation before the reduction.

The purpose of this study is to extend the transformation group W((A_2N⋊ A_1)^(1)) so as to include a transformation T that regards the parameter c as an independent variable of an OΔE. To achieve this, we consider the reduction of a system of PΔEs different from that treated in <cit.>. (See Remark <ref> for the transformation T.)

The main theorem of this paper is the following.

The system (<ref>) has an extended affine Weyl group symmetry of (A_2N⋊ A_1)^(1)× A_1^(1)-type; that is, it can be derived from a birational representation of an extended affine Weyl group of (A_2N⋊ A_1)^(1)× A_1^(1)-type, denoted by W((A_2N⋊ A_1)^(1)× A_1^(1)).

This theorem is proven in <ref>. An affine Weyl group of (A_2N⋊ A_1)^(1)× A_1^(1)-type means the direct product of an affine Weyl group of (A_2N⋊ A_1)^(1)-type and one of A_1^(1)-type. The extended affine Weyl group of (A_2N⋊ A_1)^(1)× A_1^(1)-type in Theorem <ref> indicates an affine Weyl group of (A_2N⋊ A_1)^(1)× A_1^(1)-type extended by an automorphism of Dynkin diagrams. For details, see <ref>. In particular, see (<ref>) and (<ref>) for the fundamental relations of

W((A_2N⋊ A_1)^(1)× A_1^(1))=((⟨ s_0,…,s_2N⟩⋊⟨ w_0,w_1⟩)×⟨μ_0,μ_1⟩)⋊⟨ r⟩.

§.§ Background

§.§.§ Painlevé equations

In the early 20th century, in order to find a new class of special functions, Painlevé and Gambier classified all ordinary differential equations of the type

y''=F(y',y,t),

where y=y(t), '=d/dt, and F is a function meromorphic in t and rational in y and y', possessing the Painlevé property (solutions have no movable singularities other than poles) <cit.>. As a result, they obtained six new equations, collectively referred to as the Painlevé equations. Note that the Painlevé VI equation was obtained by Fuchs <cit.> before Painlevé and Gambier.

After the discovery, the Painlevé equations withdrew from the stage of "modern mathematics" for a while. They regained attention after the 1970s because they appeared in mathematical physics research. For instance, solutions of the Painlevé equations (Painlevé transcendents) were rediscovered as scaling functions for the two-dimensional Ising model on a square lattice <cit.> and as similarity solutions of the soliton equations describing specific shallow water waves (solitons) <cit.>.

In the late 20th century, Okamoto described a geometric framework (Okamoto's space of initial values) for studying the Painlevé equations <cit.>. As a consequence, the Painlevé III equation is classified into three types, which in turn classifies the Painlevé equations into eight types.

§.§.§ q-Painlevé equations

q-Painlevé equations are a family of second-order nonlinear q-OΔEs. Historically, they were obtained as q-discrete analogues of the Painlevé equations (see, for example, <cit.>). Similar to the Painlevé equations, q-Painlevé equations are known to describe special solutions of various discrete soliton equations <cit.>. A particularly well-known derivation is obtained by imposing periodic conditions on discrete soliton equations <cit.>. There are six (or eight) Painlevé equations; however, infinitely many q-Painlevé equations are known to exist.
By considering Sakai's space of initial values <cit.>, which is an extension of Okamoto's space of initial values, q-Painlevé equations can be classified into nine surface types (see Figure <ref>). In this study, a q-Painlevé equation of X-surface type refers to a q-Painlevé equation belonging to Sakai's space of initial values of type X. Note that infinitely many q-Painlevé equations exist for the same surface type.

§.§.§ (Extended) affine Weyl group symmetry

A transformation mapping an integrable system to an integrable system is called a Bäcklund transformation. In the case of a Painlevé-type differential/difference equation, self-Bäcklund transformations exist, that is, transformations that map an equation to itself while varying its parameters. These collectively form an (extended) affine Weyl group. (See, for example, <cit.>.) In this sense, the equation is said to have (extended) affine Weyl group symmetry. Note that an (extended) affine Weyl group symmetry of a Painlevé-type equation does not always refer solely to its Bäcklund transformations: it often refers to a larger group of transformations, including its time evolution. For example, see Theorems <ref> and <ref>.

§.§.§ (Extended) KNY's representation

In <cit.>, Kajiwara–Noumi–Yamada showed a birational representation of an extended affine Weyl group of (A_m-1× A_n-1)^(1)-type (KNY's representation), where m and n are integers greater than or equal to 2, except for (m,n)=(2,2). Note that KNY's representation is essentially the same even if m and n are interchanged <cit.>. Recently, it has been reported that KNY's representation can be extended to a birational representation of an extended affine Weyl group of (A_m-1× A_n-1× A_g-1)^(1)-type (extended KNY's representation), where g is the greatest common divisor of m and n <cit.>.

It is well known that the (A_L× A_1)^(1)-type KNY's representation, where L∈ℤ_≥2, yields Painlevé-type q-OΔEs, including q-Painlevé equations as second-order q-OΔEs. Indeed, the (A_2× A_1)^(1)-type KNY's representation gives q-Painlevé equations of A_5^(1)-surface type <cit.>, and the (A_3× A_1)^(1)-type gives q-Painlevé equations of A_3^(1)-surface type <cit.>. As mentioned above, the (A_2N+1× A_1)^(1)-type KNY's representation, where N∈ℤ_≥1, can be extended to the (A_2N+1× A_1× A_1)^(1)-type extended KNY's representation. In <cit.>, the explicit forms of the Painlevé-type q-OΔEs were obtained from the (A_2N+1× A_1× A_1)^(1)-type extended KNY's representation.

§.§.§ Motivation for this study

In <cit.>, a birational representation of an extended affine Weyl group of (A_2N⋊ A_1)^(1)-type, where N∈ℤ_≥1, was constructed, and the system (<ref>) was derived from its action. This representation is presumed to be a degenerate version of KNY's representation (see Appendix <ref>); therefore, similar to KNY's representation, it is believed to be extendable. This is the motivation for the present study. It should be noted that the methods used to extend them differ: in this study, the extension is carried out using the theory of the consistency around the cube (CAC) property (see <cit.> for the CAC property), whereas the extension of KNY's representation was performed using the theory of cluster algebras (see <cit.> for the theory of cluster algebras for discrete integrable systems).

§.§ Notation and Terminology

This paper uses the following notations and terminologies for conciseness.
* An ordinary difference equation is written as OΔE and a partial difference equation is written as PΔE. In particular, an ordinary multiplicative-type difference (q-difference) equation is expressed as q-OΔE.
* For transformations s and r, the symbol sr means the composite transformation s∘ r.
* In the context of transformations, "1" signifies the identity transformation.
* For a transformation s, the relation s^∞=1 means that there is no positive integer k such that s^k=1.
* If the subscript number is greater than the superscript number in the product symbol, the product is understood to be 1 (empty product). For example, ∏_k=1^0 2^k=1.
* If the subscript number is greater than the superscript number in the summation symbol, the sum is understood to be 0 (empty sum). For example, ∑_k=1^0 2^k=0.
* When an equation number has a subscript such as "l=0", it signifies the equation obtained by substituting l=0 into the equation corresponding to that equation number. For example, with reference to

a_l+a_l+1+a_l+2=0,

the equation (<ref>)_l=0 is equivalent to

a_0+a_1+a_2=0.

The same applies to the case where "l→ l+1" or other symbols appear in the subscripts. For instance, the equation (<ref>)_l→ l+1 is equivalent to

a_l+1+a_l+2+a_l+3=0.

§.§ Outline of the paper

This paper is organized as follows. In <ref>, we introduce the system (<ref>) and the transformations {ρ,w,μ} that leave the system invariant. Subsequently, by imposing periodic conditions on the system (<ref>), we obtain the birational actions of the transformations {ρ,w,μ}. In <ref>, using the transformation group W((A_2N⋊ A_1)^(1)) given in <cit.> and the transformations {ρ,w,μ}, we prove Theorem <ref>. Some concluding remarks are given in <ref>. In Appendix <ref>, we show that the system (<ref>) has the CAC and tetrahedron properties. In Appendix <ref>, we provide conjectures on periodic reductions from systems of PΔEs with the CAC property to Painlevé-type q-OΔEs.

§ TRANSFORMATIONS ρ, w AND μ

In this section, we define the transformations {ρ,w,μ} and obtain their birational actions, which are necessary for the proof of Theorem <ref>, through a staircase reduction of a system of PΔEs.
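The CAC property invoked in this section can be checked mechanically. As a generic illustration — using the lattice potential KdV equation (H1), not the system introduced below — the following SymPy sketch verifies that the three ways of computing the triply shifted vertex on a cube agree, and that the result is independent of the base vertex (the tetrahedron property).

```python
import sympy as sp

u, u1, u2, u3 = sp.symbols('u u1 u2 u3')
p1, p2, p3 = sp.symbols('p1 p2 p3')

# H1 (lattice potential KdV): (u - u_ij)(u_i - u_j) = p_i - p_j,
# solved for the doubly shifted vertex u_ij.
def shift2(base, ui, uj, pi, pj):
    return base - (pi - pj) / (ui - uj)

u12 = shift2(u, u1, u2, p1, p2)
u13 = shift2(u, u1, u3, p1, p3)
u23 = shift2(u, u2, u3, p2, p3)

# Triply shifted vertex computed from the three "top" faces of the cube.
u123_a = shift2(u1, u12, u13, p2, p3)
u123_b = shift2(u2, u12, u23, p1, p3)
u123_c = shift2(u3, u13, u23, p1, p2)

print(sp.cancel(u123_a - u123_b))       # 0 : consistency around the cube
print(sp.cancel(u123_a - u123_c))       # 0
print(sp.cancel(sp.diff(u123_a, u)))    # 0 : tetrahedron property (no u-dependence)
```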
We consider the (2N+1,1)-periodic conditionsU_l+2N+1,m+1=U_l,m,V_l+2N+1,m+1=V_l,m,which are equivalent to expressing U_l,m and V_l,m asU_l,m=U(l-(2N+1)m),V_l,m=V(l-(2N+1)m).By substituting (<ref>) into (<ref>), we obtain the following system of s: U(l)U(l+2N+2)+_l^4U(l+1)U(l+2N+1)+_l _0_l^3^2N-1=0,V(l)V(l+2N+2)+_l^4V(l+1)V(l+2N+1)+_l _0_l^3^2N-1=0,U(2l+1)V(2l+2N+2)-V(2l+1)U(2l+2N+2)=0,U(2l)V(2l+2N+1)-^4V(2l)^4U(2l+2N+1)-_0 ^4(-)^2(l+N)=0,U(2l)V(2l+1)-^3V(2l)^3U(2l+1)+^3(-)^2l_2l=0,U(2l+2)V(2l+1)- V(2l+2) U(2l+1)+(-)^2l_2l+1=0, together with the conditions for the parameters_l+2N+1=^-2N+1_l,_m+1=^2N-1_m.Define the parameters {a_0,…,a_2N,b,c,q} and the variables {f(l),g(l)} by a_0=^1-2N/2N+1(_0_2N)^1/2N+1,a_i=(_i_i-1)^1/2N+1, i=1,…,2N,b=(∏_k=0^2N_k)^1/2N+1_0,c=,q=^1-2N/2N+1,f(l)=U(l-1)U(l+1),g(l)=V(l-1)V(l+1), where the following holds:∏_i=0^2Na_i=q.Then, the following lemma holds.The following 2Nth-order q-s for f(l) and g(l) hold. f(l+2N+1)f(l+1)(∏_k=1^N-1f(l+2k+1))+_l^4(∏_k=1^Nf(l+2k))=-q^l+2Na_l b _l^3∏_k=1^2N-1a_l+k^2N-k,g(l+2N+1)g(l+1)(∏_k=1^N-1g(l+2k+1))+_l^4(∏_k=1^Ng(l+2k))=-q^l+2Na_l b _l^3∏_k=1^2N-1a_l+k^2N-k, where a_i+2N+1=a_i for arbitrary i∈. Here, _l and _l are expressed using c and q, respectively, as follows:_l=c (if l is even), 1c (if l is odd),_l=c (if l is even), c (if l is odd),where =q^2N+1/1-2N.Equations (<ref>) and (<ref>) are obtained from Equations (<ref>) and (<ref>), respectively. Define f_i=f(i),g_i=g(i), i=1,…,2N.Then, the following lemma holds.The following relations hold: (1-f_1a_1^2N+1c^2)f_1-^2N-3c^6-4Ng_1f_1-q^2N+1^2N-3c^4g_1+∑_k=1^N-1∏_i=1^k f_2i-1(∏_i=1^2ka_i^2N+1)c^4k(∏_i=1^k f_2i)(1-f_2k+1a_2k+1^2N+1c^2)+a_0^2N+1q^2N+1c^4N(∏_i=1^N f_2i)(0em2.5em.q^2Nbc∏_i=0^2N-1a_i^2N-i+∏_i=1^N f_2i-1.0em2.5em)=0,g_2l_1+3(^2f_2l_1+1-c^4g_2l_1+1)(a_2l_1+3^2N+1c^2-f_2l_1+3)f_2l_1+2(^2f_2l_1+3-c^4g_2l_1+3)(a_2l_1+1^2N+1c^2-f_2l_1+1)=^2a_2l_1+2^2N+1a_2l_1+3^2N+1,(c^2-^2a_2l_2+2^2N+1g_2l_2+2)(^2a_2l_2+1^2N+1-c^2g_2l_2+1)(1-a_2l_2+2^2N+1c^2f_2l_2+2)(a_2l_2+1^2N+1c^2-f_2l_2+1)=^2, where l_1=0,…,N-2,l_2=0,…,N-1,=q^2N+1/1-2N. For simplicity, letA(l)=^2l-3V(2l)^4l-3(-)U(2l),which satisfiesA(l)A(l+1)=^4g(2l+1)^2f(2l+1).Eliminating V(2l+1) from Equations (<ref>) and (<ref>), we obtainA(l+1)-A(l)=-U(2l+1)_2l^4lU(2l)(1-_2lU(2l)_2l+1^2U(2l+2)).Then, from Equation (<ref>), we obtain the following relations:A(N)=A(l+1)-∑_k=l+1^N-1U(2k+1)_2k^4k U(2k)(1-_2kU(2k)_2k+1^2U(2k+2)),A(0)=A(l)+∑_k=0^l-1U(2k+1)_2k^4kU(2k)(1-_2kU(2k)_2k+1^2U(2k+2)),where 0≤ l≤ N-1. Eliminating V(2N+1) from Equations (<ref>)_l=0 and (<ref>)_l=N, we obtainA(N)-^2N-1^4N-2A(0)=U(2N+1)_2N^4NU(0)(_2N_0+U(0)U(2N)).By eliminating A(0) and A(N) from Equations (<ref>), (<ref>), and (<ref>), we obtain:A(l+1)-^2N-1^4N-2A(l)= ∑_k=l+1^N-1U(2k+1)_2k^4k U(2k)(1-_2kU(2k)_2k+1^2U(2k+2))+^2N-1^4N-2∑_k=0^l-1U(2k+1)_2k^4kU(2k)(1-_2kU(2k)_2k+1^2U(2k+2))+U(2N+1)_2N^4NU(0)(_2N_0+U(0)U(2N)).From (<ref>) and (<ref>), we obtain(1-_2lU(2l)_2l+1^2U(2l+2))A(l+1)-^2N-1^4N-2A(l)A(l+1)-A(l)=-∑_k=l+1^N-1_2l U(2k+1)U(2l)_2k^4(k-l) U(2l+1)U(2k)(1-_2kU(2k)_2k+1^2U(2k+2))-^2N-1^4N-2∑_k=0^l-1_2l^4(l-k)U(2k+1)U(2l)_2kU(2l+1)U(2k)(1-_2kU(2k)_2k+1^2U(2k+2))-_2lU(2N+1)U(2l)_2N^4(N-l)U(2l+1)U(0)(_2N_0+U(0)U(2N)),where 0≤ l≤ N-1. 
Equation (<ref>)_l=0 gives (<ref>).From Equations (<ref>) and (<ref>)_l→ l+1, we obtainA(l+1)-A(l)A(l+2)-A(l+1)=_2l+2^4U(2l+2)U(2l+1)_2lU(2l)U(2l+3)(1-_2lU(2l)_2l+1^2U(2l+2)1-_2l+2U(2l+2)_2l+3^2U(2l+4)),where 0≤ l≤ N-2, which gives Equation (<ref>).By solving Equation (<ref>) with V(2l) and Equation (<ref>)_l→ l+1 with V(2l+3), and substituting the results into Equation (<ref>), we obtain(U(2l+2)V(2l+1)- V(2l+2) U(2l+1)+(-)^2l_2l+1)×(U(2l)V(2l+1)-_2l+2^5V(2l+2)_2l U(2l+3)+^3(-)^2l_2l)=0.The equation above holds because of Equation (<ref>). Therefore, Equation (<ref>) holds. Let 𝒦 be a field of rational functions overdefined as𝒦=(a_1,…,a_2N,b,c,q^1/2N-1).Subsequently, from Lemmas <ref> and <ref>, the following lemma holds.Variables f(l) and g(l) are expressed as rational functions of variables {f_1,…,f_2N} over 𝒦.From Equations (<ref>) and (<ref>), the variables f(l) and g(l) are expressed as a rational function of {f_1,…,f_2N} and that of {g_1,…,g_2N} over 𝒦, respectively.The variables g_i, i=1,…,2N, are shown to be rational functions of {f_1,…,f_2N} over 𝒦, as described below. From (<ref>), it is evident that g_1 is a rational function of the variables {f_1,…,f_2N}. From (<ref>), g_2i+3, i=0,…,N-2, are sequentially shown to be rational functions of the variables {f_1,…,f_2N}. Finally, g_2j+2, j=0,…,N-1, are shown to be rational functions of variables {f_1,…,f_2N} using (<ref>). We can easily verify that the conditions of reduction (<ref>) and (<ref>) are also invariant under the actions of the transformations {,w,μ}. Therefore, these transformations can be used even after reduction. The actions of these transformations can be obtained from (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>). These are given in the following lemma.The actions of the transformations {,w,μ} on the parameters {a_0,…,a_2N,b,c,q} are given by (a_i)=a_i+2,(b)=q^2b,(c)=c,(q)=q,w(a_i)=1a_2N+1-i,w(b)=q^2N+1b,w(c)=c^-1,w(q)=q^-1,μ(a_i)=a_i,μ(b)=b,μ(c)=c,μ(q)=q, where i∈/(2N+1) and =q^2N+1/1-2N, while their actions on the variables {f_1,…,f_2N} are given by (f_i)=f_i+2, i=1,…,2N-2,(f_2N-1)=f(2N+1),(f_2N)=f(2N+2),w(f_j)=f_2N+1-j,μ(f_j)=g_j,j=1,…,2N. The variables f(2N+1) and f(2N+2) are given by (<ref>), and variables {g_1,…,g_2N} are given by (<ref>).§ PROOF OF THEOREM <REF>In this section, we first review the transformation group W((A_2N⋊ A_1)^(1)) given in <cit.>. Then, using the transformations {,w,μ} given in <ref>, we extend it to the transformation group W((A_2N⋊ A_1)^(1)× A_1^(1)). §.§ Review of the transformation group W((A_2N⋊ A_1)^(1))This subsection reviews the result in <cit.>.Let us define the action of the transformations {s_1,…,s_2N,w_1,π} on the complex parameters {a_0,…,a_2N,b,c,q} and complex variables {f_1,…,f_2N}. 
These parameters satisfy the following relation:∏_i=0^2Na_i=q.The actions of the transformations {s_1,…,s_2N,w_1,π} on the parameters {a_0,…,a_2N,b,c,q} are given by s_i(a_j)=a_i^-1 ifj=i, a_ia_j ifj=i± 1, a_j otherwise,i=1,…,2N-1,s_2N(a_j)=a_2N^-1 ifj=2N, a_2Na_j ifj=0,2N-1, a_j otherwise,s_k(b)=b,s_k(c)=c,s_k(q)=q,k=1,…,2N,w_1(a_j)=1a_2N+1-j,w_1(b)=q^2N+1b,w_1(c)=c^-1,w_1(q)=q^-1,π(a_j)=a_j+1,π(b)=qb,π(c)=c^-1,π(q)=q, where j∈/(2N+1), while their actions on the variables {f_1,…,f_2N} are given by s_i(f_j)=f_i-1(i-1)^2-a_i^2N+1f_ia_i^2N+1(i-1)^2-f_i ifj=i-1, f_i+1a_i^2N+1(i-1)^2-f_i(i-1)^2-a_i^2N+1f_i ifj=i+1, f_j otherwise,i,j=1,…,2N,w_1(f_j)=f_2N+1-j,j=1,…,2N,π(f_j)=f_j+1 ifj=1,…,2N-1, -c^4∏_k=1^N f_2k-1(∏_k=1^N f_2k+q^2Na_0b(∏_k=1^2N-1a_k^2N-k)c) ifj=2N, where(l)=c (if l is even), c^-1 (if l is odd). Define the transformation s_0 bys_0=π^-1s_1π.Then, the transformations {s_0,…,s_2N,w_1,π} satisfy the following relations: s_i^2=(s_is_i± 1)^3=(s_is_j)^2=1, j≠ i± 1,w_1^2=1,w_1 s_k=s_2N-k+1w_1,π^∞=1,π s_k=s_k+1π,π w_1=w_1π^-1, where i,j,k∈/(2N+1). Moreover, by defining the transformations w_0 and r byw_0=π^2w_1,r=π w_1,the transformations {s_0,…,s_2N,w_0,w_1,r} satisfy the following relations: s_i^2=(s_is_i± 1)^3=(s_is_j)^2=1, j≠ i± 1,w_0^2=w_1^2=(w_0w_1)^∞=1,w_0s_i=s_2N-i+3w_0,w_1s_i=s_2N-i+1w_1,r^2=1,rs_i=s_2N-i+2r,rw_0=w_1r,rw_1=w_0r, where i,j∈/(2N+1). From the relation (<ref>), the following hold. * Transformation groups ⟨ s_0,…,s_2N⟩ and ⟨ w_0,w_1⟩ form affine Weyl groups of type A_2N^(1) and type A_1^(1), respectively.* The transformation group ⟨ s_0,…,s_2N,w_0,w_1⟩ is a semidirect product of ⟨ s_0,…,s_2N⟩ and ⟨ w_0,w_1⟩, that is, ⟨ s_0,…,s_2N,w_0,w_1⟩=⟨ s_0,…,s_2N⟩⋊⟨ w_0,w_1⟩.We denote it as W((A_2N⋊ A_1)^(1)).* The transformation r corresponds to a reflection of the Dynkin diagram of type A_2N^(1) associated with ⟨ s_0,…,s_2N⟩ and to that of type A_1^(1) associated with ⟨ w_0,w_1⟩. Therefore, we refer to the transformation group ⟨ s_0,…,s_2N,w_0,w_1,r⟩ as an extended affine Weyl group of type (A_2N⋊ A_1)^(1) and denote it as W((A_2N⋊ A_1)^(1)), that is,W((A_2N⋊ A_1)^(1))=(⟨ s_0,…,s_2N⟩⋊⟨ w_0,w_1⟩)⋊⟨ r⟩.Define the transformations T_0,T_1∈W((A_2N⋊ A_1)^(1)) asT_0=π^-2N-1,T_1=π s_2Ns_2N-1⋯ s_1,whose actions on the parameters {a_0,…,a_2N,b,c,q} are given by T_0(a_i)=a_i, i=0,…,2N,T_0(b)=q^-2N-1b,T_0(c)=c^-1,T_0(q)=q,T_1(a_j)=qa_0 ifj=0, q^-1a_1 ifj=1, a_j otherwise,T_1(b)=qb,T_1(c)=c^-1,T_1(q)=q. The system (<ref>) can be obtained from the action of T_0 with the following correspondence (see <cit.> for details):0em0.5em =T_0,F_i=f_i,p=q^-2N-1.Thus, the transformation T satisfying (<ref>) is given byT=T_0.Also, the transformation T̂ satisfying (<ref>) is given byT̂=T_1^2N+1T_0.Note that in <ref>, we omit parameter a_0 using relation (<ref>), and we use p=q^-2N-1 instead of q. The transformation π∈W((A_2N⋊ A_1)^(1)) gives the following relation:π^2N(f_1)f_1(∏_k=1^N-1π^2k(f_1))+c^4(∏_k=1^N π^2k-1(f_1))+q^2Na_0b c^3∏_k=1^2N-1a_k^2N-k=0.Applying π^l on the equation above and settingf(k)=π^k-1(f_1),we obtain the 2Nth-order q- (<ref>). §.§ Proof of Theorem <ref>In this subsection, using the transformations {,w,μ} given in <ref> and transformation group W((A_2N⋊ A_1)^(1)) shown in <ref>, we prove Theorem <ref>. 
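Before turning to the proof, we remark that the parameter-level part of the above actions can be checked mechanically. The following sketch (ours, purely illustrative; it covers only the actions on {a_0,…,a_2N,b,c,q} and not those on the f-variables) verifies a sample of the relations (<ref>) for N=2 by direct substitution with sympy:

```python
import sympy as sp

N = 2
n = 2 * N + 1                       # a-indices live in Z/(2N+1)
a = sp.symbols(f'a0:{n}', positive=True)
b, c, q = sp.symbols('b c q', positive=True)
P = (*a, b, c, q)

def s(i, p):
    """Substitution map of s_i on (a_0,...,a_2N, b, c, q); indices mod 2N+1."""
    av = list(p[:n])
    ai = av[i % n]
    av[(i - 1) % n] *= ai           # s_i(a_{i-1}) = a_i a_{i-1}
    av[(i + 1) % n] *= ai           # s_i(a_{i+1}) = a_i a_{i+1}
    av[i % n] = 1 / ai              # s_i(a_i) = a_i^{-1}
    return (*av, *p[n:])

def pi(p):
    """pi: a_j -> a_{j+1}, b -> q*b, c -> 1/c, q -> q."""
    av, bv, cv, qv = p[:n], p[n], p[n + 1], p[n + 2]
    return (*(av[(j + 1) % n] for j in range(n)), qv * bv, 1 / cv, qv)

def w1(p):
    """w_1: a_j -> 1/a_{2N+1-j}, b -> q^{2N+1}/b, c -> 1/c, q -> 1/q."""
    av, bv, cv, qv = p[:n], p[n], p[n + 1], p[n + 2]
    return (*(1 / av[(n - j) % n] for j in range(n)),
            qv**n / bv, 1 / cv, 1 / qv)

same = lambda p1, p2: all(sp.simplify(x - y) == 0 for x, y in zip(p1, p2))

assert same(s(1, s(1, P)), P)                 # s_1^2 = 1
t = P
for _ in range(3):                            # (s_1 s_2)^3 = 1
    t = s(2, s(1, t))
assert same(t, P)
assert same(s(3, s(1, P)), s(1, s(3, P)))     # (s_1 s_3)^2 = 1
assert same(w1(w1(P)), P)                     # w_1^2 = 1
assert same(w1(s(1, P)), s(2 * N, w1(P)))     # w_1 s_1 = s_{2N} w_1
assert same(pi(s(1, P)), s(0, pi(P)))         # s_0 = pi^{-1} s_1 pi, cyclically
print("parameter-level checks passed for N =", N)
```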
For this purpose, in this subsection, we discuss the proof under the following setup: * The parameters {a_0,…,a_2N,b,c,q} and variables {f_1,…,f_2N} defined in <ref> are considered identical to those in <ref>.* We consider that the transformations {,w,μ} have actions defined only on the parameters {a_0,…,a_2N,b,c,q} and variables {f_1,…,f_2N}. Comparing the actions of the transformations {,w} (<ref>) and (<ref>) with those of {π,w_1} (<ref>) and (<ref>), we obtain: =π^2,w=w_1,which implies that ,w∈W((A_2N⋊ A_1)^(1)). We note Remark <ref> for the action of π. Next, we focus on transformation μ. It is evident from the actions on parameter c that transformation μ is not contained within W((A_2N⋊ A_1)^(1)).Therefore, in the following, we consider what type of extended affine Weyl group is formed by extending the transformation group W((A_2N⋊ A_1)^(1)) through the addition of transformation μ.The following relations hold:s_1 μ=μ s_1,w_1μ=μ w_1,π^2μ=μπ^2,(μπ)^∞=1. Because of (<ref>) and (<ref>), the relations w_1μ=μ w_1 and π^2μ=μπ^2. For a positive integer k, the following holds:(μπ)^k(c)=^-kc,which gives (μπ)^∞=1.Let us prove that the relation s_1 μ=μ s_1 holds. Because the relation obviously holds for the action on the parameters, we consider the action on the f-variables. Applying s_1 to Equation (<ref>), we obtain(1-a_1^2N+1f_1c^2)f_1-^2N-3c^6-4Ns_1(g_1)f_1-q^2N+1^2N-3c^4s_1(g_1)+a_1^2N+1(c^2-a_1^2N+1f_1)a_1^2N+1c^2-f_1{ ∑_k=1^N-1∏_i=1^k f_2i-1(∏_i=1^2ka_i^2N+1)c^4k(∏_i=1^k f_2i)(1-f_2k+1a_2k+1^2N+1c^2)..+a_0^2N+1q^2N+1c^4N(∏_i=1^N f_2i)(0em2.5em.q^2Nbc∏_i=0^2N-1a_i^2N-i+∏_i=1^N f_2i-1.0em2.5em) }=0.By eliminating the second and third terms of Equation (<ref>) using the equation above, we obtain(1-f_1a_1^2N+1c^2)f_1-^2N-3c^6-4Ng_1f_1-q^2N+1^2N-3c^4g_1=a_1^2N+1c^2-f_1a_1^2N+1(c^2-a_1^2N+1f_1) (1-a_1^2N+1f_1c^2)f_1-^2N-3c^6-4Ns_1(g_1)f_1-q^2N+1^2N-3c^4s_1(g_1),which, upon simplification, gives s_1(g_1)=g_1.Furthermore, by comparing the equation obtained by applying s_1 to Equation (<ref>)_l_1=0 with Equation (<ref>)_l_1=0, we obtains_1(g_3)=g_3.Similarly, we can proves_1(g_2l_1+3)=g_2l_1+3, l_1=1,…,N-2,sequentially from Equation (<ref>). Finally, by comparing the equation obtained by applying s_1 to Equation (<ref>) with Equation (<ref>), we obtain:s_1(g_2l_2+2)=g_2l_2+2, l_2=0,…,N-1.Therefore, we obtains_1 μ(f_l)=s_1(g_l)=g_l=μ(f_l)=μ s_1(f_l), l=1,…,2N.Thus, s_1 μ=μ s_1 holds. Defining the transformation μ_0 and μ_1 asμ_0=π^-1μπ,μ_1=μ,the following lemma holds.The following relations hold:μ_i^2=(μ_0μ_1)^∞=1,μ_i s_j=s_j μ_i,μ_i w_k=w_k μ_i,rμ_0=μ_1 r,rμ_1=μ_0 r,where i=0,1,j=0,…,2N,k=0,1. 
Relation (μ_0μ_1)^∞=1 holds becuse for positive integer k, the following holds:(μ_0μ_1)^k(c)=^2kc.The other relations are shown using Equations (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>) as follows:μ_0^2=(π^-1μπ)(π^-1μπ)=π^-1μ^2π=1,μ_1^2=μ^2=1,rμ_0=(π w_1)(π^-1μπ) =w_1 π^-2μπ =w_1 μπ^-1 =μ w_1 π^-1 =μ (π w_1) =μ_1 r,rμ_1=rμ_1r^2=r (μ_1 r) r=r (r μ_ 0) r=r^2 μ_ 0 r=μ_ 0 r,μ_1 w_1=μ w_1=w_1 μ=w_1 μ_1,μ_1 w_0=μ (π^2 w_1)=(π^2 w_1)μ=w_0 μ_1,μ_0 w_1=μ_0 r^2 w_1=(μ_0 r) (r w_1)=r(μ_1 w_0)r=r(w_0 μ_1)r=(w_1 r)(r μ_0)=w_1 μ_0,μ_0 w_0=μ_0 r^2 w_0=(μ_0 r)(r w_0)=r(μ_1 w_1)r=r(w_1μ_1)r=(w_0 r)(r μ_0)=w_0μ_0,μ_1 s_1=μ s_1=s_1 μ=s_1μ_1,μ_1 s_2i+1=μ (π^2is_1π^-2i)=(π^2is_1π^-2i)μ=s_2i+1μ_1,μ_1 s_2N=μ_1 w_1^2 s_2N=(μ_1 w_1)(w_1 s_2N)=w_1 (μ_1 s_1) w_1=w_1 (s_1 μ_1) w_1=s_2Nμ_1,μ_1 s_2j=μ_1(π^2(j-N)s_2Nπ^2(N-j))=(π^2(j-N)s_2Nπ^2(N-j))μ_1=s_2jμ_1,μ_0 s_k=(π^-1μ_1 π) s_k=π^-1μ_1 s_k+1π=π^-1 s_k+1μ_1 π =s_k(π^-1μ_1 π)=s_k μ_0, wherei=1,…,N-1,j=0,…,N-1,k∈/(2N+1).From equation (<ref>), we find that the transformation group ⟨μ_0,μ_1⟩ forms the affine Weyl group of type A_1^(1), and is orthogonal to the transformation groupW((A_2N⋊ A_1)^(1))=⟨ s_0,…,s_2N⟩⋊⟨ w_0,w_1⟩.The transformation r corresponds not only to a reflection of the Dynkin diagram of type A_2N^(1) associated with ⟨ s_0,…,s_2N⟩ and that of type A_1^(1) associated with ⟨ w_0,w_1⟩, but also to a reflection of the Dynkin diagram of type A_1^(1) associated with ⟨μ_0,μ_1⟩. Therefore, we refer to the transformation group⟨ s_0,…,s_2N,w_0,w_1,μ_0,μ_1,r⟩=(W((A_2N⋊ A_1)^(1))×⟨μ_0,μ_1⟩)⋊⟨ r⟩as an extended affine Weyl group of type (A_2N⋊ A_1)^(1)× A_1^(1) and denote it as W((A_2N⋊ A_1)^(1)× A_1^(1)),that is,W((A_2N⋊ A_1)^(1)× A_1^(1))=((⟨ s_0,…,s_2N⟩⋊⟨ w_0,w_1⟩)×⟨μ_0,μ_1⟩)⋊⟨ r⟩.Thus, we have completed the proof of Theorem <ref>.In the case N=1, the group W((A_2⋊ A_1)^(1)× A_1^(1)) can be reconstructed as a subgroup of the extended affine Weyl symmetry group of type A_4^(1) for A_4^(1)-surface type q-Painlevé equations <cit.>. Indeed, all elements of W((A_2⋊ A_1)^(1)× A_1^(1)) are given by the elements of the transformation group in <cit.>:W(A_4^(1))=⟨ s_0, s_1, s_2, s_3, s_4⟩⋊⟨σ,ι⟩as the following: s_0= s_0,s_1= s_3 s_4 s_3,s_2= s_1 s_2 s_1,w_0=σι s_2 s_4,w_1=ι,μ_0= s_4 s_0 s_1 s_0 s_4,μ_1= s_2 s_3 s_2,r=σ^3ι s_4. In this case, the correspondence between the parameters {a_0,a_1,a_2,b,c,q} and variables {f_1,f_2} on which W((A_2⋊ A_1)^(1)× A_1^(1)) acts and the parameters { a_0, a_1, a_2, a_3, a_4, q} and variables { f_1^(i), f_2^(i)}_i=1,…,5 on which W(A_4^(1)) acts is given below. a_0= a_0^-1/6,a_1= a_3^-1/6 a_4^-1/6,a_2= a_1^-1/6 a_2^-1/6,b=- a_0^1/4 a_1^7/12 a_2^1/12 a_3^5/12 a_4^1/12,c= a_0^1/4 a_1^1/4 a_4^1/4 a_2^1/4 a_3^1/4,q= q^-1/6,f_1= a_0^1/2 a_1^1/2 a_2^1/2(1- a_0 a_4 a_2 a_3 f_1^(2) f_1^(3)),f_2=- a_0^1/2 a_3^1/2 a_4^1/2 f_1^(3). The transformation T in <ref> is given by, for example, T=μ_0μ_1∈W((A_2N⋊ A_1)^(1)× A_1^(1))whose action on the parameters {a_0,…,a_2N,b,c,q} isT(a_i)=a_i, i=0,…,2N,T(b)=b,T(c)=^2c,T(q)=q,where =q^2N+1/1-2N.§ CONCLUDING REMARKSIn this study, we extended the birational representation of an extended affine Weyl group of (A_2N⋊ A_1)^(1)-type, the symmetry of the system (<ref>) obtained in <cit.>, to a birational representation of an extended affine Weyl group of (A_2N⋊ A_1)^(1)× A_1^(1)-type. For this purpose, we used the system (<ref>) with the CAC property.Surprisingly, a Painlevé-typeis related to a system ofsuch as system (<ref>), which consists of an alternation of two types of CAC-cubes (see Appendix <ref>). 
This finding suggests that the theory of the CAC property is deeply connected to the theory of Painlevé-type equations. We provide conjectures on periodic reductions from systems of P∆Es with the CAC property to Painlevé-type q-O∆Es in Appendix <ref>.

§.§ Acknowledgment

This work was supported by JSPS KAKENHI, Grant Numbers JP19K14559 and JP23K03145.

§ CAC AND TETRAHEDRON PROPERTIES OF THE SYSTEM (<REF>)

This Appendix shows that system (<ref>) has the CAC and tetrahedron properties. (See <cit.> for the CAC and tetrahedron properties.) Let us consider system (<ref>) as a system of equations around cubes, as follows. Assign the variables U_l,m, V_l,m to the vertices (l,m,0),(l,m,1)∈ℤ^3, respectively. Then, for fixed l,m∈ℤ, the system (<ref>) can be regarded as equations on the faces (face equations) of the cube whose vertices are given by: (l,m,0),(l+1,m,0),(l,m+1,0),(l+1,m+1,0),(l,m,1),(l+1,m,1),(l,m+1,1),(l+1,m+1,1). Here, we label the six faces of the cube as 𝒜, 𝒜', ℬ, ℬ', 𝒞, 𝒞', as illustrated in Figure <ref>. From the system (<ref>), the six face equations are given by 𝒜: U U+^4UU+_l _m=0, 𝒜': V V+^4VV+_l _m=0, ℬ: U V= V U, ℬ': U V=^4^4( V U+_m^4(-)^l-2m), 𝒞: U V=^3^3( V U-^3(-)_l^l-2m), 𝒞': U V=( U V+(-)_l^l-2m-3)^-1, if (l+m) is even, and by 𝒜: U U+U^4U+_l _m=0, 𝒜': V V+V^4V+_l _m=0, ℬ: U V=^4^4( V U+_m^4(-)^l-2m-1), ℬ': U V= V U, 𝒞: U V=( U V+(-)_l^l-2m-1)^-1, 𝒞': U V=^3^3( V U-^3(-)_l^l-2m-2), if (l+m) is odd. Here, U=U_l,m, U=U_l+1,m, U=U_l,m+1, U=U_l+1,m+1, V=V_l,m, V=V_l+1,m, V=V_l,m+1, V=V_l+1,m+1. Denote the cubes given by (<ref>) and (<ref>) as C^(e)_l,m and C^(o)_l,m, respectively. By direct calculation, we can verify that both cubes C^(e)_l,m and C^(o)_l,m have the CAC and tetrahedron properties. Because all cubes composing the system (<ref>) have the CAC and tetrahedron properties, system (<ref>) is said to have the CAC and tetrahedron properties. Note that the tetrahedron relations of the cube C^(e)_l,m are given by U U-_l^l-2m-3(-)(U V+^2 U V+_l _mU V)=0, V V+_l^l-2m-3(-)(U V+^2 U V+_l _mU V)=0, while those of the cube C^(o)_l,m are given by V V+_l^l-2m-2^3(-)(U V+^2 U V+_l _m ^3U V)=0, U U-_l^l-2m-2^3(-)(U V+^2 U V+_l _m ^3U V)=0. Adler-Bobenko-Suris <cit.> and Boll <cit.> classified the six face equations of cubes with the CAC and tetrahedron properties. The face equations given by systems (<ref>) and (<ref>) belong to D_4 of the type H^6 in Boll's classification. Conversely, system (<ref>) can be regarded as a system of P∆Es obtained by space-filling the two types of cubes C^(e)_l,m and C^(o)_l,m into the sublattice of the lattice ℤ^3 whose vertices are given by (<ref>). The cubes C^(e)_l,m and C^(o)_l,m are arranged in quadrilaterals of the lattice ℤ^2 as illustrated in Figure <ref>.

§ CONJECTURES ON PERIODIC REDUCTIONS FROM SYSTEMS OF P∆ES WITH THE CAC PROPERTY TO PAINLEVÉ-TYPE Q-O∆ES

Fix the integers n≥ 1 and L≥ 0. In this appendix, we label a pair, consisting of the system of P∆Es (l_0⋯ n)^2U_ijU=^(i)(l_i) U_i-^(j)(l_j) U_j^(j)(l_j) U_i-^(i)(l_i) U_j, i<j, i,j∈{1,…,n}, U_0 kU+(l_0⋯ n)^4U_0U_k+^(k)(l_k) (l_0) (l_0⋯ n)=0, k=1,…,n, and the periodic condition U(l_1+1,…,l_n+1,l_0+L)=U(l_1,…,l_n,l_0), as an (n,L)-pair. Note that the (n,0)- and (1,L)-pairs are exceptions and are defined as follows. The (n,0)-pair is defined by a pair of the system (<ref>) and the condition U(l_1,…,l_n,l_0)=G(l_1,…,l_n,l_0)ω(l_1,…,l_n,l_0), ω(l_1+1,…,l_n+1,l_0)=ω(l_1,…,l_n,l_0), where, depending on the value of n, it is necessary to specify an appropriate gauge function G(l_1,…,l_n,l_0). (See <cit.> for the (3,0)-pair.)
On the other hand, the (1,L)-pair is defined as follow:U(l_1+1,l_0+1)U(l_1,l_0)+(l_0+l_1)^4U(l_1,l_0+1)U(l_1+1,l_0)+^(1)(l_1) (l_0) (l_0+l_1)=0,U(l_1+1,l_0+L)=U(l_1,l_0).Here, l_0,…,l_n∈ are lattice parameters,{^(1)(l),…,^(n)(l),(l),_0}_l∈ are complex parameters and U=U(l_1,…,l_n,l_0),U_i=U|_ l_i→ l_i+1,U_ij=U|_(l_i,l_j)→ (l_i+1,l_j+1),l_0⋯ n:=∑_i=0^nl_i,(l)=_0 l∈2, 1_0 otherwise.The system (<ref>) has the CAC and tetrahedron properties. Equation (<ref>) with fixed i and j is known as the lattice modified KdV equation <cit.> and belongs to H3 in ABS's classification <cit.>, whereas Equation (<ref>) with fixed k belongs to D_4 of the type H^6 in Boll's classification <cit.>. §.§ Conjectures on the (n,L)-pairThis subsection presents conjectures on the (n,L)-pair.Before providing conjectures on the (n,L)-pair, let us explain its results. * In <cit.>, we obtained a birational representation of an extended affine Weyl group of (A_n-1⋊ A_1)^(1)-type from the (n,1)-pair, where n≥ 2. Note that the (2,1)-pair and (3,1)-pair were previously studied in <cit.> and <cit.>, respectively. In particular, the birational representation for the (2N+1,1)-pair, where N≥ 1, was extended in this study to a birational representation of an extended affine Weyl group of type (A_2N⋊ A_1)^(1)× A_1^(1). * From the (1,L)-pair, where L≥ 2, we obtain the Painlevé type q-s given in <cit.>. See <ref> for details.* We can derive q-Painlevé equations of A_5^(1)-surface type from the (3,0)-pair (see <cit.>). Moreover, we can obtain q-Painlevé equations of A_6^(1)- and A_4^(1)-surface types from the (2,1)- and (3,1)-pairs, respectively (see <cit.>) andq-Painlevé equations of A_7^(1)- and A_6^(1)-surface types from the (1,2)- and (1,3)-pairs, respectively (see <ref>).The following are conjectures. * We expect that the (A_n-1× A_1)^(1)-type KNY's representation can be derived from the (n,0)-pair, where n≥ 3, in the same manner as in <cit.>. (It has been shown in <cit.> that this conjecture is correct for the (3,0)-pair.) In particular, the birational representation for the (2N+2,0)-pair, where N≥ 1, can be extended to the (A_2N+1× A_1× A_1)^(1)-type extended KNY's representation by using a different pair of a system of s and a periodic condition from the (2N+2,0)-pair, as demonstrated in this study.* We expect to obtain q-Painlevé equations of A_5^(1)- and A_3^(1)-surface types from the (2,2)- and (4,0)-pairs, respectively.* Through a limit operation, the (n, L)-pair can be degenerated to the (n-1, L+1)-pair.As illustrations, limit operations from the (3, L)-pair, where L≠0, to the (2, L+1)-pair and from the (2, L)-pair, where L≠0, to the (1, L+1)-pair are demonstrated in <ref>.The results and conjectures mentioned above and the predictions for the other pairs are summarized in Table <ref>. Note that everything not mentioned in the above results is a conjecture. The following items briefly explain Table <ref>. 
* q-O∆Es appearing at the positions marked with "×" in the table are linearizable. Moreover, there exist q-Painlevé equations of A_i^(1)-surface type at the positions marked with "A_i^(1)", and there exist jth-order q-O∆Es at the positions marked with "jth".
* The number k in the symbol "[k]" represents the number of parameters of the appearing q-O∆E, including an independent variable but excluding a shift parameter. For example, let us consider the system (<ref>) obtained from the (2N+1,1)-pair, where N≥ 1. The parameters involved in the system are {a_1,…,a_2N,b,c,p}. Thus, the number of parameters, including the independent variable "b" but excluding the shift parameter "p", is 2N+2.
* Moving to the right in two steps (i.e., (n,L)→ (n+2,L)) increases the order of the q-O∆Es by two and also increases the number of their parameters by two. On the other hand, moving downward in two steps (i.e., (n,L)→ (n,L+2)) increases the order of the appearing q-O∆Es by two, but the number of their parameters remains unchanged.
* An appropriate limit operation induces degeneration in the bottom-left direction of the table (that is, (n,L)→ (n-1,L+1)).
A spectral direction of a linear problem for Painlevé type q-s derived from the (n,L)-pair takes the following form:T_x(ϕ)=[ ∗ x ∗; ∗ 0 ]…[ ∗ x ∗; ∗ 0 ].[ ∗ x ∗; ∗ ∗ x ]…[ ∗ x ∗; ∗ ∗ x ].ϕ,where the coefficient matrices above are given by the products of L matrices of the form:[ ∗ x ∗; ∗ 0 ]and n matrices of the form[ ∗ x ∗; ∗ ∗ x ].Here, the entries denoted by “∗" are expressed as rational functions of the parameters {^(1)(l),…,^(n)(l),(l),_0}_l∈ and the U-variables, and the entries remain invariant under the action of T_x.Expressing the “∗" entries solely in terms of the Painlevé parameters and variables may require careful gauge transformations of vector ϕ.For specific examples, see <cit.> for the (n,1)-pair.(See also <cit.> for the (3,1)-pair.)§.§ Painlevé type q-s obtained from the (1,L)-pairIn this subsection, we show that the (1,L)-pair gives the Painlevé type q-s in <cit.>. We consider the cases separately based on the parity of L. §.§.§ The case L is evenConsider the (1,2h)-pair, where h∈_≥1. In this case, condition (<ref>) is equivalent to expressing U(l_1,l_0) as:U(l_1,l_0)=U(l_0-2hl_1).Substituting (<ref>) into Equation (<ref>), we obtainU(l+2h+1)U(l)+U(l+1)U(l+2h)+p ^(1)(0) (l)=0,with the conditions of the parameters(l_1)(l_1+1)=(l_0+2h)(l_0)=p,_0=1,where p∈. Settingx_l=U(l+1)U(l),from (<ref>) we obtainx_l+2hx_l=-1∏_k=1^2h-1x_l+k(1∏_k=1^2h-1x_l+k+p ^(1)(0) (l)).Sety_l=p ^(1)(0) (l)U(l+2h)U(l+1)=p ^(1)(0) (l)∏_k=1^2h-1x_l+k.Then, from Equation (<ref>) the following relation holds:x_l+2hx_l=-p^2 ^(1)(0)^2 (l)^2y_l+1y_l^2.Therefore, we obtainy_l+2hy_l =p^3 ^(1)(0)^2 (l)^2∏_k=1^2h-1x_l+2h+kx_l+k=p^3 ^(1)(0)^2 (l)^2(∏_k=1^2h-1-p^2 ^(1)(0)^2 (l+k)^2y_l+k+1y_l+k^2)=p^4h+3^(1)(0)^4h+2(∏_k=0^2h-1(l+k)^2)(∏_k=1^2h-1y_l+k+1y_l+k^2).Finally, defining q and t_l byq=p^2,t_l=q^4h+3^(1)(0)^4h+2(∏_k=0^2h-1(l+k)^2),we obtain the following one kind of Painlevé type q-s in <cit.>:y_l+2hy_l=t_l(∏_k=1^2h-1y_l+k+1y_l+k^2),where t_l=q^lt_0.Equation (<ref>) has an extended affine Weyl group symmetry of A_1^(1)-type (see <cit.> for details). When h=1, Equation (<ref>) is equivalent to a q-Painlevé I equation of A_7^(1)-surface type<cit.> (see <cit.> for details).§.§.§ The case L is oddConsider the (1,2h+1)-pair, where h∈_≥1. In this case, condition (<ref>) is equivalent to expressing U(l_1,l_0) as:U(l_1,l_0)=U(l_0-(2h+1)l_1).Substituting (<ref>) into Equation (<ref>), we obtainU(l+2h+2)U(l)+(l)^4U(l+1)U(l+2h+1)+q^(1)(0) (l)(l)^3=0,with the conditions of the parameters(l_1)(l_1+1)=(l_0+2h+1)(l_0)=q,where q∈. Settingx_l=U(l+2)U(l),from (<ref>) we obtainx_l+2hx_l=-(l)^3∏_k=1^h-1x_l+2k((l)∏_k=0^h-1x_l+2k+1+q^(1)(0) (l)).Sety_l=q^(1)(0) (l-1)(l)U(l+2h)U(l)=q^(1)(0) (l-1)(l)∏_k=0^h-1x_l+2k.Then, from Equation (<ref>) the following relation holds:x_l+2h=-q^2^(1)(0)^2 (l-1)(l)(l)^4y_l+1+1y_ly_l+1.Therefore, we obtainy_l+2h =q^(1)(0) (l+2h-1)(l)∏_k=0^h-1x_l+2h+2k=(-1)^h q^2h^(1)(0)^2h+1(l)^4h+1(∏_k=0^2h(l+k))(∏_k=0^h-1y_l+2k+1+1y_l+2ky_l+2k+1)=(-1)^h q^2h^(1)(0)^2h+1(l)^4h+1(∏_k=0^2h(l+k))∏_k=0^h-1(y_l+2k+1+1)y_l(∏_k=1^2h-1y_l+k).Finally, defining c and t_l byc=_0^4h+1,t_l=(-1)^h q^2h^(1)(0)^2h+1(∏_k=0^2h(l+k)),we obtain the following another kind of Painlevé type q-s in <cit.>:y_l+2hy_l=c^(-1)^lt_l∏_k=0^h-1(y_l+2k+1+1)∏_k=1^2h-1y_l+k,where t_l=q^lt_0.Equation (<ref>) has an extended affine Weyl group symmetry of (A_1× A_1)^(1)-type (see <cit.> for details). When h=1, Equation (<ref>) is equivalent to a q-Painlevé II equation of A_6^(1)-surface type<cit.> (see <cit.> for details). 
§.§ Degeneration of the (3, L)- and (2, L)-pairs through limit operationsIn this subsection, we demonstrate degeneration from the (3, L)-pair, where L≥1, to the (2, L+1)-pair and from the (2, L)-pair, where L≥1, to the (1, L+1)-pair through limit operations. §.§.§ Degeneration from the (3, L)-pair to the (2, L+1)-pairLet L≥1. Then, the (3, L)-pair is given by the system of s (l_0⋯ 3)^2U_12U=^(1)(l_1) U_1-^(2)(l_2) U_2^(2)(l_2) U_1-^(1)(l_1) U_2,(l_0⋯ 3)^2U_13U=^(1)(l_1) U_1-^(3)(l_3) U_3^(3)(l_3) U_1-^(1)(l_1) U_3,(l_0⋯ 3)^2U_23U=^(2)(l_2) U_2-^(3)(l_3) U_3^(3)(l_3) U_2-^(2)(l_2) U_3,U_0 1U+(l_0⋯ 3)^4U_0U_1+^(1)(l_1) (l_0) (l_0⋯ 3)=0,U_0 2U+(l_0⋯ 3)^4U_0U_2+^(2)(l_2) (l_0) (l_0⋯ 3)=0,U_0 3U+(l_0⋯ 3)^4U_0U_3+^(3)(l_3) (l_0) (l_0⋯ 3)=0, where U=U(l_1,l_2,l_3,l_0), and the periodic condition U(l_1+1,l_2+1,l_3+1,l_0+L)=U(l_1,l_2,l_3,l_0).Due to the periodic condition (<ref>), the parameters satisfy the following conditions:^(1)(l_1)^(1)(l_1+1)=^(2)(l_2)^(2)(l_2+1)=^(3)(l_3)^(3)(l_3+1)=(l_0+L)(l_0)=q,where q∈, which give^(1)(l_1)=q^-l_1^(1)(0),^(2)(l_2)=q^-l_2^(2)(0),^(3)(l_3)=q^-l_3^(3)(0).If L is even, then the following additional condition is needed:_0=1.Therefore, regardless of the parity of L, the following relation holds:(l+L)=1(l). In the following, we show that pair (<ref>) and (<ref>) is reduced to the (2, L+1)-pair through a limit operation. Condition (<ref>) is equivalent to expressing U(l_1,l_2,l_3,l_0) asU(l_1,l_2,l_3,l_0)=ω(l_1-l_3,l_2-l_3,l_0-Ll_3).Set ω(l_1,l_2,l_0)=ξ^-l_0A(l_0)(l_0+l_1+l_2)^3(2l_0-l_1-l_2)/2L u(l_1,l_2,l_0),^(1)(0)=ξ^L a^(1)(0),^(2)(0)=ξ^L a^(2)(0),(l_0)=ξ^-L-1 K(l_0), where ξ is a positive real number and A(l)∈ is a function satisfyingA(l+L+1)=-qK(l) ^(3)(0) A(l).Then, taking ξ→ 0, we obtain a pair of the system of s (l_0+l_1+l_2)^2u_12u=a^(1)(l_1) u_1-a^(2)(l_2) u_2a^(2)(l_2) u_1-a^(1)(l_1) u_2,u_0 1u+(l_0+l_1+l_2)^4u_0u_1+a^(1)(l_1) k(l_0) (l_0+l_1+l_2)=0,u_0 2u+(l_0+l_1+l_2)^4u_0u_2+a^(2)(l_2) k(l_0) (l_0+l_1+l_2)=0, where u=u(l_1,l_2,l_0),k(l):=A(l)K(l)A(l+1),(l):=(l)^2L-3/2L,and the periodic condition u(l_1+1,l_2+1,l_0+L+1)=u(l_1,l_2,l_0),which is equivalent to the (2,L+1)-pair. Indeed, Equation (<ref>) is reduced to Equation (<ref>), Equation (<ref>) is reduced to Equation (<ref>), Equation (<ref>) is reduced to Equation (<ref>),and Equation (<ref>) is reduced to condition (<ref>). Moreover, the reduced equations obtained from Equations (<ref>) and (<ref>) can be derived fromEquation (<ref>) with condition (<ref>) and Equations (<ref>) with condition (<ref>), respectively. Note that the parameters in system (<ref>) satisfy the following relations:a^(1)(l_1)a^(1)(l_1+1)=a^(2)(l_2)a^(2)(l_2+1)=k(l_0+L+1)k(l_0)=q,(l+1)=1(l).§.§.§ Degeneration from the (2, L)-pair to the (1, L+1)-pairLet L≥1. Then, the (2, L)-pair is given by the system of s (l_0⋯ 2)^2U_12U=^(1)(l_1) U_1-^(2)(l_2) U_2^(2)(l_2) U_1-^(1)(l_1) U_2,U_0 1U+(l_0⋯ 2)^4U_0U_1+^(1)(l_1) (l_0) (l_0⋯ 2)=0,U_0 2U+(l_0⋯ 2)^4U_0U_2+^(2)(l_2) (l_0) (l_0⋯ 2)=0, where U=U(l_1,l_2,l_0), and the periodic condition U(l_1+1,l_2+1,l_0+L)=U(l_1,l_2,l_0).Due to the periodic condition (<ref>), the parameters satisfy the following conditions:^(1)(l_1)^(1)(l_1+1)=^(2)(l_2)^(2)(l_2+1)=(l_0+L)(l_0)=q,where q∈, which give^(1)(l_1)=q^-l_1^(1)(0),^(2)(l_2)=q^-l_2^(2)(0).If L is odd, then the following additional condition is needed:_0=1.Therefore, regardless of the parity of L, the following relation holds:(l+L)=(l). 
In the following, we show that pair (<ref>) and (<ref>) is reduced to the (1, L+1)-pair through a limit operation. Condition (<ref>) is equivalent to expressing U(l_1,l_2,l_0) as:U(l_1,l_2,l_0)=ω(l_1-l_2,l_0-Ll_2).Set ω(l_1,l_0)=ξ^-l_0A(l_0)(l_0+l_1)^3(2l_0-l_1)/2L+1 u(l_1,l_0),^(1)(0)=ξ^L a^(1)(0),(l_0)=ξ^-L-1 K(l_0), where ξ is a positive real number and A(l)∈ is a function satisfyingA(l+L+1)=-qK(l) ^(2)(0) A(l).Then, taking ξ→ 0, we obtain the following pair:u_0 1u+(l_0+l_1)^4u_0u_1+a^(1)(l_1) k(l_0) (l_0+l_1)=0,u(l_1+1,l_0+L+1)=u(l_1,l_0),where u=u(l_1,l_0),k(l):=A(l)K(l)A(l+1),(l):=(l)^2(L-1)/2L+1,which is equivalent to the (1,L+1)-pair. Indeed, Equations (<ref>) and(<ref>) are reduced to Equations (<ref>) and (<ref>), respectively. Moreover, the reduced equation obtained from Equation (<ref>) can be derived fromEquation (<ref>) under condition (<ref>). Note that the parameters in Equation (<ref>) satisfy the following relations:a^(1)(l_1)a^(1)(l_1+1)=k(l_0+L+1)k(l_0)=q,(l+1)=1(l). ' ' 10AS1977:PhysRevLett.38.1103 M. J. Ablowitz and H. Segur. Exact Linearization of a Painlevé Transcendent. Phys. Rev. Lett., 38:1103–1106, May 1977.ABS2003:MR1962121 V. E. Adler, A. I. Bobenko, and Y. B. Suris. Classification of integrable equations on quad-graphs. The consistency approach. Comm. Math. Phys., 233(3):513–543, 2003.ABS2009:MR2503862 V. E. Adler, A. I. Bobenko, and Y. B. Suris. Discrete nonlinear hyperbolic equations: classification of integrable cases. Funktsional. Anal. i Prilozhen., 43(1):3–21, 2009.AJT2020:zbMATH07212273 H. Alrashdi, N. Joshi, and D. T. Tran. Hierarchies of q-discrete Painlevé equations. J. Nonlinear Math. Phys., 27(3):453–477, 2020.BGM2018:zbMATH06868180 M. Bershtein, P. Gavrylenko, and A. Marshakov. Cluster integrable systems, q-Painlevé equations and their quantization. J. High Energy Phys., 2018(2):34, 2018. Id/No 77.BS2002:MR1890049 A. I. Bobenko and Y. B. Suris. Integrable systems on quad-graphs. Int. Math. Res. Not. IMRN, (11):573–611, 2002.BollR2011:MR2846098 R. Boll. Classification of 3D consistent quad-equations. J. Nonlinear Math. Phys., 18(3):337–365, 2011.BollR:thesis R. Boll. Classification and Lagrangian Structure of 3D Consistent Quad-Equations. Doctoral Thesis, Technische Universität Berlin, 2012.BollR2012:MR3010833 R. Boll. Corrigendum: Classification of 3D consistent quad-equations. J. Nonlinear Math. Phys., 19(4):1292001, 3, 2012.FJN2008:MR2425981 C. M. Field, N. Joshi, and F. W. Nijhoff. q-difference equations of KdV type and Chazy-type second-degree difference equations. J. Phys. A, 41(33):332005, 13, 2008.FuchsR1905:quelques R. Fuchs. Sur quelques équations différentielles linéaires du second ordre. Comptes Rendus de l'Académie des Sciences Paris, 141(1):555–558, 1905.GambierB1910:MR1555055 B. Gambier. Sur les équations différentielles du second ordre et du premier degré dont l'intégrale générale est a points critiques fixes. Acta Math., 33(1):1–55, 1910.GR2000:zbMATH01498348 B. Grammaticos and A. Ramani. The hunting for the discrete Painlevé equations. Regul. Chaotic Dyn., 5(1):53–66, 2000.GRP1991:MR1125950 B. Grammaticos, A. Ramani, and V. Papageorgiou. Do integrable mappings have the Painlevé property? Phys. Rev. Lett., 67(14):1825–1828, 1991.GRSWC2005:MR2117991 B. Grammaticos, A. Ramani, J. Satsuma, R. Willox, and A. S. Carstea. Reductions of integrable lattices. J. Nonlinear Math. Phys., 12(suppl. 1):363–371, 2005.HayM2007:MR2371129 M. Hay. Hierarchies of nonlinear integrable q-difference equations from series of Lax pairs. J. Phys. 
A, 40(34):10457–10471, 2007.HHJN2007:MR2303490 M. Hay, J. Hietarinta, N. Joshi, and F. W. Nijhoff. A Lax pair for a lattice modified KdV equation, reductions to q-Painlevé equations and associated Lax pairs. J. Phys. A, 40(2):F61–F73, 2007.HHNS2015:MR3317164 M. Hay, P. Howes, N. Nakazono, and Y. Shi. A systematic approach to reductions of type-Q ABS equations. J. Phys. A, 48(9):095201, 24, 2015.HJN2016:MR3587455 J. Hietarinta, N. Joshi, and F. W. Nijhoff. Discrete systems and integrability. Cambridge Texts in Applied Mathematics. Cambridge University Press, Cambridge, 2016.HI2014:zbMATH06381200 A. N. W. Hone and R. Inoue. Discrete Painlevé equations from Y-systems. J. Phys. A, Math. Theor., 47(47):26, 2014. Id/No 474007.IIKNS2010:zbMATH05704436 R. Inoue, O. Iyama, A. Kuniba, T. Nakanishi, and J. Suzuki. Periodicities of T-systems and Y-systems. Nagoya Math. J., 197:59–174, 2010.JS1996:MR1403067 M. Jimbo and H. Sakai. A q-analog of the sixth Painlevé equation. Lett. Math. Phys., 38(2):145–154, 1996.JN2016:MR3597921 N. Joshi and N. Nakazono. Lax pairs of discrete Painlevé equations: (A_2+A_1)^(1) case. Proc. R. Soc. A., 472(2196):20160696, 14, 2016.JNS2014:MR3291391 N. Joshi, N. Nakazono, and Y. Shi. Geometric reductions of ABS equations on an n-cube to discrete Painlevé systems. J. Phys. A, 47(50):505201, 16, 2014.JNS2015:MR3403054 N. Joshi, N. Nakazono, and Y. Shi. Lattice equations arising from discrete Painlevé systems. I. (A_2 + A_1)^(1) and (A_1 + A_1^')^(1) cases. J. Math. Phys., 56(9):092705, 25, 2015.JNS2016:MR3584386 N. Joshi, N. Nakazono, and Y. Shi. Lattice equations arising from discrete Painlevé systems: II. A^(1)_4 case. J. Phys. A, 49(49):495201, 39, 2016.KN2015:MR3340349 K. Kajiwara and N. Nakazono. Hypergeometric solutions to the symmetric q-Painlevé equations. Int. Math. Res. Not. IMRN, (4):1101–1140, 2015.KNY2002:MR1917133 K. Kajiwara, M. Noumi, and Y. Yamada. Discrete dynamical systems with W(A_m-1^(1)× A_n-1^(1)) symmetry. Lett. Math. Phys., 60(3):211–219, 2002.KNY2002:MR1958118 K. Kajiwara, M. Noumi, and Y. Yamada. q-Painlevé systems arising from q-KP hierarchy. Lett. Math. Phys., 62(3):259–268, 2002.KNY2017:MR3609039 K. Kajiwara, M. Noumi, and Y. Yamada. Geometric aspects of Painlevé equations. J. Phys. A, 50(7):073001, 164, 2017.KTGR2000:MR1789477 M. D. Kruskal, K. M. Tamizhmani, B. Grammaticos, and A. Ramani. Asymmetric discrete Painlevé equations. Regul. Chaotic Dyn., 5(3):273–280, 2000.MOT2018:AmAnAg T. Masuda, N. Okubo, and T. Tsuda. Birational Weyl group actions via mutation combinatorics in cluster algebras. In Aspects of Combinatorial Representaion Theory, RIMS Kôkyûroku, 2127, pages 20–38 (in Japanese). Res. Inst. Math. Sci. (RIMS), Kyoto, 2018.MOT2021:Cluster T. Masuda, N. Okubo, and T. Tsuda. Cluster algebras and higher order generalizations of the q-Painleve equations of type A_7^(1) and A_6^(1). In Mathematical structures of integrable systems, its deepening and expansion, RIMS Kôkyûroku Bessatsu, B87, pages 149–163. Res. Inst. Math. Sci. (RIMS), Kyoto, 2021.MOT2023:AmAnAgArxiv T. Masuda, N. Okubo, and T. Tsuda. Birational Weyl group actions via mutation combinatorics in cluster algebras. arXiv preprint arXiv:2303.06704, 2023.NY2018:zbMATH06876428 H. Nagao and Y. Yamada. Variations of the q-Garnier system. J. Phys. A, Math. Theor., 51(13):19, 2018. Id/No 135204.NakazonoN2016:MR3503803 N. Nakazono. Hypergeometric τ Functions of the q-Painlevé Systems of Types A_4^(1) and (A_1+A_1')^(1). SIGMA Symmetry Integrability Geom. 
Methods Appl., 12:051, 23 pages, 2016.nakazono2023higerA4A6 N. Nakazono. Higher-order generalizations of the A_6^(1)- and A_4^(1)-surface type q-Painlevé equations. Physica Scripta, 98(11):115204, 2023.NijhoffFW2002:MR1912127 F. W. Nijhoff. Lax pair for the Adler (lattice Krichever-Novikov) system. Phys. Lett. A, 297(1-2):49–58, 2002.NC1995:MR1329559 F. W. Nijhoff and H. W. Capel. The discrete Korteweg-de Vries equation. Acta Appl. Math., 39(1-3):133–158, 1995. KdV '95 (Amsterdam, 1995).NQC1983:MR719638 F. W. Nijhoff, G. R. W. Quispel, and H. W. Capel. Direct linearization of nonlinear difference-difference equations. Phys. Lett. A, 97(4):125–128, 1983.NW2001:MR1869690 F. W. Nijhoff and A. J. Walker. The discrete and continuous Painlevé VI hierarchy and the Garnier systems. Glasg. Math. J., 43A:109–123, 2001. Integrable systems: linear and nonlinear dynamics (Islay, 1999).NobeA2016:zbMATH06618416 A. Nobe. Mutations of the cluster algebra of type A_1^(1) and the periodic discrete Toda lattice. J. Phys. A, Math. Theor., 49(28):18, 2016. Id/No 285201.book_NoumiM2004:MR2044201 M. Noumi. Painlevé equations through symmetry, volume 223 of Translations of Mathematical Monographs. American Mathematical Society, Providence, RI, 2004. Translated from the 2000 Japanese original by the author.OKSO2006:MR2277519 Y. Ohyama, H. Kawamuko, H. Sakai, and K. Okamoto. Studies on the Painlevé equations. V. Third Painlevé equations of special type P_ III(D_7) and P_ III(D_8). J. Math. Sci. Univ. Tokyo, 13(2):145–204, 2006.OkamotoK1979:MR614694 K. Okamoto. Sur les feuilletages associés aux équations du second ordre à points critiques fixes de P. Painlevé. Japan. J. Math. (N.S.), 5(1):1–79, 1979.OkuboN2015:zbMATH06499648 N. Okubo. Bilinear equations and q-discrete Painlevé equations satisfied by variables and coefficients in cluster algebras. J. Phys. A, Math. Theor., 48(35):25, 2015. Id/No 355201.Okubo2017:arxiv1704.05403 N. Okubo. Co-primeness preserving higher dimensional extension of q-discrete Painlevé I, II equations. arXiv preprint arXiv:1704.05403, 2017.OS2020:10.1093/imrn/rnaa283 N. Okubo and T. Suzuki. Generalized q-Painlevé VI Systems of Type (A_2n+1+A_1+A_1)^(1) Arising From Cluster Algebra. International Mathematics Research Notices, 2022(9):6561–6607, 12 2020.OrmerodCM2012:MR2997166 C. M. Ormerod. Reductions of lattice mKdV to q-P_ VI. Phys. Lett. A, 376(45):2855–2859, 2012.OVHQ2014:MR3215839 C. M. Ormerod, P. H. van der Kamp, J. Hietarinta, and G. R. W. Quispel. Twisted reductions of integrable lattice equations, and their Lax representations. Nonlinearity, 27(6):1367–1390, 2014.OVQ2013:MR3030178 C. M. Ormerod, P. H. van der Kamp, and G. R. W. Quispel. Discrete Painlevé equations and their Lax pairs as reductions of integrable lattice equations. J. Phys. A, 46(9):095204, 22, 2013.PainleveP1900:zbMATH02665472 P. Painlevé. Mémoire sur les équations différentielles dont l'intégrale générale est uniforme. Bull. Soc. Math. Fr., 28:201–261, 1900.PainleveP1902:MR1554937 P. Painlevé. Sur les équations différentielles du second ordre et d'ordre supérieur dont l'intégrale générale est uniforme. Acta Math., 25(1):1–85, 1902.PainleveP1907:zbMATH02647172 P. Painlevé. Sur les équations différentielles du second ordre à points critiques fixes. C. R. Acad. Sci., Paris, 143:1111–1117, 1907.PNGR1992:MR1162062 V. G. Papageorgiou, F. W. Nijhoff, B. Grammaticos, and A. Ramani. Isomonodromic deformation problems for discrete analogues of Painlevé equations. Phys. Lett. A, 164(1):57–64, 1992.RG1996:MR1399286 A. 
Ramani and B. Grammaticos. Discrete Painlevé equations: coalescences, limits and degeneracies. Phys. A, 228(1-4):160–171, 1996.RGTT2001:MR1838017 A. Ramani, B. Grammaticos, T. Tamizhmani, and K. M. Tamizhmani. Special function solutions of the discrete Painlevé equations. Comput. Math. Appl., 42(3-5):603–614, 2001. Advances in difference equations, III.SakaiH2001:MR1882403 H. Sakai. Rational surfaces associated with affine root systems and geometry of the Painlevé equations. Comm. Math. Phys., 220(1):165–229, 2001.TakenawaT2003:MR1996297 T. Takenawa. Weyl group symmetry of type D^(1)_5 in the q-Painlevé V equation. Funkcial. Ekvac., 46(1):173–186, 2003.TGCR2004:MR2058894 K. M. Tamizhmani, B. Grammaticos, A. S. Carstea, and A. Ramani. The q-discrete Painlevé IV equations and their properties. Regul. Chaotic Dyn., 9(1):13–20, 2004.TsudaT2006:MR2247459 T. Tsuda. Tropical Weyl group action via point configurations and τ-functions of the q-Painlevé equations. Lett. Math. Phys., 77(1):21–30, 2006.TsudaT2010:MR2563787 T. Tsuda. On an integrable system of q-difference equations satisfied by the universal characters: its Lax formalism and an application to q-Painlevé equations. Comm. Math. Phys., 293(2):347–359, 2010.WalkerAJ:thesis A. Walker. Similarity reductions and integrable lattice equations. Ph.D. Thesis, University of Leeds, 2001.WMTB1976:PhysRevB.13.316 T. T. Wu, B. M. McCoy, C. A. Tracy, and E. Barouch. Spin-spin correlation functions for the two-dimensional Ising model: Exact theory in the scaling region. Phys. Rev. B, 13:316–374, Jan 1976. | http://arxiv.org/abs/2312.16431v2 | {
"authors": [
"Nobutaka Nakazono"
],
"categories": [
"nlin.SI",
"math-ph",
"math.MP"
],
"primary_category": "nlin.SI",
"published": "20231227063522",
"title": "A higher-order generalization of an $A_4^{(1)}$-surface type $q$-Painlevé equation with $\\widetilde{W}\\left((A_{2N}\\rtimes A_1)^{(1)}\\times A_1^{(1)}\\right)$ symmetry"
} |
Combinatorial optimization with quantum imaginary time evolution
George Siopsis
January 14, 2024
=================================================================

[email protected]
Department of Physics and Astronomy, The University of Tennessee, Knoxville, TN 37996-1200, USA
[email protected]
Department of Physics and Astronomy, The University of Tennessee, Knoxville, TN 37996-1200, USA
[email protected]
Department of Industrial and Systems Engineering, The University of Tennessee, Knoxville, Tennessee 37996-2315, USA
[email protected]
Department of Physics and Astronomy, The University of Tennessee, Knoxville, TN 37996-1200, USA

We use Quantum Imaginary Time Evolution (QITE) to solve polynomial unconstrained binary optimization (PUBO) problems. We show that a linear Ansatz yields good results for a wide range of PUBO problems, often outperforming standard classical methods, such as the Goemans-Williamson (GW) algorithm. We obtain numerical results for the Low Autocorrelation Binary Sequences (LABS) and weighted MaxCut combinatorial optimization problems, thus extending an earlier demonstration of successful application of QITE on MaxCut for unweighted graphs. We find the performance of QITE on the LABS problem with a separable Ansatz comparable with p=10 QAOA, and do not see a significant advantage with an entangling Ansatz. On weighted MaxCut, QITE with a separable Ansatz often outperforms the GW algorithm on graphs up to 150 vertices.

§ INTRODUCTION

Many important combinatorial optimization problems can be mapped onto an Ising-type Hamiltonian. Solving a generic Ising model is an NP-hard problem <cit.>. It would be interesting to witness quantum algorithms outperform their classical counterparts on such problems, but demonstrating their superiority with NISQ devices has proved to be challenging. One of the quantum algorithms extensively studied on NISQ hardware is the Quantum Approximate Optimization Algorithm (QAOA) <cit.>. Another interesting approach, based on a quantum-enhanced classical optimization algorithm, was recently proposed and performs better on NISQ devices <cit.>. After mapping onto a Hamiltonian, the combinatorial optimization problem reduces to finding the ground state energy. Various methods have been employed to compute the ground state of a system, such as adiabatic evolution, quantum annealing, and quantum imaginary-time evolution (QITE). QITE has been widely employed in the analysis of quantum many-body systems, serving as a valuable technique for diverse purposes, including the computation of energy levels of multi-particle systems and the generation of states at finite temperatures <cit.>. Given that evolution in imaginary time effectively reduces the system to zero temperature <cit.>, the ground state can be prepared exactly using QITE without the need for variational optimization. Nevertheless, in practice approximations are necessary due to limited computational resources, prompting the need for an approach involving variational calculus.

Evolution in imaginary time τ is implemented through the non-unitary operator U(τ) = e^-τℋ, where ℋ denotes the Hamiltonian of the system of interest. Starting with an initial state that has non-zero overlap with the system's ground state, the evolved state converges to the ground state as τ approaches infinity. Excited states are also accessible by selecting an appropriate initial state, one that is orthogonal to the ground state.
Simulating QITE on a quantum computer is not straightforward, because U(τ) is a non-unitary operator. Motta et al. <cit.> proposed a QITE algorithm that dispensed with the need for classical optimization, distinguishing it from QAOA <cit.>. Furthermore, it exhibited an advantage over a variational quantum eigensolver (VQE) by not employing ancilla qubits. The approach found practical application in the quantum computation of chemical energy levels on NISQ hardware <cit.> and in the simulation of open quantum systems <cit.>. The impact of noise on QITE in NISQ hardware was addressed in <cit.> using error mitigation and randomized compiling. Error mitigation was also addressed with a different method based on deep reinforcement learning <cit.>. The reduction of quantum circuit depth for QITE through a nonlocal approximation was discussed in <cit.>. The implementation of real and imaginary time evolution using compressed quantum circuits on NISQ hardware was demonstrated in <cit.>.

The QITE algorithm starts by expressing the Hamiltonian in terms of local terms and employs Trotterization to implement U(τ). The non-unitary evolution over a short imaginary-time interval is then approximated by a unitary operator. This unitary operator is expressed in terms of Pauli spin operators, with coefficients determined from measurements on quantum hardware. This approach was employed in Ref. <cit.> to solve combinatorial optimization problems. For the approximate unitary operator, an Ansatz was used that was linear in the Pauli operators, and therefore its implementation required no entanglement of qubits. The method was applied to the MaxCut problem on thousands of randomly selected unweighted graphs with up to fifty vertices. Results compared favorably with the performance of classical algorithms, such as the greedy <cit.> and Goemans-Williamson (GW) <cit.> algorithms. The overlap of the final state of the QITE algorithm with the ground state was also discussed as a performance metric, a quantum feature not shared by classical algorithms. These results indicate that the linear QITE method is efficient and that quantum advantage due to entanglement is likely to be found only for larger graphs (N≳ 100), which require deep quantum circuits that cannot currently be handled by NISQ hardware.

Given the demonstrated success of QITE based on a linear Ansatz for MaxCut problems on unweighted graphs <cit.>, it is crucial to assess its efficacy on more general polynomial unconstrained binary optimization (PUBO) problems, a class that contains the more popular quadratic unconstrained binary optimization (QUBO) problems. This analysis is important for assessing the utility of NISQ devices. Here we extend the results of Ref. <cit.> to solve PUBO problems using QITE. We concentrate on two problems: (a) the MaxCut problem on weighted graphs, and (b) the Low Autocorrelation Binary Sequences (LABS) problem. In both cases we map the problem onto an Ising-type Hamiltonian. For the MaxCut problem on weighted graphs, we tested QITE with a separable linear Ansatz on graphs with up to 150 vertices (qubits), and found that QITE often outperformed the GW algorithm, attaining an average AR of 0.95 for N>100 vertices. Even as the energy gaps between the ground and first excited state became small, QITE was able to find a state with an energy equal to or lower than that of GW. It should be noted that QITE with a separable Ansatz can be simulated efficiently classically.
Our results indicate that quantum advantage in combinatorial optimization problems will be hard to witness on NISQ devices.

In the LABS problem, complexity grows quickly with N. Optimal solutions are only known for N≤66, so LABS is a promising candidate for quantum advantage, as no efficient classical solutions currently exist <cit.>. The relevant regime where classical heuristics produce poor solutions is N≈ 200, so the number of qubits required for a solution to a classically intractable problem is on the order of hundreds of qubits. In Ref. <cit.>, a scaling advantage over classical algorithms was obtained for problem sizes up to 40 qubits using QAOA up to level p=40, yielding exact solutions. We applied linear-Ansatz QITE and obtained a probability of measuring the ground state, P(GS), comparable with level p=10 QAOA results, at relatively low circuit depth and hardware requirements. We also investigated the performance of an entangling quadratic Ansatz with QITE, but found no significant advantage over the linear-Ansatz QITE for N<10.

Our discussion is organized as follows. In Section <ref>, we discuss the QITE procedure and its hardware requirements. In Section <ref>, we define the weighted MaxCut problem and give results on graphs up to 150 vertices (qubits). In Section <ref>, we define the LABS problem, discuss how we solve it with QITE, and present results for problem size up to 28 qubits. Finally, in Section <ref>, we summarize our results and discuss further research directions.

§ METHOD

In this section, we review the QITE procedure introduced in Ref. <cit.> and discuss the hardware requirements of the algorithm. To implement QITE, we perform evolution in small imaginary-time intervals τ. In the zero-temperature limit, the ground state of the Hamiltonian ℋ is obtained from any state |Ψ⟩ as

|Ω⟩ = lim_β→∞ e^-βℋ|Ψ⟩ / ‖ e^-βℋ|Ψ⟩‖

as long as ⟨Ω|Ψ⟩ ≠ 0. The Hamiltonian is defined on a graph G ≡ (V,E), where V (E) is the set of vertices (edges). The system consists of qubits lying on the vertices of the graph G. We initialize the system in the state |Ψ[0]⟩, which can be chosen arbitrarily, as long as it has finite overlap with the ground state of the system. Suppose that after s-1 steps we arrive at the state |Ψ[s-1]⟩. In the sth step, we approximate the evolution of |Ψ[s-1]⟩ over a (small) imaginary time τ by the action of the unitary e^-iτ A[s], where A[s] is a Hermitian operator. Thus, after s steps, we arrive at the state

|Ψ[s]⟩ = e^-iτ A[s]|Ψ[s-1]⟩ = ∏_s'=1^s e^-iτ A[s']|Ψ[0]⟩

For the unitary updates, we adopt the linear Ansatz

A[s] = ∑_j∈ V a_j[s] Y_j

This unitary update is optimal when the coefficients a_j[s] obey the linear system of equations <cit.>

S·a = b , S_ij[s] = ⟨ Y_i Y_j ⟩ , b_j[s] = -(i/2)⟨ [ ℋ , Y_j ] ⟩

where all expectation values are evaluated with respect to the state |Ψ[s-1]⟩ obtained in the previous step. We choose the initial state to be a tensor product of eigenstates of X and Z,

|Ψ[0]⟩ = ⊗_j=1^|V| |s_j⟩ , |s_j⟩∈{|0⟩, |1⟩, |+⟩, |-⟩}

where |±⟩ = 1/√(2) (|0⟩±|1⟩), introducing no entanglement. Consequently, we have S=𝕀, and therefore a=b. Since all unitary updates commute with each other, we may write the state after s steps as

|Ψ[s]⟩ = e^-iτ𝒜[s] |Ψ[0]⟩ , 𝒜[s] = ∑_s'=1^s A[s']

In terms of hardware requirements, at each QITE step we need only measure b, which requires N measurements due to basis rotations. In the results shown in the following sections, the value of the small imaginary-time parameter τ is chosen by increasing it in steps of Δβ = τ/T, where T≪τ, starting from zero. At each increase, the cost function ⟨ℋ⟩ is measured, and the process is continued until the cost function (energy) starts increasing. At that point, the previous value of τ is selected. We choose a maximum β_max and perform fewer than β_max/Δβ measurements. Since the applications of e^-i a_j[s] Y_j commute, the circuit depth does not scale with the number of steps; the number of measurements scales linearly with the number of steps.
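To make the update rule concrete, below is a minimal dense-statevector sketch of linear-Ansatz QITE for an Ising-type Hamiltonian (our illustration, not the code used to produce the results in this paper; all names are ours). It evaluates every b_j = -(i/2)⟨[ℋ, Y_j]⟩ with respect to the current state and then applies the commuting single-qubit rotations e^-iτ b_j Y_j:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])

def embed(op, j, n):
    """Single-qubit operator `op` acting on qubit j of an n-qubit register."""
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        out = np.kron(out, op if k == j else I2)
    return out

def ising_h(weighted_edges, n):
    """H = sum_{(i,j)} w_ij Z_i Z_j as a dense matrix (small n only)."""
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i, j, w in weighted_edges:
        H += w * embed(Z, i, n) @ embed(Z, j, n)
    return H

def qite_step(psi, H, n, tau):
    """One linear-Ansatz update: measure all b_j, then rotate each qubit."""
    b = [np.real(-0.5j * np.vdot(psi, (H @ embed(Y, j, n)
                                       - embed(Y, j, n) @ H) @ psi))
         for j in range(n)]
    for j in range(n):
        Yj = embed(Y, j, n)
        # Y_j^2 = 1, so exp(-i t Y_j) = cos(t) I - i sin(t) Y_j.
        psi = np.cos(tau * b[j]) * psi - 1j * np.sin(tau * b[j]) * (Yj @ psi)
    return psi  # updates are unitary, so the norm is preserved

# Toy run: 3-vertex weighted triangle, initial state |0,+,+>.
n, tau = 3, 0.15
H = ising_h([(0, 1, 0.7), (1, 2, 0.4), (0, 2, 0.9)], n)
zero = np.array([1.0, 0.0], dtype=complex)
plus = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
psi = np.kron(zero, np.kron(plus, plus))
for _ in range(25):
    psi = qite_step(psi, H, n, tau)
print("final energy:", np.real(np.vdot(psi, H @ psi)))
```

On hardware, the matrix commutators above are replaced by the N expectation-value measurements described in the text; the dense form is practical only for small N.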
§ WEIGHTED MAXCUT

Here we define the weighted MaxCut problem and the technique of imaginary-time-dependent (ITD) edges that we use to improve convergence. We present results for graphs up to 150 vertices (qubits), and find that QITE sometimes outperforms the classical GW algorithm <cit.>.

Given a graph G = (V, E) consisting of a set of vertices V and edges E joining the vertices, the unweighted MaxCut problem on G is the combinatorial optimization problem of partitioning V into two disjoint sets such that the number of edges with endpoints in each set, C, is maximized (C = C_max). It can be formulated as a Hamiltonian ground-state problem by associating a qubit with every vertex in V and defining the Hamiltonian

ℋ = ∑_(i,j)∈ E Z_i Z_j

where Z_i is the Pauli Z matrix acting on the ith qubit. The ground state energy ℰ_0 of ℋ is related to C_max by

C_max = (|E| - ℰ_0)/2

For the weighted MaxCut problem, the edges (i,j)∈ E of the graph G^w have associated weights w_ij, and the Hamiltonian is modified as

ℋ^w = ∑_(i,j)∈ E w_ij Z_i Z_j

The ground state energy ℰ_0^w is related to the maximum cut C_max^w as

C^w_max = ( ∑_(i,j)∈ E w_ij - ℰ_0^w )/2

We compare the ground state energy obtained from QITE to the ground state energy obtained by the classical GW algorithm <cit.>. Since for large graph sizes we cannot guarantee that GW produces the ground state energy, we define the approximation ratio (AR) as the energy obtained by QITE divided by the lowest energy produced by GW.

As in <cit.>, we use the method of ITD edges to improve convergence. This is done by interpolating between a Hamiltonian that corresponds to a subgraph of G^w and ℋ^w, using an ITD Hamiltonian ℋ^w[s] given by

ℋ^w[s] = ∑_(i,j)∈ E w_ij[s] Z_i Z_j

A subgraph is selected for which the corresponding weights vanish initially and are then increased with each step s, until w_ij[s] → w_ij after sufficiently many steps and ℋ^w[s] → ℋ^w.
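To illustrate the ITD-edge construction, the sketch below (ours; the linear ramp over a fixed subgraph is one simple schedule, not necessarily the exact one used for the results that follow) builds the step-dependent weights w_ij[s] and reads a partition off the final state from the signs of ⟨Z_i⟩:

```python
import numpy as np

def itd_weights(edges, weights, itd_edges, s, n_steps):
    """Weights at QITE step s: ITD edges ramp linearly from 0 up to w_ij."""
    ramp = min(s / n_steps, 1.0)
    return [w * ramp if e in itd_edges else w for e, w in zip(edges, weights)]

def cut_from_state(psi, n, edges, weights):
    """Partition from the signs of <Z_i>; returns (assignment, cut value)."""
    probs = np.abs(psi) ** 2
    z = np.empty(n)
    for i in range(n):
        # Qubit i is the ith tensor factor (big-endian index convention).
        bit = (np.arange(len(psi)) >> (n - 1 - i)) & 1
        z[i] = probs[bit == 0].sum() - probs[bit == 1].sum()
    side = z >= 0
    cut = sum(w for (i, j), w in zip(edges, weights) if side[i] != side[j])
    return side.astype(int), cut

# At step s, the QITE update uses H^w[s] built from itd_weights(...); after
# the last step, the partition is read off with cut_from_state(...).
```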
At this point (N = 30), the energy gap between the ground state and the first excited state is on the order of 0.01. For larger graphs with N≥20, the state produced by QITE sometimes converges to a state with lower energy than the lowest energy obtained from GW. To account for this, we compute the approximation ratio, the QITE energy divided by the GW energy, for graphs with 6<N<150 vertices for single selections of random subgraphs and initial states, shown in Fig. <ref>. For 10 QITE steps, the average AR is less than 0.90 for N>10, indicating that 10 steps are not sufficient to solve larger MaxCut problems. For 25 and 50 steps, we observe the AR initially decrease with graph size to a minimum of ∼0.90 at N=50 vertices, then increase for larger values of N to a maximum value of ∼0.96 (0.94) at N=125 vertices for 50 (25) steps. We attribute the decrease to QITE performing better at smaller N, where GW produces the exact solution, and the increase to GW finding sub-optimal solutions while QITE produces a state with finite overlap with a lower-energy solution. To this end, we also plot the best AR for each number of steps in Fig. <ref>. For N>10, the best AR of 25- and 50-step QITE remains above 1, indicating that QITE produces solutions better than those found by the classical GW algorithm. 10-step QITE has a best AR<1 for N>100, indicating that it produces solutions with higher energies than GW. For unweighted graphs, the average AR remains in the range 0.92–0.95 for all N, and the best AR exceeds 1.05 for N>30.

An example of a 20-vertex weighted graph where 50-step QITE outperforms the GW classical algorithm is shown in Fig. <ref>. The cuts chosen by both algorithms are indicated, in addition to the cuts unique to QITE and GW. QITE chooses a solution with 3 cuts different from those of GW. The difference in energy between QITE and GW is ΔE = -0.0404.

QITE with a linear Ansatz can be simulated efficiently classically, so the computation can be done for larger graph sizes than considered here without issue. Since here we chose 1 ITD edge, it would be interesting to see whether choosing more ITD edges, or a different scheme, would produce better results on larger graphs. We are currently investigating this, in addition to the scaling of QITE on graphs with 500+ vertices.

§ LABS

Here we discuss how we solve the Low Autocorrelation Binary Sequences (LABS) problem with QITE using both a linear and a quadratic Ansatz, present results for problem sizes up to 28 qubits, and compare them with recent QAOA results <cit.>. The goal of the LABS problem is to minimize the "sidelobe energy" for a system of N spins σ_i ∈ {+1,-1} with autocorrelations 𝒜_k(σ):

ℰ_sidelobe(σ) = ∑_k=1^N-1 𝒜_k^2(σ) , 𝒜_k(σ) = ∑_i=1^N-k σ_i σ_i+k

This can be mapped to the ground state problem of the quantum Hamiltonian

ℋ^LABS = 2 ∑_i=1^N-3 ∑_t=1^⌊(N-i-1)/2⌋ ∑_k=t+1^N-i-t Z_i Z_i+t Z_i+k Z_i+t+k + ∑_i=1^N-2 ∑_k=1^⌊(N-i)/2⌋ Z_i Z_i+2k

The quality of a solution can be quantified by the overlap with the ground states, P(σ), and the ratio of the measured cost function over the exact solution, ⟨ℋ^LABS⟩/C_max. Note that in the literature, the merit factor is also used to quantify the quality of a solution |ψ⟩, given by

ℱ(|ψ⟩) = N^2/(2⟨ℋ⟩) = ∑_σ P(σ) ℱ(σ) , ℱ(σ) = N^2/(2ℰ_sidelobe(σ))

As with weighted MaxCut, we introduce imaginary time dependence to improve convergence to the ground state instead of an excited state; before specifying the ITD schedule, we note that the classical cost function itself is cheap to evaluate (see the sketch below).
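The following minimal numpy sketch (our own; names are not from the paper) evaluates ℰ_sidelobe and the merit factor, and brute-forces the ground state for small N, which is how solutions can be checked against a solution bank at these sizes:

```python
import numpy as np

def sidelobe_energy(s):
    # E_sidelobe(sigma) = sum_{k=1}^{N-1} A_k(sigma)^2,
    # with A_k(sigma) = sum_{i=1}^{N-k} sigma_i sigma_{i+k}
    s = np.asarray(s, dtype=float)
    N = len(s)
    return sum(np.dot(s[:N - k], s[k:]) ** 2 for k in range(1, N))

def merit_factor(s):
    # F(sigma) = N^2 / (2 E_sidelobe(sigma))
    return len(s) ** 2 / (2.0 * sidelobe_energy(s))

def labs_ground_state(N):
    # exhaustive search over all 2^N sequences (feasible only for small N)
    best_s, best_e = None, np.inf
    for m in range(2 ** N):
        s = 1 - 2 * np.array([(m >> b) & 1 for b in range(N)])
        e = sidelobe_energy(s)
        if e < best_e:
            best_s, best_e = s, e
    return best_s, best_e

s_opt, e_opt = labs_ground_state(10)
print(e_opt, merit_factor(s_opt))
```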
Since QITE with a linear Ansatz performs well for Hamiltonians with quadratic terms, we introduce an ITD coefficient α[s] multiplying the quartic terms of the Hamiltonian and define the ITD Hamiltonian

ℋ^LABS[s] = 2α[s] ∑_i=1^N-3 ∑_t=1^⌊(N-i-1)/2⌋ ∑_k=t+1^N-i-t Z_i Z_i+t Z_i+k Z_i+t+k + ∑_i=1^N-2 ∑_k=1^⌊(N-i)/2⌋ Z_i Z_i+2k , α[s] = (a⌊s/a⌋)/n_steps

where s are the integer time indices (s=0,…,n_steps), and a determines whether α[s] is piecewise constant or linear (a=1) with respect to s. To improve the results further, we can add another time-dependent parameter, β[s], that depends on the range of the quartic interaction terms, slowly adding increasingly nonlocal Hamiltonian terms:

ℋ^LABS[s] = 2α[s] ∑_i,t,k β_itk[s] Z_i Z_i+t Z_i+k Z_i+t+k + ∑_i,k Z_i Z_i+2k ,

where

β_itk[s] = (b⌊s/b⌋)/n_steps if max(t, k, t+k, |t-k|) > R_max ; 1 otherwise.

We reference a LABS solution bank <cit.> when computing the approximation ratio and the overlap with the ground state, P(GS), which is a quantum metric. Using Eq. (<ref>), we perform the QITE algorithm with 40 steps on 50 random initial states composed of |0⟩ and |+⟩ for 6≤N≤21. We first compute the approximation ratio by dividing the resulting QITE energy by the energy obtained from the solution bank. The average and maximum AR over the 50 initial states are plotted in Fig. <ref>. The probability of measuring the ground state, P(GS), is given in Fig. <ref>. We plot the QITE results alongside the P(GS) values for p=10 QAOA obtained by <cit.>. We find the average P(GS) of the linear Ansatz QITE comparable to that of p=10 QAOA, while the best-case P(GS) consistently exceeds both values.

In Ref. <cit.>, LABS was solved exactly with simulated noiseless QAOA for problem sizes up to N=40. However, the authors used QAOA levels up to p=40, which results in a gate depth on the order of 10^3 for problem size N=18. This is challenging to execute on NISQ devices or classical simulators. For comparison, linear Ansatz QITE requires shallow quantum circuits and readily available classical computing resources.

Since the linear Ansatz QITE does not give P(GS)=1 as in weighted MaxCut, we considered a higher-order quadratic Ansatz. The details of the formulation are as follows. For the quadratic Ansatz, we used one containing the linear terms in addition to terms quadratic in Y:

A_quad[s] = ∑_j∈V a_0,j[s] Y_j + ∑_i<j∈V a_i,j[s] Y_i Y_j

It is convenient to define the vector consisting of all operators appearing in Eq. (<ref>),

𝒴 ≡ (Y_1, Y_2, …, Y_N, Y_1Y_2, Y_1Y_3, …, Y_1Y_N, Y_2Y_3, …),

where the quadratic terms contain all unique choices of pairs of vertices. Evidently, this vector is of length N(N+1)/2. As in the linear Ansatz case, the coefficients a_I[s], where I = {i,j}, obey the linear system of equations

S·a = b

where we defined

S_IJ[s] = ⟨𝒴_I 𝒴_J⟩ , b_J[s] = -i/2 ⟨[ℋ^LABS, 𝒴_J]⟩

where all expectation values are evaluated with respect to the state |Ψ[s-1]⟩ obtained in the previous step. If the initial state is chosen to be a tensor product of eigenstates of X and Z, then we have S=𝕀, and therefore a=b. The algorithm then proceeds as in the linear Ansatz case, with updates of the form

|Ψ[s]⟩ = e^-iτ A_quad[s]|Ψ[s-1]⟩

The quadratic Ansatz gives a minor improvement in P(GS) over the linear Ansatz, as shown in Fig. <ref>. Therefore, we do not see a benefit to including higher-order terms in the Ansatz in the range 5<N<10. It is conceivable that there may be benefits at larger problem sizes beyond our current resources to test. In <cit.>, the scaling advantage of QAOA over classical solvers was found in the range 28≤N≤40, which is larger than the problem sizes we tested.
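For reference, the operator pool 𝒴 and the coefficient vector b of the quadratic Ansatz just described can be enumerated as follows. This is a dense statevector sketch of our own (names are ours), intended only to make the N(N+1)/2-element structure explicit:

```python
import numpy as np
from itertools import combinations
from functools import reduce

I2 = np.eye(2)
Y = np.array([[0, -1j], [1j, 0]])

def embed(ops, n):
    # kron together per-qubit operators, identity on unspecified qubits
    return reduce(np.kron, [ops.get(q, I2) for q in range(n)])

def quadratic_pool(n):
    # the vector Y = (Y_1,...,Y_N, Y_1Y_2, ..., Y_{N-1}Y_N), length N(N+1)/2
    pool = [embed({j: Y}, n) for j in range(n)]
    pool += [embed({i: Y, j: Y}, n) for i, j in combinations(range(n), 2)]
    return pool

def quad_coefficients(psi, H, pool):
    # b_J = -i/2 <[H, Y_J]>; with a product X/Z-eigenstate start, a = b
    return np.array([(-0.5j * np.vdot(psi, (H @ P - P @ H) @ psi)).real
                     for P in pool])
```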
Given the above, the performance of linear and quadratic Ansatz QITE in the range 28≤N≤40 remains to be investigated against QAOA and classical algorithms. The quadratic Ansatz simulation requires significantly more computational resources and precision than the linear Ansatz, but we are currently working on extending our results to larger problem sizes.

§ CONCLUSION

The QITE algorithm effectively cools a system to zero temperature, at which the system settles into its ground state, reaching the lowest energy level of its Hamiltonian. By encoding combinatorial optimization problems in terms of a quantum Hamiltonian, one can solve these problems by finding the ground state of the Hamiltonian corresponding to the optimal solution. QITE allows for a flexible Ansatz, which can be chosen to be separable or entangling. Although the solutions of combinatorial optimization problems involve separable states, quantum algorithms such as QAOA introduce entangling operations to solve these problems. In this work, we investigated the performance of QITE on polynomial unconstrained binary optimization (PUBO) problems, concentrating on the weighted MaxCut and LABS combinatorial optimization problems. Our method is a generalization of the approach introduced in Ref. <cit.>, which was successfully applied to unweighted MaxCut. In general, we expected an increased difficulty in PUBO cases due to smaller gaps between the ground and first excited state energies. We compared the performance of QITE on NWS graphs with up to 150 vertices with the commonly used classical algorithm GW, which is widely believed to offer the best performance guarantee. We found that QITE with a linear Ansatz and ITD edges often outperforms the classical GW algorithm on weighted MaxCut instances after a sufficient number of steps, yielding an average AR of ∼0.95 for N>100 vertices. Our separable Ansatz can be simulated efficiently classically. In Ref. <cit.>, the performance of noiseless QAOA was assessed on MaxCut problems with up to 17 vertices, and it was observed that level p=5 was required for results comparable with GW. Especially for larger graphs, this is outside the range of NISQ devices. In Ref. <cit.>, it was estimated that graph sizes of several hundreds to thousands of vertices are required for quantum advantage over classical solvers on the MaxCut problem. Our results indicate that much larger graphs need to be considered in order to observe quantum advantage. Given the attendant increase in the depth of the quantum circuit, such a quantum computation may not be suitable for NISQ devices. It is important to analyze the applicability of our method further to better assess the utility of NISQ devices.

We also analyzed the performance of QITE on the LABS problem. This is a PUBO problem, as the quantum Hamiltonian encoding the LABS problem includes quartic terms in the Pauli Z matrices. We tested the performance of QITE with a separable linear Ansatz and a randomly chosen separable initial state. We compared to known solutions of the problem from a solution bank, and calculated the AR and the probability of measuring the ground state, P(GS). Although in general we did not obtain convergence to the ground state, we found that the average P(GS) of the linear Ansatz QITE with 40 steps was comparable to that of p=10 QAOA <cit.>. More importantly, the best-case P(GS) for each problem size was consistently higher than the QAOA results, indicating that an appropriate choice of initial state gives a high probability of solving the LABS problem.
Expecting improvement, and possibly quantum advantage, from an entangling Ansatz containing higher-order terms, we considered an entangling Ansatz with both linear and quadratic terms. Comparing it to the separable linear Ansatz, we found that the entangling Ansatz gave a P(GS) no more than 10% higher than the result from the linear Ansatz for problem sizes up to N=9. Thus, the entangling quadratic Ansatz did not perform significantly better than the separable linear Ansatz in the problem size range we tested, indicating that larger problem sizes might need to be explored to see a significant advantage due to quantum entanglement. Obtaining quantum advantage in combinatorial optimization problems appears to require larger problem sizes than those that can be handled by NISQ devices, or, perhaps, more complicated PUBO problems than the ones studied here. Both research directions, i.e., searching for a combinatorial optimization problem where an entangling Ansatz significantly outperforms the linear Ansatz even at problem sizes small enough to be implementable on NISQ devices, as well as simulating problems among those considered here large enough that an entangling Ansatz outperforms the linear Ansatz, are currently being pursued.

We thank Phillip C. Lotshaw for useful discussions. This work was supported by the DARPA ONISQ program under award W911NF-20-2-0051, and NSF award DGE-2152168.

[1] F. Barahona, J. Phys. A: Math. Gen. 15, 3241 (1982).
[2] E. Farhi, J. Goldstone, and S. Gutmann, arXiv:1411.4028 (2014).
[3] M. Dupont, B. Evert, M. J. Hodson, B. Sundar, S. Jeffrey, Y. Yamaguchi, D. Feng, F. B. Maciejewski, S. Hadfield, M. S. Alam, Z. Wang, S. Grabbe, P. A. Lott, E. G. Rieffel, D. Venturelli, and M. J. Reagor, Science Advances 9, eadi0487 (2023).
[4] S. McArdle, T. Jones, S. Endo, Y. Li, S. C. Benjamin, and X. Yuan, npj Quantum Information 5, 75 (2019).
[5] M. J. Beach, R. G. Melko, T. Grover, and T. H. Hsieh, Phys. Rev. B 100, 094434 (2019).
[6] P. J. Love, Nature Physics 16, 130 (2020).
[7] M. Motta, C. Sun, A. T. Tan, M. J. O'Rourke, E. Ye, A. J. Minnich, F. G. Brandao, and G. K.-L. Chan, Nature Physics 16, 205 (2020).
[8] N. Gomes, F. Zhang, N. F. Berthusen, C.-Z. Wang, K.-M. Ho, P. P. Orth, and Y. Yao, J. Chem. Theory Comput. 16, 6256 (2020).
[9] K. Yeter-Aydeniz, R. C. Pooser, and G. Siopsis, npj Quantum Information 6, 1 (2020).
[10] K. Yeter-Aydeniz, B. T. Gard, J. Jakowski, S. Majumder, G. S. Barron, G. Siopsis, T. S. Humble, and R. C. Pooser, Advanced Quantum Technologies, 2100012 (2021).
[11] S. Barison, D. E. Galli, and M. Motta, arXiv:2011.08137 (2020).
[12] H. Kamakari, S.-N. Sun, M. Motta, and A. J. Minnich, arXiv:2104.07823 (2021).
[13] J.-L. Ville, A. Morvan, A. Hashim, R. K. Naik, B. Mitchell, J.-M. Kreikebaum, K. P. O'Brien, J. J. Wallman, I. Hincks, J. Emerson, et al., arXiv:2104.08785 (2021).
[14] C. Cao, Z. An, S.-Y. Hou, D. L. Zhou, and B. Zeng, "Quantum imaginary time evolution steered by reinforcement learning," arXiv:2105.08696 (2021).
[15] H. Nishi, T. Kosugi, and Y.-i. Matsushita, npj Quantum Information 7, 85 (2021).
[16] S.-H. Lin, R. Dilip, A. G. Green, A. Smith, and F. Pollmann, PRX Quantum 2, 010342 (2021).
[17] R. Alam, G. Siopsis, R. Herrman, J. Ostrowski, P. C. Lotshaw, and T. S. Humble, Quantum Information Processing 22, 281 (2023).
[18] H. Schröder, A. May, I. Vrt'o, and O. Sýkora, in SOFSEM'97: Theory and Practice of Informatics, edited by F. Plášil and K. G. Jeffery (Springer, Berlin, Heidelberg, 1997), pp. 547–554.
[19] S. Kahruman, E. Kolotoglu, S. Butenko, and I. Hicks, Int. J. of Computational Science and Engineering 1 (2007).
[20] C. Mathieu and W. Schudy, in Proceedings of the Nineteenth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA '08) (SIAM, Philadelphia, 2008), pp. 176–182.
[21] Y. Bian, A. Gronskiy, and J. M. Buhmann, in 2015 IEEE Information Theory Workshop (ITW) (2015), pp. 1–5.
[22] M. X. Goemans and D. P. Williamson, in Proceedings of the Twenty-Sixth Annual ACM Symposium on Theory of Computing (1994), pp. 422–431.
[23] R. Shaydulin, C. Li, S. Chakrabarti, M. DeCross, D. Herman, N. Kumar, J. Larson, D. Lykov, P. Minssen, Y. Sun, Y. Alexeev, et al., "Evidence of scaling advantage for the quantum approximate optimization algorithm on a classically intractable problem," arXiv:2308.02342 (2023).
[24] B. Bošković, F. Brglez, and J. Brest, "A GitHub archive for solvers and solutions of the LABS problem," https://github.com/borkob/git_labs (2016).
[25] G. E. Crooks, arXiv:1811.08419 (2018).
[26] G. G. Guerreschi and A. Y. Matsuura, Scientific Reports 9 (2019).

| http://arxiv.org/abs/2312.16664v1 | {
"authors": [
"Nora M. Bauer",
"Rizwanul Alam",
"James Ostrowski",
"George Siopsis"
],
"categories": [
"quant-ph"
],
"primary_category": "quant-ph",
"published": "20231227181812",
"title": "Combinatorial optimization with quantum imaginary time evolution"
} |
Pano-NeRF: Synthesizing High Dynamic Range Novel Views with Geometry from Sparse Low Dynamic Range Panoramic Images
Zhan Lu, Qian Zheng, Boxin Shi, Xudong Jiang
====================================================================================================================
Panoramic imaging research on geometry recovery and High Dynamic Range (HDR) reconstruction is becoming a trend with the development of Extended Reality (XR). Neural Radiance Fields (NeRF) provide a promising scene representation for both tasks without requiring extensive prior data. However, in the case of inputting sparse Low Dynamic Range (LDR) panoramic images, NeRF often degrades with under-constrained geometry and is unable to reconstruct HDR radiance from LDR inputs. We observe that the radiance from each pixel in panoramic images can be modeled as both a signal to convey scene lighting information and a light source to illuminate other pixels. Hence, we propose the irradiance fields from sparse LDR panoramic images, which increases the observation counts for faithful geometry recovery and leverages the irradiance-radiance attenuation for HDR reconstruction. Extensive experiments demonstrate that the irradiance fields outperform state-of-the-art methods on both geometry recovery and HDR reconstruction and validate their effectiveness. Furthermore, we show a promising byproduct of spatially-varying lighting estimation. The code is available at <https://github.com/Lu-Zhan/Pano-NeRF>.

§ INTRODUCTION

Panoramic imaging stands as a trend with the rise of extended reality (XR) for achieving immersive experiences, such as virtual walks in 360° scenes and inserting virtual objects with 360° lighting information. These XR applications motivate panoramic imaging techniques, particularly the tasks of geometry recovery <cit.> and High Dynamic Range (HDR) reconstruction <cit.> in a panoramic scene. Previous research yields promising outcomes via supervised learning on extensive pre-collected datasets or under specific predetermined conditions, e.g., a stereo camera system for geometry recovery <cit.> or controllable multi-exposure capture for HDR reconstruction <cit.>. However, the efficacy of these approaches is profoundly influenced by the quality of the underlying training data and the imposed conditions.

Recently, Neural Radiance Fields (NeRF) <cit.> emerged as a promising scene representation to recover geometry and radiance information through self-supervised training from multi-view images, which avoids extensive pre-collected data requirements. Later research on sparse-view NeRF further improves the practicability of the NeRF technique by reducing the number of multi-view images, where the recovered geometry is constrained by several priors, e.g., depth <cit.>, visibility <cit.>, and semantic features <cit.>.
Besides, several NeRF-based models exhibit the capability to reconstruct HDR radiance by requiring casually multi-exposed LDR images <cit.> or a pre-trained HDR reconstruction model <cit.>. However, NeRF-based models encounter two primary limitations: 1) existing sparse-view NeRF methods might fail to recover accurate geometry for a panoramic scene, since they use priors derived from objects rather than panoramic scenes, and the scale variety of nearby/far objects in the panoramic scene further compounds the difficulty <cit.>; 2) they are unable to reconstruct HDR radiance from LDR inputs captured under a fixed exposure, due to the lack of a mechanism to address the ill-posed problem of HDR reconstruction.

Conversely, panoramic images exhibit a remarkable feature: the radiance emitted by each pixel serves both as a signal, conveying scene lighting information to cameras through intrinsic factors (e.g., position, surface normal, and albedo), and as a light source, illuminating other pixels within the scene. Based on this observation, this paper introduces irradiance fields to model the inter-reflection in panoramic scenes via surface rendering <cit.>. The irradiance fields consider the irradiance received from all incident light directions upon a given surface point and integrate it with the intrinsic factors of the surface point to yield the observed outgoing radiance. Hence, the irradiance fields bring a distinct capability to recover faithful geometry, through the augmentation of the observation counts for volumetric particles with sparse inputs, and to reconstruct HDR radiance from multi-view LDR inputs, by considering irradiance-radiance attenuation, as shown in fig:irr_diff. Furthermore, considering this feature of the radiance within panoramic images, the proposed irradiance fields can share the same scene representation as the existing radiance fields. Therefore, we integrate our irradiance fields into radiance fields and perform joint optimization. Our contributions are summarized as

* We propose the irradiance fields from sparse LDR panoramic images and explain how irradiance fields contribute to geometry recovery and HDR reconstruction.
* We demonstrate that the irradiance fields can be integrated into and jointly optimized with radiance fields, based on which we propose Pano-NeRF for geometry recovery and HDR novel view synthesis.
* We show that Pano-NeRF achieves state-of-the-art performance on geometry recovery and HDR reconstruction. We further provide a byproduct of spatially-varying lighting estimation.

§ RELATED WORK

Panoramic imaging. We introduce previous works on geometry recovery and HDR reconstruction dealing with panoramic images. For geometry recovery, <cit.> requires a stereo panoramic camera setup. Several works estimate depth from video recorded under free viewpoints <cit.>. Recent works focus on single-image panoramic geometry recovery via deep learning techniques and share the idea of fusing perspective depth estimation for a panoramic image <cit.>. For HDR reconstruction, previous methods require multi-exposure LDR images for static poses <cit.> or dynamic camera poses <cit.>. Deep learning boosts the research on single-image HDR reconstruction on perspective images <cit.> or panoramic images <cit.>. Our method only requires sparse LDR panoramic images for geometry recovery and HDR reconstruction, avoiding extensive prior data or imposed conditions.

Neural radiance fields. For geometry recovery by NeRF-based models, we only study sparse-view conditions.
Several works solve this problem by aggregating prior knowledge through pre-trained conditional radiance fields, e.g., deep features <cit.> and 3D cost volumes <cit.>. Other works constrain appearance consistency on seen/unseen views to avoid pre-training <cit.>. However, such work still recovers blurry geometry, since appearance regularizers cannot directly constrain the scene geometry. Recent efforts applying geometry priors improve the recovered geometry, such as pre-estimated depth (DS-NeRF <cit.> and DD-NeRF <cit.>), depth smoothness (RegNeRF <cit.> and FreeNeRF <cit.>), and geometry visibility (Ref-NeRF <cit.> and ViP-NeRF <cit.>). 360FusionNeRF <cit.> requires a single RGBD panoramic image and trains NeRF by warping the RGBD inputs. IndoorPanoDepth <cit.> builds a Signed Distance Field (SDF) and recovers geometry at the input views from sparse panoramic images by introducing a good geometry initialization. For HDR reconstruction, Raw-NeRF <cit.> proposes a model working with raw images for HDR radiance fields. HDR-NeRF <cit.> builds HDR radiance fields with multi-exposure LDR inputs and paired exposure times. HDR-Plenoxels <cit.> reduces the exposure time requirement and is self-calibrated for multi-exposure inputs. PanoHDR-NeRF <cit.> reconstructs HDR radiance from HDR panoramic images pre-estimated by an existing panoramic HDR reconstruction method. The proposed irradiance fields help with geometry recovery and HDR reconstruction by modeling inter-reflection within a panoramic scene, which is free of any pre-trained model, prior training data, or extra information.

Implicit reflectance representation. Previous methods propose reflectance fields to estimate intrinsic factors under several conditions, e.g., known lighting <cit.> or known geometry <cit.>. Some methods distill the trained radiance fields to bake intrinsic factors and lighting conditions (NeRD <cit.> and NeRFactor <cit.>) or indirect illumination <cit.>. Besides, TexIR <cit.> models the irradiance information with several HDR panoramic images (e.g., 14), pre-estimated mesh-based geometry, and semantic priors. Our irradiance fields only require LDR inputs without any prior knowledge and exhibit the simplicity of sharing the scene representation with the same radiance fields.

§ IRRADIANCE FIELDS

§.§ Modeling Irradiance Fields

Preliminary of radiance fields. The radiance fields in NeRF <cit.> assume that the scene is composed of a cloud of light-emitting particles; the volumetric particles have emission and transmittance to pass through the radiance from other volumetric particles <cit.>, as shown in fig:irr_model (a). The outgoing radiance 𝐂^r of position 𝐱 is computed through volume rendering, i.e., by integrating the radiance 𝐜^r of different volumetric particles (weighted by w^r) distributed along the camera ray 𝐫 between viewpoint 𝐨 and 𝐱. Due to the unavailability of 𝐱, 𝐫 is formulated as 𝐫(t) = 𝐨 + tω_o, t∈[t_n, t_f], with view direction ω_o and near/far bounds t_n/t_f. The calculation of the outgoing radiance 𝐂^r is formulated as[We provide a more comprehensive formulation showing the relationship between different variables, according to the implementation in <cit.>.]

𝐂^r(𝐱) = 𝐂^r(𝐫) = ∫_t_n^t_f w^r(𝐨, ω_o, t) · 𝐜^r(𝐨, ω_o, t) dt, s.t. w^r(𝐨, ω_o, t) = exp(-∫_t_n^t σ(𝐨, ω_o, s) ds) · σ(𝐨, ω_o, t),

where 𝐜^r represents the radiance color of each volumetric particle, which determines the emission, and w^r is the weight calculated based on the density σ of the volumetric particles.
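For reference, the discretization of this volume-rendering integral used by NeRF-style implementations can be sketched as follows. This is our own illustrative numpy snippet (names are ours); the per-ray sample arrays are assumed to come from the MLP, and the same weights also give the expected ray depth used later for the surface point 𝐱:

```python
import numpy as np

def volume_render(sigma, c, t):
    # discretize C^r = ∫ w^r(t) c(t) dt, with
    # w^r(t) = exp(-∫_{t_n}^{t} sigma(s) ds) * sigma(t)
    dt = np.diff(t, append=t[-1] + (t[-1] - t[-2]))      # interval lengths
    trans = np.exp(-sigma * dt)                          # per-interval transmittance
    T = np.cumprod(np.concatenate(([1.0], trans)))[:-1]  # accumulated transmittance
    w = T * (1.0 - trans)                                # quadrature weights w^r
    color = (w[:, None] * c).sum(axis=0)                 # rendered radiance C^r
    depth = (w * t).sum()                                # expected ray depth
    return color, depth, w

# toy usage: 64 samples of a fuzzy "surface" at t ≈ 2.0
t = np.linspace(0.1, 4.0, 64)
sigma = 5.0 * np.exp(-((t - 2.0) / 0.1) ** 2)
c = np.tile([0.8, 0.5, 0.2], (64, 1))
rgb, depth, w = volume_render(sigma, c, t)
print(rgb, depth)
```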
The radiance fields in NeRF <cit.> are implemented with a multilayer perceptron (MLP) network, MLP(𝐫) → (𝐜^r, σ).

Irradiance fields formulation. Our irradiance fields assume that the scene is composed of several surface pieces with a reflection effect only, and the radiance is the interaction result of incoming light, reflectance, and surface normal, as shown in fig:irr_model (b). Different from the radiance fields, our irradiance fields compute the outgoing radiance of position 𝐱 through surface rendering <cit.>, i.e., by integrating the irradiance ω_i𝐧^⊤ · 𝐜^i over incoming light directions ω_i distributed on the sphere Ω centered at 𝐱,

𝐂^i(𝐱) = ∫_ω_i∈Ω f_r(𝐱, ω_o, ω_i) · ω_i𝐧^⊤ · 𝐜^i(𝐱, ω_i) dω_i,

where 𝐜^i indicates the radiance from each incoming light direction, f_r is the bidirectional reflection distribution function (BRDF), and 𝐧 is the surface normal at point 𝐱. Note that ω_i𝐧^⊤ is replaced by max(ω_i𝐧^⊤, 0) in the implementation.

Geometry recovery with sparse inputs. As illustrated in fig:irr_diff (left-top), sparse inputs provide a small number of observation counts, which might cause unconstrained geometry <cit.>. As can be observed from fig:irr_diff (left-bottom), the irradiance fields generate several rays along the incident light directions ω_i for each position 𝐱. 𝐜^i is equal to the radiance 𝐂^r of each incident ray, which can be calculated via eqn:volume. Thus, the proposed irradiance fields are expected to increase the observation counts for volumetric particles (fig:irr_diff (left)) by considering the inter-reflection between irradiance and outgoing radiance. Although 𝐜^i is not directly supervised, its integral is constrained through the training error between the observed radiance 𝐂_gt and 𝐂^i, hence facilitating the optimization of σ and 𝐜^r. Therefore, the proposed irradiance fields can achieve faithful geometry recovery even with sparse inputs.

HDR reconstruction with LDR inputs. The HDR radiance could be well restored from the LDR one if it is unsaturated (e.g., by applying inverse tone mapping <cit.>). The primary difficulty of HDR reconstruction comes from over-saturated regions, where the clipping operation drops vital information. Fortunately, over-saturated regions do not always cover the whole panoramic image (e.g., most are light sources in an indoor scene), and the proposed irradiance fields can leverage the radiance in unsaturated regions to reconstruct HDR radiance. As illustrated in fig:irr_diff (right), the basic idea is to back-propagate the training error (from points in unsaturated regions) between the observed radiance 𝐂_gt (from input images) and 𝐂^i to optimize the HDR 𝐜^i in over-saturated regions, based on eqn:irr_model. This can be achieved due to the attenuation effect brought by the BRDF (i.e., ϕ<1) and the cosine of the incident angle (i.e., ω_i𝐧^⊤<1). Additionally, 𝐜^i in over-saturated regions can be further optimized based on the observed radiance 𝐂^r, according to eqn:cc.

§.§ Optimizing Irradiance Fields

This section introduces how to integrate our irradiance fields into the radiance fields and perform joint optimization. Specifically, we focus on the calculation of the outgoing radiance 𝐂^i based on eqn:irr_model.

Obtaining intrinsic factors.
We follow a numerical solution <cit.> sharing a similar idea as the volume rendering <cit.> in eqn:volume to calculate the position of the surface point 𝐱 by

𝐱 = 𝐨 + (∫_t_n^t_f w^r(𝐨, ω_o, t) · t dt) ω_o,

where w^r(𝐨, ω_o, t) is the same as that in eqn:volume. As suggested by many scene understanding works (e.g., <cit.>), we assume the BRDF in our irradiance fields to be diffuse (i.e., the Lambertian model), i.e., f_r only depends on the position of the 3D point 𝐱 and is free from the view direction,

f_r(𝐱) = f_r(𝐫) = ∫_t_n^t_f w^r(𝐨, ω_o, t) · ϕ(𝐨, ω_o, t) dt,

where ϕ(𝐨, ω_o, t) is the albedo of a volumetric particle. Though the diffuse assumption limits the expression of non-Lambertian surfaces, it matters less since 1) non-Lambertian surfaces occupy less area than Lambertian ones in practice, and 2) we leverage the outputs from the radiance fields (a non-Lambertian model) as the reconstructed results. According to <cit.>, the density σ(𝐨, ω_o, t) of each volumetric particle can be leveraged to calculate its surface normal 𝐧(𝐨, ω_o, t),

𝐧(𝐨, ω_o, t) = -∇σ(𝐨, ω_o, t) / ‖∇σ(𝐨, ω_o, t)‖,

and the surface normal at position 𝐱 can be numerically calculated by

𝐧(𝐱) = ∫_t_n^t_f w^r(𝐨, ω_o, t) · 𝐧(𝐨, ω_o, t) dt.

Note that each surface normal is normalized to be a unit vector before further calculation.

Obtaining the radiance of incident light rays. As shown in fig:framework, we first sample the incident light directions ω_i at the surface point 𝐱 to determine the incident light rays 𝐫'(t) = 𝐱 + tω_i, then obtain the radiance 𝐜^i(𝐱, ω_i),

𝐜^i(𝐱, ω_i) = 𝐂^r(𝐫').

Calculating outgoing radiance. The outgoing radiance 𝐂^i(𝐱) is then calculated with the estimated BRDF f_r, surface normal 𝐧(𝐱), and the grouped radiance {𝐜^i} of incident light rays via surface rendering in eqn:irr_model.

In summary, the proposed irradiance fields can be jointly optimized with the radiance fields, since all variables except ϕ can be directly calculated from the radiance fields. We modify the MLP in a NeRF-based method to output an additional variable ϕ, and eqn:ra_func is revised as

MLP(𝐫) → (𝐜^r, σ, ϕ).

§.§ Pano-NeRF

The overview of the irradiance fields is displayed in fig:framework.

Network structure. We build our irradiance fields upon the model of Mip-NeRF <cit.> and modify it in the following aspects. First, we increase the output range of 𝐜^r to consider HDR radiance, by changing the output activation function from ReLU to Softplus. Second, we add an output ϕ∈ℝ^3 to the MLP for albedo estimation.

Geometry prior. The recovered geometry in the radiance fields often reveals a thick surface (`fog', as demonstrated in <cit.>, even with dense inputs) and a rough surface <cit.>. Such geometry might limit the reconstruction of the irradiance fields, as the irradiance fields require a rough geometry for ray sampling and surface rendering computation. Therefore, we adopt the geometry prior ℛ_v introduced in <cit.> to produce a thin and smooth surface, that is,

ℛ_v = ∑_𝐨, ω_o (∫_t_n^t_f w^r(𝐨, ω_o, t) · max(-ω_o𝐧^⊤(𝐨, ω_o, t), 0)^2 dt).

The minimization of ℛ_v ensures a thin and smooth surface by preventing the model from expressing too many visible volumetric particles around the surface.

Albedo priors. As we only leverage the observed pixel color for optimization, the freedom of the new variable ϕ might cause radiance color distortion, since the neural network can more easily express color variation through ϕ than through the irradiance. To mitigate the impact of ϕ, we adopt two priors to constrain the albedo estimation of our model; before detailing them, the sketch below illustrates how the intrinsic factors above are extracted numerically.
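This is our own illustrative numpy sketch (function and variable names are ours); in practice, the density gradients would come from automatic differentiation of the MLP:

```python
import numpy as np

def intrinsic_factors(sigma, phi, grad_sigma, t, o, w_dir):
    # weights w^r, as in the volume-rendering equation
    dt = np.diff(t, append=t[-1] + (t[-1] - t[-2]))
    trans = np.exp(-sigma * dt)
    w = np.cumprod(np.concatenate(([1.0], trans)))[:-1] * (1.0 - trans)
    x = o + (w * t).sum() * w_dir                        # surface point x
    f_r = (w[:, None] * phi).sum(axis=0)                 # diffuse albedo f_r(x)
    n_s = -grad_sigma / np.linalg.norm(grad_sigma, axis=1, keepdims=True)
    n = (w[:, None] * n_s).sum(axis=0)                   # aggregated normal
    return x, f_r, n / np.linalg.norm(n)                 # unit surface normal n(x)
```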
First, we limit the range of the estimated ϕ to [0.03, 0.8], as suggested in <cit.>. Second, we encourage the chromaticity of the estimated f_r to be similar to that of the corresponding observed radiance 𝐂_gt,

ℛ_c = ∑_𝐱∈Φ ‖ f_r(𝐱)/‖f_r(𝐱)‖ - 𝐂_gt(𝐱)/‖𝐂_gt(𝐱)‖ ‖_2^2,

where the set Φ contains the positions of all surface points in a scene. Such constraints on the albedo are able to maintain the chromaticity of the reconstructed radiance color, which is enough to produce pleasing results.

Loss function. We adopt the same loss function as Mip-NeRF to constrain the estimated radiance via volume rendering. To convert the HDR radiance output to LDR values, we apply empirical tone mapping <cit.>, clipping (into [0,1]), and gamma correction (factor of 2.2) to the output radiance to obtain the mapped LDR value. The MSE loss ℒ_r of the radiance fields is calculated in the coarse and fine phases,

ℒ_r = ∑_𝐨, ω_o (α_1 ‖g(𝐂_c^r(𝐨, ω_o)) - 𝐂_gt(𝐨, ω_o)‖_2^2 + ‖g(𝐂_f^r(𝐨, ω_o)) - 𝐂_gt(𝐨, ω_o)‖_2^2),

where 𝐂_c^r and 𝐂_f^r are the estimated radiance in the coarse and fine phases, respectively, and g(·) represents the HDR-to-LDR conversion. α_1 is set to 0.1, as in Mip-NeRF. The MSE loss ℒ_i of the irradiance fields is calculated only in the fine phase as

ℒ_i = ∑_𝐨, ω_o ‖g(𝐂^i(𝐨, ω_o)) - 𝐂_gt(𝐨, ω_o)‖_2^2.

The overall loss function is calculated with the geometry prior ℛ_v and the albedo prior ℛ_c,

ℒ = ℒ_r + α_2ℒ_i + α_3ℛ_v + α_4ℛ_c.

We empirically set α_2 = 1, α_3 = 0.1, and α_4 = 1 to balance the loss scales and stabilize model training.

§ EXPERIMENTS

§.§ Setup

Synthetic data. We render evaluation data from 5 3D scene models, including `classroom', `barbershop', `living room', `bedroom', and `gallery'. Besides, we collect data from the 3D-scanned HDR dataset Replica <cit.> and select 8 enclosed scenes, including `apartment', `hotel', `bathroom', `office-0', `office-1', `office-4', `room-0', and `room-1'. For each scene, we sample 100 camera poses and render the LDR panoramic image and the paired ground truths (i.e., HDR panoramic image, depth map, and surface normal map) at each sampled camera pose.

Real captured data. We capture 30 HDR panoramic images in 3 real scenes, including `real-meeting room', `real-bedroom', and `real-classroom', representing large, medium, and small indoor spaces. Besides, we scan each scene with the LiDAR camera of an iPhone 13 Pro to obtain the depth and surface normal for reference. We use the existing tool openMVG[<https://github.com/openMVG/openMVG>] to calibrate camera poses and merge HDR panoramic images from 9 bracketed exposures.

Evaluation settings. To evaluate the irradiance fields with sparse inputs, for each training run, we randomly select 3 LDR panoramic images to train the models, as the worst case in <cit.>. After optimization, the remaining panoramic images are used for evaluation. We repeat this procedure 10 times for each scene and compute the mean performance to alleviate the randomness of the results. The resolution of the panoramic images is set to 128 × 256.

Training details. The numbers of volumetric particles sampled along a camera ray and along an incoming lighting ray are 64 and 10, respectively. The number of sampled directions of incoming lighting rays towards each surface point 𝐱 is 80. We train Pano-NeRF with the following settings: 44k optimization iterations, using the Adam <cit.> optimizer with hyper-parameters β_1=0.9, β_2=0.999, ϵ=10^-6, and a log-linearly annealed learning rate from 2 × 10^-4 to 2 × 10^-5 with a warm-up phase of 2500 iterations.
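To make the HDR-to-LDR mapping g(·) and the combined objective concrete, here is a minimal sketch of ours. We assume a Reinhard-style x/(1+x) curve for the cited empirical tone mapping, and the regularizer values ℛ_v, ℛ_c are taken as precomputed scalars:

```python
import numpy as np

def hdr_to_ldr(C, gamma=2.2):
    # g(.): tone-map (assumed Reinhard-style x/(1+x)), clip to [0,1], gamma-correct
    x = C / (1.0 + C)
    return np.clip(x, 0.0, 1.0) ** (1.0 / gamma)

def total_loss(Cr_coarse, Cr_fine, Ci, C_gt, Rv, Rc,
               a1=0.1, a2=1.0, a3=0.1, a4=1.0):
    # L = L_r + a2*L_i + a3*R_v + a4*R_c, with L_r split into coarse/fine terms
    mse = lambda a, b: np.mean((hdr_to_ldr(a) - b) ** 2)
    L_r = a1 * mse(Cr_coarse, C_gt) + mse(Cr_fine, C_gt)
    L_i = mse(Ci, C_gt)
    return L_r + a2 * L_i + a3 * Rv + a4 * Rc
```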
Our model and its ablations use a batch size of 512 to keep the number of sampled volumetric particles per iteration close to that of Mip-NeRF <cit.>, whose batch size is 4096. We optimize only the radiance fields for the first 8.8k iterations and then perform joint optimization of the two fields.

§.§ Overall performance

Geometry recovery from sparse inputs. To validate the effectiveness of geometry recovery, we compare our method with two panoramic depth estimation methods, Omnifusion <cit.> and IndoorPanoDepth <cit.>. Omnifusion is a deep learning-based method trained with large-scale data, while IndoorPanoDepth is an SDF-based method using the same sparse panoramic inputs as Pano-NeRF. We also add two state-of-the-art sparse-view NeRF-based methods, RegNeRF <cit.> and FreeNeRF <cit.>. RegNeRF is trained only with its proposed geometry regularization, since appearance regularization is not available in its released code. We set the frequency attenuation as used for forward-facing scenes, as suggested in FreeNeRF. Besides, we take Mip-NeRF <cit.> and its ablated version Mip-NeRF w/ ℛ_v as the baseline methods. We report the same geometry metrics used in tab:ablation, and LDR image quality metrics including PSNR, SSIM <cit.>, and LPIPS <cit.>. Note that the geometry metrics are only computed on the synthetic datasets, as the real one has no aligned ground truth. As demonstrated in tab:nvs, Pano-NeRF outperforms the others on geometry recovery. fig:nvs demonstrates the qualitative comparison of geometry recovery, from which we draw the same conclusion that Pano-NeRF achieves more faithful depth and more accurate surface normals. Besides, Pano-NeRF provides the best performance on novel view synthesis, as shown in tab:nvs and fig:nvs (estimating clear panoramic images with fewer artifacts). The reason could be that well-recovered geometry is crucial to novel view synthesis in the case of sparse inputs. Furthermore, by comparing Mip-NeRF and Mip-NeRF w/ ℛ_v, we observe that directly applying the geometry prior ℛ_v only yields a smoother surface, with no improvement in depth estimation, and can even cause severe degradation on novel views (shown in fig:nvs). In contrast, the irradiance fields not only benefit from the smooth surface for better geometry but also prevent the collapse of geometry recovery. These results validate that the proposed irradiance fields do help to improve geometry recovery from sparse inputs.

HDR reconstruction from LDR inputs. To validate the effectiveness of HDR reconstruction, we compare our method with the state-of-the-art panoramic HDR reconstruction method LANet <cit.> and three NeRF-based methods: PanoHDR-NeRF <cit.>, RegNeRF <cit.>, and our baseline Mip-NeRF <cit.> with ℛ_v. Since RegNeRF is not implemented for HDR reconstruction, we take pre-estimated HDR panoramic images from LANet for its training, namely `RegNeRF w/ HDR'. In this experiment, only the reconstructed HDR panoramic images at the input viewpoints are evaluated for each method, to match the view requirement of LANet. We adopt commonly used HDR metrics for HDR image evaluation, including PU-PSNR <cit.>, PU-SSIM <cit.>, HDR-VDP3 <cit.>, and RMSE. tab:hdr demonstrates the quantitative comparison.
LANet reports limited HDR reconstruction performance, as it might suffer from poor generalization from its training data to the testing scenarios. The NeRF-based methods PanoHDR-NeRF and RegNeRF w/ HDR perform close to LANet, since they can only learn HDR information from the noisy pre-reconstructed HDR results. Besides, RegNeRF w/ HDR drops significantly in terms of PU-SSIM, which indicates that inconsistent HDR reconstruction across views might cause degradation in novel view synthesis. Mip-NeRF w/ ℛ_v demonstrates better results, as it takes advantage of the tone-mapping procedure. In contrast, Pano-NeRF achieves the best performance on HDR reconstruction, even compared with the baseline. As observed in fig:hdr, Pano-NeRF accurately predicts the light sources, which are often over-saturated, as its HDR reconstruction results are close to the ground truth. The reason could be that our irradiance fields constrain these over-saturated pixels with information learned from the unsaturated areas.

§.§ Validation of Irradiance Fields

To validate the effectiveness of the proposed irradiance fields, we take Mip-NeRF <cit.> as our baseline method representing radiance fields. Then, we simply integrate the proposed irradiance fields into the baseline, namely `Mip-NeRF + Irr', as the counterpart. Besides, we study the impact of the geometry prior ℛ_v, as it can be applied to both methods, namely `Mip-NeRF + ℛ_v' and `Mip-NeRF + Irr + ℛ_v', respectively. We report the linear Root Mean Square Error (RMSE) for the estimated depth, the Mean Angle Error (MAE) (in degrees) for the estimated surface normal (`normal' for short), and HDR-VDP3 <cit.> for HDR novel view synthesis. Note that we conduct this experiment only on the 13 synthetic scenes with geometry ground truth. As observed from tab:ablation, the irradiance fields benefit the performance on both tasks. Specifically, the irradiance fields significantly improve depth estimation and HDR novel view quality when simply added to the radiance fields with joint optimization. Besides, we find that ℛ_v can further boost the performance of the irradiance fields; in contrast, it only contributes a better surface normal when applied to the radiance fields alone.

We demonstrate an example of reconstructed geometry in fig:pc. We find that, with the help of the irradiance fields, the reconstructed geometry is more faithful to the ground truth than that of the radiance fields (Mip-NeRF) with sparse inputs. Besides, ℛ_v leads to a smoother surface, as shown in fig:pc, but might significantly degrade the depth estimation, which is not suitable for novel view synthesis with unconstrained depth. In contrast, the irradiance fields benefit from the smooth surface induced by ℛ_v and predict a more faithful depth.

§.§ Byproduct of Spatially-varying Lighting Estimation

We demonstrate that Pano-NeRF can be further used for spatially-varying lighting estimation, as a byproduct of HDR novel view synthesis. We compare Pano-NeRF with the aforementioned NeRF-based methods and several spatially-varying lighting estimation methods, including InvIndoor <cit.>, Lighthouse <cit.>, and LRG360 <cit.>[Implementation is at <https://github.com/Lu-Zhan/Pano-NeRF>.]. As shown in fig:sv_lit, Lighthouse and InvRender produce low-frequency lighting maps, and LRG360 introduces artifacts into the lighting maps.
For the NeRF-based methods, RegNeRF w/ HDR and PanoHDR-NeRF keep similar results on HDR novel view synthesis, while they suffer from inconsistent HDR reconstruction across different views and produce blurry lighting maps. Conversely, Pano-NeRF outperforms the others, as it provides spatially-varying HDR lighting maps with the richest details, close to the ground truth, which enables more realistic insertion of mirror-like objects with accurate specular reflections.

§ CONCLUSION

In summary, this paper introduces irradiance fields as a novel solution for geometry recovery and HDR reconstruction in the context of sparse LDR panoramic images. The irradiance fields consider the inter-reflection in panoramic scenes, to recover faithful geometry from sparse inputs by increasing the observation counts of volumetric particles, and to reconstruct accurate HDR radiance from LDR inputs by exploiting irradiance-radiance attenuation. We integrate the irradiance fields into a common scene representation of radiance fields and perform joint optimization. Extensive experiments demonstrate superior performance on both geometry recovery and HDR reconstruction and validate the effectiveness of the irradiance fields. We further provide a byproduct of spatially-varying lighting estimation.

§ ACKNOWLEDGMENTS

This research is supported in part by the Centre for Information Sciences and Systems (CISS) of the School of Electrical & Electronic Engineering and CARTIN at Nanyang Technological University, in part by the State Key Lab of Brain-Machine Intelligence at Zhejiang University, and in part by the National Natural Science Foundation of China (Grant Nos. 62088102 and 62136001). | http://arxiv.org/abs/2312.15942v1 | {
"authors": [
"Zhan Lu",
"Qian Zheng",
"Boxin Shi",
"Xudong Jiang"
],
"categories": [
"cs.CV",
"eess.IV"
],
"primary_category": "cs.CV",
"published": "20231226081022",
"title": "Pano-NeRF: Synthesizing High Dynamic Range Novel Views with Geometry from Sparse Low Dynamic Range Panoramic Images"
} |
Expressivity and Approximation Properties of Deep Neural Networks with ReLU^k Activation
Juncai He[1], Tong Mao[1], Jinchao Xu[1][2]
=========================================================================================
[1] Computer, Electrical and Mathematical Science and Engineering Division, King Abdullah University of Science and Technology, Thuwal 23955, Saudi Arabia.
[2] Department of Mathematics, The Pennsylvania State University, University Park, PA 16802, USA.

In this paper, we investigate the expressivity and approximation properties of deep neural networks employing the ReLU^k activation function for k ≥ 2. Although deep ReLU networks can approximate polynomials effectively, deep ReLU^k networks have the capability to represent higher-degree polynomials precisely. Our first contribution is a comprehensive constructive proof of polynomial representation using deep ReLU^k networks. This allows us to establish an upper bound on both the size and the count of network parameters. Consequently, we are able to demonstrate a suboptimal approximation rate for functions from Sobolev spaces as well as for analytic functions. Additionally, through an exploration of the representation power of deep ReLU^k networks for shallow networks, we reveal that deep ReLU^k networks can approximate functions from a range of variation spaces, extending beyond those generated solely by the ReLU^k activation function. This finding demonstrates the adaptability of deep ReLU^k networks in approximating functions within various variation spaces.

§ INTRODUCTION

Over the past decade, neural networks have achieved remarkable success in a variety of fields, including image recognition <cit.>, speech recognition <cit.>, and natural language processing <cit.>, among others. These achievements have sparked increased interest in the theoretical understanding of neural networks. Theoretically, the generalization error of a neural network can be decomposed into the sum of the approximation error and the sampling error, as detailed in works such as <cit.>. In practice, the reduction of the approximation error depends on carefully designed neural network architectures, whereas the control of the sampling error requires sufficient and reliable data resources. For instance, the approximation capabilities of neural networks with sigmoid activation functions were explored in various works <cit.>. Additionally, the universality of neural networks with non-polynomial activation functions was established in <cit.>. In recent years, due to their simple computational form and their potential to overcome the gradient vanishing problem, neural networks employing the Rectified Linear Unit (ReLU, defined as ReLU(x) = max{0,x}) and ReLU^k = (ReLU)^k activation functions have gained widespread use. This paper focuses on the expressivity and approximation properties of neural networks with ReLU^k activation functions.

Given the fact that any ReLU network is a continuous piecewise linear (CPwL) function, <cit.> initially demonstrated that any CPwL function on [0,1]^d can be represented by deep ReLU networks with ⌈log_2(d+1)⌉ hidden layers. Subsequently, <cit.> established that shallow ReLU networks cannot represent every piecewise linear function on [0,1]^d for d≥2. In contrast, deep ReLU networks with more layers (but fewer parameters) can recover any linear finite element function. Recently, <cit.> explored the optimal representation expressivity of continuous piecewise linear functions using deep ReLU networks on [0,1].
By integrating this expressivity with the Kolmogorov Superposition Theorem <cit.>, a significantly improved approximation rate was achieved. Most recently, through a detailed examination of the basis functions and the high-dimensional simplicial mesh characteristics of Lagrange finite element functions, which are a special type of piecewise polynomial functions, the authors in <cit.> demonstrated that Lagrange finite element functions of any order in arbitrary dimensions can be represented by DNNs with ReLU or ReLU^2 activation functions.

On the other hand, the approximation theory of shallow and deep neural networks with ReLU and ReLU^k activation functions has been studied from various aspects. For functions from the Barron space, it has been proved in <cit.> that shallow ReLU networks with 𝒪(N) parameters approximate uniformly with the rate 𝒪(N^{-1/2-1/d}). This result has been generalized to L^p-approximation, spectral Barron spaces, and ReLU^k activation functions in <cit.>. In particular, an optimal rate 𝒪̃(N^{-1/2-(2k+1)/(2d)}) was obtained for functions from the so-called spectral Barron spaces, where 𝒪̃ indicates up to a constant power of log N. Furthermore, for α-Hölder functions, <cit.> proved the approximation rate 𝒪̃(N^{-(α/d)·(d+2)/(d+4)}) by shallow ReLU networks. This result is generalized to shallow ReLU^k networks for k=0,1,2,… in <cit.>. In particular, for k=0,1, the rate is improved to the optimal rate 𝒪(N^{-α/d}).

Compared to shallow neural networks, deep network architectures have significant advantages, as demonstrated in <cit.>. Specifically, by using a bit-extraction technique, it has been proved that fully-connected deep ReLU networks with depth L and width N attain the optimal approximation rate 𝒪̃((N^2L^2)^{-α/d}) for α-Hölder functions <cit.>. For functions with specific compositional structures, deep ReLU networks can achieve significantly better expressivity or approximation accuracy than their shallow ReLU counterparts <cit.>. In particular, it has been shown that any function constructed by a shallow ReLU neural network can be represented by a deep ReLU neural network with width d+4 <cit.>, or by a deep convolutional neural network whenever the filter size is ≥2 <cit.>. In addition, it was demonstrated in <cit.> that sawtooth functions on [0,1] with 2^N oscillations can be represented by a deep ReLU network with 𝒪(N) parameters. In contrast, a shallow ReLU network would require 𝒪(2^N) parameters to construct such a function. Subsequent studies in <cit.> showed that multivariate polynomials can be approximated by deep ReLU networks exponentially fast, achieving an approximation rate of 𝒪̃(N^{-α/d}) for functions from C^α([0,1]^d). Later, a further study of approximating holomorphic functions by deep ReLU networks was introduced in <cit.>. Additionally, <cit.> demonstrated that the sparse grid basis can be approximated exponentially fast by deep ReLU networks, leading to a suboptimal approximation rate of 𝒪̃(N^{-2}) for functions from Korobov spaces. However, a foundational aspect of these results is a special expressivity property of deep ReLU networks. A more in-depth and systematic analysis of this result, from the hierarchical basis perspective, is presented in <cit.>. Given these findings, it is reasonable to anticipate that deep ReLU^k networks might also offer superior representation and approximation properties compared to their shallow counterparts and to deep ReLU networks. However, the expressivity and approximation properties of neural networks exclusively using the ReLU^k activation function are not extensively documented in the literature.
For example, it has been noted in studies such as <cit.> and <cit.> that ReLU^k networks can represent any global polynomial, with existence proofs provided to support this claim. Furthermore, the authors in <cit.> constructed a deep ReLU^2 network with 𝒪(n^d) neurons, demonstrating its capability to represent all polynomials of degree ≤ n on ℝ^d. In this paper, we generalize this to all ReLU^k networks, providing a constructive proof for polynomial representation and estimating an upper bound on the parameter size and count. Specifically, we show that shallow ReLU^k networks with 𝒪(k^d) parameters can represent polynomials of degree at most k in ℝ^d, and that deep ReLU^k networks of depth L with 𝒪(k^Ld) parameters can represent polynomials of degree ≤ k^L. Moreover, deep ReLU^k networks can approximate analytic functions and functions from Sobolev spaces with suboptimal rates. Additionally, we prove that deep ReLU^k networks with depth L and width 2(k+1)n can represent functions constructed by any shallow ReLU^{k^ℓ} network of width n, for any ℓ≤ L. Together with recent results on variation spaces <cit.>, we demonstrate an approximation error of 𝒪(N^{-1/2-(2k^ℓ+1)/(2d)}) for functions from the variation space 𝒦_1(ℙ_{k^ℓ}^d) for any ℓ≤ L. This underscores the adaptability of deep ReLU^k networks in approximating functions from variation spaces without precise knowledge of their regularity, providing new insights into the advantages of deep architectures with ReLU^k activations.

The rest of this section formally defines our ReLU^k network architectures, including bounds on the parameter size and count. Section <ref> presents the constructive proof for global polynomial representation by ReLU^k networks and discusses their approximation properties for analytic functions and functions from Sobolev spaces. Section <ref> explores the adaptive approximation properties of deep ReLU^k networks for functions in variation spaces. Section <ref> offers concluding remarks and reflections on our findings.

Let k∈ℕ. We denote by ReLU^k the univariate function

σ_k(x) = x^k for x≥0, and σ_k(x) = 0 for x<0.

In particular, the ReLU function is the ReLU^k function with k=1. For any vector v=(v_1,…,v_n), we abuse notation and denote the vector mapping σ_k(v)=(σ_k(v_1),…,σ_k(v_n)) ∈ℝ^n.

Let d,k,n∈ℕ^+, B>0, and let 𝔹^d be the unit ball of ℝ^d centered at the origin. Denote by Σ_n^k(B) the class of shallow ReLU^k networks on 𝔹^d,

Σ_n^k(B) = {x↦ c·σ_k(Ax+b) : A∈ℝ^{n×d}, b∈ℝ^n, c∈ℝ^n, ‖A‖_max≤ B, ‖b‖_∞≤ B},

where ‖A‖_max := max_{1≤i≤n, 1≤j≤d} |(A)_{ij}|. In addition, if we further require in (<ref>) that ‖c‖_∞ ≤ M for some M>0, we denote the set by Σ_{n,M}^k(B). This coincides with the notation in the previous works <cit.>.

The class of fully-connected deep ReLU^k networks with depth L and widths {n_i}_{i=1}^L is defined inductively as

Σ_{n_1:L}^k(B) = {x↦ c·h_L(x) : h_i(x)=σ_k(A_i h_{i-1}(x)+b_i), A_i∈ℝ^{n_i×n_{i-1}}, b_i∈ℝ^{n_i}, c∈ℝ^{n_L}, ‖A_i‖_max≤ B, ‖b_i‖_∞≤ B, i=1,…,L},

where n_0=d and h_0(x)=x. Similarly, if we further require the weights of the output layer to be bounded as ‖c‖_∞≤ M, we denote the class by Σ_{n_1:L,M}^k(B). Before introducing our results, we note that the number of parameters in the fully-connected deep ReLU^k network is n_L + ∑_{i=1}^L n_i(n_{i-1}+1). However, deep neural network structures are often considered to be sparse (e.g., <cit.>), i.e., most of the parameters are fixed to be 0. Such a network is called a sparse deep network. The sparsity remarkably decreases the number of parameters and, thus, the computation cost.
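For concreteness, evaluating members of Σ_n^k(B) and Σ_{n_1:L}^k(B) amounts to the following illustrative numpy sketch (ours; the bound constraints B and M are ignored here):

```python
import numpy as np

def relu_k(x, k):
    # sigma_k(x) = max(0, x)^k, applied entrywise
    return np.maximum(x, 0.0) ** k

def shallow(x, A, b, c, k):
    # a member of Sigma_n^k(B): f(x) = c . sigma_k(A x + b)
    return c @ relu_k(A @ x + b, k)

def deep(x, layers, c, k):
    # a member of Sigma_{n_1:L}^k(B): h_i = sigma_k(A_i h_{i-1} + b_i), f = c . h_L
    h = x
    for A, b in layers:
        h = relu_k(A @ h + b, k)
    return c @ h

# toy usage on the unit ball of R^2
rng = np.random.default_rng(0)
x = rng.normal(size=2); x /= 2 * np.linalg.norm(x)
A, b, c = rng.normal(size=(5, 2)), rng.normal(size=5), rng.normal(size=5)
print(shallow(x, A, b, c, k=2))
```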
The deep ReLU^k networks we construct in this paper are sparse neural networks. § REPRESENTING POLYNOMIALS In this section, we construct ReLU^k neural networks that represent polynomials. We first show how shallow ReLU^k networks represent polynomials of degree bounded by k, and then extend the result to deep networks. As a consequence, we obtain suboptimal approximation rates for analytic functions and functions from Sobolev spaces. The following lemma shows how monomials of the form x^α of degree n are constructed from n-th powers of linear combinations of (x_1,…,x_d) (see, e.g., <cit.>). It has been known for many years that the homogeneous polynomial space of degree n is spanned by such linear combinations. However, our lemma constructs the combination explicitly, which allows an estimation of the coefficients. Let n,d∈ℕ^+ and α=(α_1,…,α_d)∈ℕ^d with α_1+…+α_d=n. Then the monomial x^α can be written as a linear combination x^α=∑_{n_2,…,n_d=-⌊n/2⌋}^{n-⌊n/2⌋} c_{n_2,…,n_d}(x_1+n_2x_2+…+n_dx_d)^n with each c_{n_2,…,n_d}∈[-(n/2+1)^{2d},(n/2+1)^{2d}]. We prove this inductively on d. Clearly, the claim is true for d=1. We assume the result holds for d-1, so that x_1^{α_1}⋯x_{d-1}^{α_{d-1}}=∑_{n_2,…,n_{d-1}=-⌊j/2⌋}^{j-⌊j/2⌋} c_{n_2,…,n_{d-1}}(x_1+n_2x_2+…+n_{d-1}x_{d-1})^j, where j=n-α_d and c_{n_2,…,n_{d-1}}∈[-(n/2+1)^{2(d-1)},(n/2+1)^{2(d-1)}]. Let ξ(x) be any function of x and consider (ξ(x)+n_dx_d)^n for n_d=-⌊n/2⌋,…,n-⌊n/2⌋; then we can write (ξ(x)+n_dx_d)^n=∑_{i=0}^{n} \binom{n}{i} n_d^{n-i} ξ(x)^i x_d^{n-i}. Consequently, for any b=(b_0,…,b_n)^⊤∈ℝ^{n+1}, we have ∑_{s=0}^{n} b_s[ξ(x)+(s-⌊n/2⌋)x_d]^n = b^⊤B_nλ, where λ=(ξ(x)^n, x_dξ(x)^{n-1},…,x_d^{n-1}ξ(x), x_d^n)^⊤ and B_n is the (n+1)×(n+1) matrix with entries (B_n)_{s,i} = \binom{n}{i}(s-⌊n/2⌋)^i for s,i=0,…,n. The matrix factors as B_n = V_n D_n, where V_n is the Vandermonde matrix with (V_n)_{s,i}=(s-⌊n/2⌋)^i and D_n = diag(\binom{n}{0}, \binom{n}{1},…,\binom{n}{n}). Clearly, B_n is invertible. Let b̂=e_{n-j+1}^⊤B_n^{-1}, where e_{n-j+1}=(0,…,0,1,0,…,0)^⊤ has its 1 in the (n-j+1)-th position (n-j zeros before and j after), and take ξ(x)=x_1+n_2x_2+…+n_{d-1}x_{d-1}; then ∑_{s=0}^{n} b̂_s[x_1+n_2x_2+…+n_{d-1}x_{d-1}+(s-⌊n/2⌋)x_d]^n = b̂^⊤B_nλ = e_{n-j+1}^⊤λ = ξ(x)^j x_d^{n-j} = (x_1+n_2x_2+…+n_{d-1}x_{d-1})^j x_d^{α_d}. Substituting (<ref>), we get ∑_{n_2,…,n_{d-1}=-⌊j/2⌋}^{j-⌊j/2⌋} c_{n_2,…,n_{d-1}}∑_{s=0}^{n} b̂_s[x_1+n_2x_2+…+n_{d-1}x_{d-1}+(s-⌊n/2⌋)x_d]^n = ∑_{n_2,…,n_{d-1}} c_{n_2,…,n_{d-1}}(x_1+n_2x_2+…+n_{d-1}x_{d-1})^j x_d^{α_d} = x_1^{α_1}⋯x_{d-1}^{α_{d-1}} x_d^{α_d}. For n_2,…,n_d∈{-⌊n/2⌋,…,n-⌊n/2⌋}, denote c_{n_2,…,n_d} := c_{n_2,…,n_{d-1}} b̂_{n_d+⌊n/2⌋} if n_2,…,n_{d-1}∈{-⌊j/2⌋,…,j-⌊j/2⌋}, and c_{n_2,…,n_d} := 0 otherwise; then ∑_{n_2,…,n_d=-⌊n/2⌋}^{n-⌊n/2⌋} c_{n_2,…,n_d}(x_1+n_2x_2+…+n_dx_d)^n = x_1^{α_1}⋯x_d^{α_d}. Finally, let us consider the bounds of c_{n_2,…,n_d}. Here, we only need to estimate the bounds of b̂_s. Since b̂=e_{n-j+1}^⊤B_n^{-1}, for s=0,…,n we have |b̂_s| ≤ ‖B_n^{-1}‖_max = max_{1≤ i,k≤ n+1}|(B_n^{-1})_{ik}|. We can bound ‖B_n^{-1}‖_max via the norm of the inverse of the Vandermonde matrix in (<ref>). Then by <cit.>, we have ‖B_n^{-1}‖_max ≤ ∏_{s=0}^n(1+|s-⌊n/2⌋|) / min_{0≤ i≤ n}{(1+|i-⌊n/2⌋|)∏_{s≠ i}|s-i|} = ∏_{s=0}^n(1+|s-⌊n/2⌋|) / [(1+|0|)∏_{s≠⌊n/2⌋}|s-⌊n/2⌋|] = (∏_{m=1}^{n-⌊n/2⌋}(1+m)/m)×1×(∏_{m=1}^{⌊n/2⌋}(1+m)/m) ≤ (n/2+1)^2. Thus |c_{n_2,…,n_d}| ≤ |b̂_s||c_{n_2,…,n_{d-1}}| ≤ (n/2+1)^2(n/2+1)^{2(d-1)} ≤ (n/2+1)^{2d}. This completes the proof of Lemma <ref>. By noticing that t^k=σ_k(t)+(-1)^kσ_k(-t), t∈ℝ, Lemma <ref> enables the representation of polynomials of degree ≤ k by shallow ReLU^k neural networks, which is also the key to showing the expressivity of deep ReLU^k neural networks. In fact, the choice of n_2,…,n_d in the proof of Lemma <ref> is not the only one possible; the construction can also be verified numerically, as the sketch below illustrates.
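The following Python sketch (ours; hat_b is a hypothetical helper name) builds B_n as the Vandermonde-diagonal product from the proof, solves for b̂ = e_{n-j+1}^⊤B_n^{-1}, and verifies the resulting identity at an arbitrary point:

```python
import numpy as np
from math import comb

def hat_b(n, j):
    """Coefficients b-hat with b^T B_n = e_{n-j+1}^T, reproducing xi^j * y^(n-j)."""
    s = np.arange(n + 1) - n // 2                   # the shifts s - floor(n/2)
    V = np.vander(s, n + 1, increasing=True)        # Vandermonde in the shifts
    D = np.diag([comb(n, i) for i in range(n + 1)]) # diag of binomial coefficients
    Bn = V @ D
    e = np.zeros(n + 1); e[n - j] = 1.0             # selects the xi^j y^(n-j) entry
    return np.linalg.solve(Bn.T, e)                 # solve Bn^T b = e

n, j = 4, 3
b = hat_b(n, j)
xi, y = 0.7, -1.3                                    # arbitrary test values
lhs = sum(b[s] * (xi + (s - n // 2) * y) ** n for s in range(n + 1))
print(np.isclose(lhs, xi ** j * y ** (n - j)))       # True: combination of n-th powers
print(np.max(np.abs(b)), (n / 2 + 1) ** 2)           # |b-hat| against the lemma's bound
```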
From (<ref>), we can observe that our proof relies on writing B_n as a product of a Vandermonde matrix and a diagonal matrix. The attempt to lessen the bound (<ref>) resulted in our choice of n_2,…,n_d. However, we emphasize that our choice does not necessarily lead to the optimal upper bound M in the next lemma. Let d,k∈ℕ^+, k≥2, N=2(k+1)^d, and B>0. Then any polynomial P(x)=∑_{α∈ℕ^d: |α|_1≤ k} a_α x^α, x∈ℝ^d, of degree ≤ k can be represented by the shallow ReLU^k network class Σ_{N,M}^k(B), where M=B^{-k}(k/2+1)^{2(d+1)+k}(∑_{α∈ℕ^d: |α|_1≤ k}|a_α|). By applying Lemma <ref> with d← d+1 and n← k, we conclude that any monomial of degree k can be represented as x_1^{α_1}⋯x_d^{α_d}x_{d+1}^{α_{d+1}}=∑_{n_2,…,n_{d+1}=-⌊k/2⌋}^{k-⌊k/2⌋} c_{n_2,…,n_{d+1}}(x_1+n_2x_2+…+n_dx_d+n_{d+1}x_{d+1})^k. Replacing x_{d+1} by 1, all monomials of degree ≤ k in the variables (x_1,…,x_d) can be represented as x_1^{α_1}⋯x_d^{α_d}=∑_{n_2,…,n_{d+1}=-⌊k/2⌋}^{k-⌊k/2⌋} c_{n_2,…,n_{d+1}}(x_1+n_2x_2+…+n_dx_d+n_{d+1})^k with c_{n_2,…,n_{d+1}}=c_{n_2,…,n_{d+1}}(α)∈[-(k/2+1)^{2(d+1)},(k/2+1)^{2(d+1)}]. Then any polynomial P(x)=∑_{α: |α|_1≤ k} a_α x^α can be written as P(x)= ∑_{α: |α|_1≤ k} a_α ∑_{n_2,…,n_{d+1}=-⌊k/2⌋}^{k-⌊k/2⌋} c_{n_2,…,n_{d+1}}(α)(x_1+n_2x_2+…+n_dx_d+n_{d+1})^k = ∑_{n_2,…,n_{d+1}=-⌊k/2⌋}^{k-⌊k/2⌋} β_{n_2,…,n_{d+1}}(x_1+n_2x_2+…+n_dx_d+n_{d+1})^k, where |β_{n_2,…,n_{d+1}}| = |∑_{α: |α|_1≤ k} a_α c_{n_2,…,n_{d+1}}(α)| ≤ ∑_{α: |α|_1≤ k}|a_α| max_{α: |α|_1≤ k}|c_{n_2,…,n_{d+1}}(α)| ≤ (k/2+1)^{2(d+1)}(∑_{α: |α|_1≤ k}|a_α|). Thus, the polynomial P can be written as P(x)=∑_{n_2,…,n_{d+1}=-⌊k/2⌋}^{k-⌊k/2⌋} β_{n_2,…,n_{d+1}}[σ_k(x_1+n_2x_2+…+n_dx_d+n_{d+1}) + (-1)^kσ_k(-x_1-n_2x_2-…-n_dx_d-n_{d+1})]. By noticing the identity σ_k(ay)=a^kσ_k(y) for all a>0, y∈ℝ, we can write P as P(x)=∑_{n_2,…,n_{d+1}=-⌊k/2⌋}^{k-⌊k/2⌋} B^{-k}⌈k/2⌉^kβ_{n_2,…,n_{d+1}}[σ_k(B⌈k/2⌉^{-1}(x_1+n_2x_2+…+n_dx_d+n_{d+1})) + (-1)^kσ_k(-B⌈k/2⌉^{-1}(x_1+n_2x_2+…+n_dx_d+n_{d+1}))]. Clearly, the parameters satisfy |B⌈k/2⌉^{-1}|≤ B and |B⌈k/2⌉^{-1}n_j|≤ B for all j=2,…,d+1, and |B^{-k}⌈k/2⌉^kβ_{n_2,…,n_{d+1}}| ≤ B^{-k}(k/2+1)^{2(d+1)+k}(∑_{α: |α|_1≤ k}|a_α|) for all -⌊k/2⌋≤ n_2,…,n_{d+1}≤ k-⌊k/2⌋. This proves Lemma <ref>. Now, we are ready to prove our first main result, which reveals how to construct polynomials by deep ReLU^k neural networks, together with an estimation of the upper bound of the parameters. Let d,k,L∈ℕ^+, B>0, and n_1=…=n_L=2(k^L+1)^d. Then any polynomial P(x)=∑_{α∈ℕ^d: |α|_1≤ k^L} a_α x^α, x∈ℝ^d, of degree ≤ k^L can be represented by a deep ReLU^k network architecture Σ_{n_1:L,M}^k(B) with M=B^{-k(k^L-1)/(k-1)}(k^L/2+1)^{2(d+1)+k^L}(∑_{α: |α|_1≤ k^L}|a_α|). Furthermore, there are 2(2L+d)(k^L+1)^d nonzero parameters in the neural network architecture. By Lemma <ref>, P can be represented as P(x)=∑_{j=1}^{2(k^L+1)^d} c_jσ_{k^L}(w_j· x+b_j), where w_j∈[-1,1]^d, b_j∈[-1,1], and c_j∈[-M',M'] for j=1,…,2(k^L+1)^d, with M'=(k^L/2+1)^{2(d+1)+k^L}(∑_{α: |α|_1≤ k^L}|a_α|). Given h_0(x)=x, there exists a matrix A_1 with 2d(k^L+1)^d nonzero parameters and a bias vector with entries Bb_j such that h_{1,j}(x)=σ_k(Bw_j· x+Bb_j)=B^kσ_k(w_j· x+b_j), j=1,…,2(k^L+1)^d. Given h_i(x)=B^{k(k^i-1)/(k-1)}(σ_{k^i}(w_1· x+b_1),…,σ_{k^i}(w_{2(k^L+1)^d}· x+b_{2(k^L+1)^d}))^⊤, the scaled identity matrix A_i=B I_{2(k^L+1)^d×2(k^L+1)^d} with 2(k^L+1)^d nonzero parameters and bias term b=(0,…,0) give h_{i+1}(x)= σ_k(A_ih_i(x)) = σ_k(B× B^{k(k^i-1)/(k-1)}h_i(x)/B^{k(k^i-1)/(k-1)}·B^{k(k^i-1)/(k-1)}) = (σ_k(B^{(k^{i+1}-1)/(k-1)·k/k}…)) = B^{k(k^{i+1}-1)/(k-1)}(σ_{k^{i+1}}(w_1· x+b_1),…,σ_{k^{i+1}}(w_{2(k^L+1)^d}· x+b_{2(k^L+1)^d}))^⊤. Inductively, we have h_L(x)=B^{k(k^L-1)/(k-1)}(σ_{k^L}(w_1· x+b_1),…,σ_{k^L}(w_{2(k^L+1)^d}· x+b_{2(k^L+1)^d}))^⊤.
Thus, with c̃=B^{-k(k^L-1)/(k-1)}(c_1,…,c_{2(k^L+1)^d}), we have c̃· h_L(x)=∑_{j=1}^{2(k^L+1)^d}c_jσ_{k^L}(w_j· x+b_j)=P(x). Here, we have ‖c̃‖_∞≤ B^{-k(k^L-1)/(k-1)}M'=M. This implies P∈Σ_{n_1:L,M}^k(B) with n_1=…=n_L=2(k^L+1)^d. There are 2d(k^L+1)^d parameters in A_1 and 2(k^L+1)^d parameters in each of A_2,…,A_L. The number of parameters in the bias terms is L×2(k^L+1)^d, and there are 2(k^L+1)^d parameters in c̃. As mentioned before, the existence of such representations of polynomials has been noted in previous literature such as <cit.>. Those results rely on a generalized Vandermonde matrix theory for the high-dimensional case, while our proof of Lemma <ref> reduces the problem to one-dimensional Vandermonde matrices. This improvement allows an estimation of the coefficients in the linear combination (<ref>), and hence of the parameters in the neural networks. As a consequence of classical polynomial approximation theory (e.g., <cit.>), we can deduce the rate of approximation for analytic functions and for functions from Sobolev spaces. Let d∈ℕ^+ and 1≤ p≤∞. For r∈ℕ, the Sobolev space W^{r,p}(𝔹^d) is defined as W^{r,p}(𝔹^d)={f∈ L^p(𝔹^d): max_{α∈ℕ^d, |α|_1=r}‖D^α f‖_{L^p(𝔹^d)}<∞} with the norm given by ‖f‖_{W^{r,p}(𝔹^d)}^p=‖f‖_{L^p(𝔹^d)}^p+∑_{α∈ℕ^d, |α|_1=r}‖D^α f‖_{L^p(𝔹^d)}^p, f∈ W^{r,p}(𝔹^d). For r∉ℕ, let θ=r-⌊ r⌋. The Sobolev semi-norm is |f|_{W^{r,p}(𝔹^d)} = max_{|α|_1=⌊r⌋}(∫_{𝔹^d×𝔹^d}|D^α f(x)-D^α f(y)|^p/|x-y|^{d+θ p} dx dy)^{1/p} for 1≤ p<∞, and |f|_{W^{r,p}(𝔹^d)} = max_{|α|_1=⌊r⌋} sup_{x,y∈𝔹^d, x≠ y}|D^α f(x)-D^α f(y)|/|x-y|^θ for p=∞. The space W^{r,p}(𝔹^d) is then defined as W^{r,p}(𝔹^d)={f∈ W^{⌊ r⌋,p}(𝔹^d): |f|_{W^{r,p}(𝔹^d)}<∞} with the norm ‖f‖_{W^{r,p}(𝔹^d)}=‖f‖_{W^{⌊ r⌋,p}(𝔹^d)}+|f|_{W^{r,p}(𝔹^d)}. Let d∈ℕ^+ and ρ∈(0,1). A function f is said to be analytic on U_ρ:={z∈ℂ^d: |z_j+√(z_j^2-1)|<ρ^{-1}, j=1,…,d} if it is complex differentiable at each z∈ U_ρ. Let d,k,L∈ℕ^+, B>0, and n_1=…=n_L=2(k^L+1)^d. There exists a sparse deep ReLU^k network class Σ_{n_1:L}^k(B) with 2(2L+d)(k^L+1)^d parameters such that (a) if f∈ W^{r,p}(𝔹^d) for some r>0, then inf_{g∈Σ_{n_1:L}^k(B)}‖f-g‖_{L^p(𝔹^d)} ≤ inf_{g∈𝒫_d^{k^L}}‖f-g‖_{L^p(𝔹^d)} ∼ ‖f‖_{W^{r,p}(𝔹^d)} k^{-Lr}; (b) if f is analytic on U_ρ:={z∈ℂ^d: |z_j+√(z_j^2-1)|<ρ^{-1}, j=1,…,d}, then for any ϵ>0 with ρ+ϵ<1, inf_{g∈Σ_{n_1:L}^k(B)}‖f-g‖_{L^p(𝔹^d)} ≤ inf_{g∈𝒫_d^{k^L}}‖f-g‖_{L^p(𝔹^d)} ≲ (max_{w∈∂Γ_{ρ+ϵ}}|f((w+w^{-1})/2)|) k^{Ld}(ρ+ϵ)^{k^L}, where Γ_{ρ+ϵ}:={w∈ℂ^d: |w_j|≤(ρ+ϵ)^{-1}, j=1,…,d}. The corresponding constant depends only on d, ρ, and ϵ. (a) The first inequality is exactly Theorem <ref>. The relation inf_{g∈𝒫_d^{k^L}}‖f-g‖_{L^p(𝔹^d)} ∼ ‖f‖_{W^{r,p}(𝔹^d)} k^{-Lr} is classical polynomial approximation theory (see, e.g., <cit.>). (b) Again, the first inequality is Theorem <ref>. For the second inequality, we apply the classical estimate for analytic functions (see, e.g., <cit.> and <cit.> for the multivariate version) and conclude inf_{g∈𝒫_d^{k^L}}‖f-g‖_{L^2(𝔹^d)}^2 ≤ (max_{w∈∂ U_{ρ+ϵ}}|f((w+w^{-1})/2)|)^2 ∑_{n=k^L+1}^{∞} \binom{n+d-1}{d-1}(ρ+ϵ)^{2n} ≤ C^2(max_{w∈∂ U_{ρ+ϵ}}|f((w+w^{-1})/2)|)^2 k^{2Ld}(ρ+ϵ)^{2k^L}, where C=C(d,ρ,ϵ) is independent of f and L. § ADAPTIVE APPROXIMATION FOR FUNCTIONS FROM VARIATION SPACES In this section, we show the adaptive approximation properties of deep ReLU^k neural networks for functions in variation spaces. This result is obtained by studying the representation power of deep ReLU^k neural networks for shallow networks. Let d,k,L,n∈ℕ^+, M>0, and n_1=…=n_L=2(k+1)n. Then there exists a deep ReLU^k neural network architecture Σ_{n_1:L,M'}^k(B') with N=[(4L-2)(k+1)+d]n nonzero parameters, such that every function f∈Σ_{n,M}^K(B) with K∈{k^1,…,k^L} can be represented by this neural network. Furthermore, the parameters in Σ_{n_1:L,M'}^k(B') are bounded with M'=(k/2+1)^4M and B'=B∨(k/2+1)^4. Let 1≤ℓ≤ L, K=k^ℓ, and f∈Σ_{n,M}^K(B).
Then f can be written as f_n(x)=∑_{m=1}^{n}c_mσ_K(w_m· x+u_m), x∈𝔹^d, where |c_m|≤ M, |u_m|≤ B, and ‖w_m‖_∞≤ B hold for m=1,…,n. Given h_0(x)=x, there exists a sparse matrix A_0∈ℝ^{n_1×d} with (A_0)_{i,j}=0 for i>n and b_0=(u_1,…,u_n,0,…,0)^⊤ such that h_{1,j}(x)=σ_k(w_j· x+u_j) for j≤ n, and h_{1,j}(x)=0 otherwise. Clearly, all the parameters in A_0 and b_0 are bounded by B'. Given h_{i,j}(x)=σ_{k^i}(w_j· x+u_j) for j≤ n and 0 otherwise, the identity matrix A_i=I_{2(k+1)n×2(k+1)n} (whose entries are bounded by B'≥1) and b_i=(0,…,0)^⊤ give h_{i+1,j}(x)=σ_{k^{i+1}}(w_j· x+u_j) for j≤ n, and h_{i+1,j}(x)=0 otherwise. So we get h_{ℓ,j}(x)=σ_K(w_j· x+u_j) for j≤ n, and h_{ℓ,j}(x)=0 otherwise. By (<ref>), the identity function y↦ y can be represented as a linear combination y=∑_{t=0}^{k}a_t(y+t-⌊k/2⌋)^k=∑_{t=0}^{k}a_t[σ_k(y+t-⌊k/2⌋)+(-1)^kσ_k(-y-t+⌊k/2⌋)] with |a_t|≤(k/2+1)^4. At the ℓ-th layer, there exists a sparse matrix A_ℓ∈ℝ^{n_{ℓ+1}×n_ℓ} with (A_ℓ)_{2(k+1)(m-1)+2t+1,m}=1 and (A_ℓ)_{2(k+1)(m-1)+2t+2,m}=-1 for m=1,…,n, t=0,…,k, and a proper bias term b_ℓ with ‖b_ℓ‖_∞≤ k/2+1, such that h_{ℓ+1,2(k+1)(m-1)+2t+1}(x)=σ_k(σ_K(w_m· x+u_m)+t-⌊k/2⌋) and h_{ℓ+1,2(k+1)(m-1)+2t+2}(x)=σ_k(-σ_K(w_m· x+u_m)-t+⌊k/2⌋) for m=1,…,n, t=0,…,k. At the (ℓ+1)-th layer, there exists a sparse matrix A_{ℓ+1}∈ℝ^{n_{ℓ+2}×n_{ℓ+1}} with (A_{ℓ+1})_{m,2(k+1)(m-1)+2t+1}=a_t, (A_{ℓ+1})_{m,2(k+1)(m-1)+2t+2}=(-1)^ka_t, and b_{ℓ+1}=(0,…,0)^⊤ such that h_{ℓ+2,j}(x)= ∑_{t=0}^{k}a_t[σ_k(σ_K(w_j· x+u_j)+t-⌊k/2⌋)+(-1)^kσ_k(-σ_K(w_j· x+u_j)-t+⌊k/2⌋)] = σ_K(w_j· x+u_j) for j≤ n, and h_{ℓ+2,j}(x)=0 otherwise. We repeat these two steps up to the L-th layer; then the last hidden layer h_L(x) is given either by h_{L,j}(x)=σ_K(w_j· x+u_j) for j≤ n and 0 otherwise, or by h_{L,2(k+1)(m-1)+2t+1}(x)=σ_k(σ_K(w_m· x+u_m)+t-⌊k/2⌋) and h_{L,2(k+1)(m-1)+2t+2}(x)=σ_k(-σ_K(w_m· x+u_m)-t+⌊k/2⌋) for m=1,…,n, t=0,…,k. In either case, by (<ref>), there exists c∈[-(k/2+1)^4,(k/2+1)^4]^{2(k+1)n} such that c· h_L(x)=∑_{m=1}^{n}c_mσ_K(w_m· x+u_m)=f_n(x). By the form of the matrices A_0,…,A_{L-1}, the number of parameters is dn+2(k+1)n+∑_{j=2}^{L}[2(k+1)n+2(k+1)n]+2(k+1)n=N. This completes the proof of Lemma <ref>. This lemma enables us to consider the rate of approximation for functions from the variation spaces generated by ReLU^K activation functions with K=k,k^2,…,k^L. The approximation theory of such spaces has been studied in <cit.>, etc. Let K,d∈ℕ^+ and denote the dictionary ℙ_K^d as ℙ_K^d={x↦σ_K(ω· x+b): ω∈𝕊^{d-1}, b∈[-1,1]}. Considering the closure of the convex, symmetric hull of ℙ_K^d, Conv(ℙ_K^d)={∑_{j=1}^{n}a_jh_j: n∈ℕ^+, ∑_{j=1}^{n}|a_j|≤1, h_j∈ℙ_K^d}, we define the variation space with the dictionary ℙ_K^d as 𝒦_1(ℙ_K^d):={f∈ R×Conv(ℙ_K^d): R>0}, with the norm ‖f‖_{𝒦_1(ℙ_K^d)}=inf{R>0: f∈ R×Conv(ℙ_K^d)}. We will compare the result with the rate of approximation by shallow ReLU^k networks given in <cit.>. Let k,d∈ℕ^+; then sup_{f∈𝒦_1(ℙ_k^d)} inf_{f_n∈Σ_n^k(2)}‖f-f_n‖_{L^2(𝔹^d)} ≲ ‖f‖_{𝒦_1(ℙ_k^d)} n^{-1/2-(2k+1)/(2d)}. Combining this theorem with Lemma <ref>, we have the following theorem. Let k,n,L∈ℕ^+ and n_1=…=n_L=2(k+1)n; then there exists a deep ReLU^k network architecture Σ_{n_1:L}^k(B) with B=(k/2+1)^4 such that for any K=k^ℓ with 1≤ℓ≤ L, sup_{f∈𝒦_1(ℙ_K^d)} inf_{f_n∈Σ_{n_1:L}^k(B)}‖f-f_n‖_{L^2(𝔹^d)} ≤ C‖f‖_{𝒦_1(ℙ_K^d)} n^{-1/2-(2K+1)/(2d)}, where N=[(4L-2)(k+1)+d]n and C is a constant independent of n. The index K in the variation spaces defined above can be correlated to the regularity of the functions in the spaces (cf. <cit.>); a quick numerical check of the one-dimensional identity (<ref>) underlying the proof of Lemma <ref> is sketched below.
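The one-dimensional identity y = ∑_t a_t(y+t-⌊k/2⌋)^k used in the proof above follows from Lemma <ref> with j=1, so the coefficients a_t can be computed from the same Vandermonde system. The following sketch (ours) assumes the hypothetical helpers hat_b and relu_k from the earlier sketches:

```python
import numpy as np

k = 3
a = hat_b(k, 1)                       # coefficients a_t for the identity map
y = np.linspace(-2.0, 2.0, 9)
shifts = np.arange(k + 1) - k // 2    # the shifts t - floor(k/2)

# y as a combination of k-th powers of shifted arguments
recon = sum(a[t] * (y + shifts[t]) ** k for t in range(k + 1))
print(np.allclose(recon, y))          # True

# the same identity after splitting t^k = sigma_k(t) + (-1)^k sigma_k(-t)
split = sum(a[t] * (relu_k(y + shifts[t], k)
                    + (-1) ** k * relu_k(-(y + shifts[t]), k))
            for t in range(k + 1))
print(np.allclose(split, y))          # True
```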
Suppose the target function has a large but unknown regularity, say f∈∩_{j=1}^{K}𝒦_1(ℙ_j^d). Then the approximation property of shallow ReLU^k networks can be poor if the activation function is not chosen properly: if k ≪ K, the approximation rate that we can achieve is at most sup_{f∈𝒦_1(ℙ_k^d)} inf_{f_n∈Σ_n^k}‖f-f_n‖_{L^2(𝔹^d)} ≲ ‖f‖_{𝒦_1(ℙ_k^d)} n^{-1/2-(2k+1)/(2d)}. If k > K, then ∩_{j=1}^{K}𝒦_1(ℙ_j^d)⊈𝒦_1(ℙ_k^d) and the approximation rate is unknown. However, by taking the depth L≥⌈log_k K⌉, the deep ReLU^k networks constructed in this section can automatically approximate functions from ∩_{j=1}^{K}𝒦_1(ℙ_j^d) with this unknown K. The rate of approximation is at least sup_{f∈𝒦_1(ℙ_K^d)} inf_{f_n∈Σ_{n_1:L}^k}‖f-f_n‖_{L^2(𝔹^d)} ≤ C‖f‖_{𝒦_1(ℙ_K^d)} n^{-1/2-(2K+1)/(2d)}, since k^{⌈log_k K⌉}≥ K. § CONCLUSIONS In conclusion, this paper has significantly advanced the understanding of ReLU^k neural networks and their capabilities in function approximation and representation. Our constructive proofs have not only demonstrated the representational power of shallow ReLU^k networks in polynomial approximation but also paved the way for constructing deep ReLU^k networks with an explicit parameterization for any polynomial of degree less than k^L, scaling efficiently with network depth and dimension. Furthermore, we established a connection between the coefficients of polynomials and the bounds of neural network parameters, thereby enabling precise estimations of network parameters that uphold the approximation integrity. The elucidation of deep ReLU^k networks' ability to approximate functions from Sobolev spaces, albeit at a suboptimal rate, opens new avenues for exploring their application in complex function spaces. Most notably, our construction showing that deep ReLU^k networks can emulate the function representation of shallower networks with enhanced efficiency underscores the hierarchical strength of depth in neural architectures. Coupled with recent findings on variation spaces, our results underscore the adaptivity and potential of deep ReLU^k networks to achieve significant approximation accuracy, contributing to the theoretical foundations that may stimulate further breakthroughs in the application of deep learning across varied domains.
"authors": [
"Juncai He",
"Tong Mao",
"Jinchao Xu"
],
"categories": [
"cs.LG",
"cs.NA",
"cs.NE",
"math.NA"
],
"primary_category": "cs.LG",
"published": "20231227091114",
"title": "Expressivity and Approximation Properties of Deep Neural Networks with ReLU$^k$ Activation"
} |
Institute for Advanced Research, Nagoya University, Furo-cho Chikusa-ku, Nagoya 464-8601, Japan; Kobayashi-Maskawa Institute for the Origin of Particles and the Universe, Nagoya University, Chikusa-ku, Nagoya, 464-8602, Japan; Sorbonne Université, CNRS, UMR7095, Institut d'Astrophysique de Paris, 98bis boulevard Arago, F-75014 Paris, France; School of General and Management Studies, Suwa University of Science, Chino, Nagano 391-0292, Japan; Theory Center, Institute of Particle and Nuclear Studies, KEK, Tsukuba, Ibaraki 305-0801, Japan; School of Natural Sciences, Institute for Advanced Study, 1 Einstein Drive, Princeton, NJ 08540, USA; Academia Sinica Institute of Astronomy and Astrophysics (ASIAA), No. 1, Section 4, Roosevelt Road, Taipei 10617, Taiwan; Kavli Institute for the Physics and Mathematics of the Universe (WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba 277-8583, Japan

Primordial magnetic fields (PMFs) are one of the plausible candidates for the origin of the observed large-scale magnetic fields. While many proposals have been made for the generation mechanism of PMFs by earlier studies, it remains a subject of debate. In this paper, to obtain new insights into PMFs, we focus on the intrinsic alignments (IAs) of galaxies induced by the vector and tensor modes of the anisotropic stress of PMFs. The long-wavelength vector and tensor modes locally induce the tidal gravitational fields, leading to the characteristic distortions of the intrinsic ellipticity of galaxies. We investigate the shear E- and B-mode power spectra induced by the magnetic vector and tensor modes in the three-dimensional space, assuming the combination of galaxy imaging and galaxy redshift surveys. We find that the magnetic tensor mode dominates both the E- and B-mode spectra. In particular, the B-mode spectrum induced by the magnetic tensor mode plays a crucial role in constraining the amplitude of PMFs, even in the presence of the non-magnetic scalar contribution to the B-mode spectrum arising from the one-loop effect. In future galaxy redshift surveys, such as Euclid and Square Kilometre Array, the minimum detectable value reaches ∼ 30 nG, which can potentially get even smaller in proportion to the number of observed galaxies and reach ∼𝒪(1 nG). Measuring the IAs of galaxies would be a potential probe for PMFs in future galaxy surveys.

Imprints of primordial magnetic fields on intrinsic alignments of galaxies Teppei Okumura January 14, 2024 ==========================================================================

§ INTRODUCTION

Recent observations of high-energy TeV photons emitted from distant blazars suggest the existence of large-scale magnetic fields, especially in intergalactic and void regions <cit.>. For instance, Ref. <cit.> has reported the lower bound 3×10^{-16} Gauss on the amplitude of intergalactic magnetic fields. Although the origin of such magnetic fields remains an open question, a number of theories have been proposed to explain them. An interesting scenario attracting much attention is the primordial origin, in which the primordial magnetic fields (PMFs) are generated in the early universe, especially before the cosmic recombination epoch. Since PMFs are generated before the formation of stars or galaxies, we expect to observe PMFs as large-scale magnetic fields, not associated with astronomical objects (see e.g. <cit.> for reviews).
There exists a variety of models for the generation of PMFs. In the presence of an interaction between electromagnetic fields and other fields that breaks the conformal invariance during inflation, inflationary magnetogenesis takes place from quantum fluctuations <cit.>. The coherence length of PMFs generated in this way can be beyond the horizon scale. During cosmological phase transitions, bubble collisions and turbulence in the primordial plasma result in the generation of PMFs <cit.>. In simple phase-transition models, the coherence length of PMFs is generally shorter than that of the observed intergalactic magnetic fields. However, recent works <cit.> have proposed models that can generate PMFs with a sufficiently long coherence length. In post-inflationary epochs, the Harrison mechanism <cit.> generates PMFs on 𝒪(Mpc) scales in the primordial plasma. However, the amplitude is about 10^{-24} Gauss <cit.>, which is smaller than the observed amplitude. To distinguish magnetogenesis models through observations, many authors have investigated the impact of PMFs on cosmological observables, for instance, the effects of PMFs on Big Bang Nucleosynthesis <cit.>, cosmic microwave background (CMB) anisotropies <cit.>, CMB spectral distortions <cit.>, and the large-scale structure of the Universe <cit.>. While cosmological observations to date have provided some clues to the generation mechanism of PMFs, it is also worth exploring other ways to extract further information from future cosmological observations. Motivated by the above, this paper focuses on the intrinsic alignments (IAs) of galaxy shapes as a novel probe of PMFs. In weak gravitational lensing observations, the IAs of galaxies have been recognized as a contaminant in the estimation of cosmological parameters <cit.> (see also Ref. <cit.> for a review). However, it has been shown that the IAs of galaxies offer a unique opportunity to constrain cosmological parameters, the growth of large-scale structure, and the initial conditions of the Universe, complementary to galaxy clustering <cit.>. More interestingly, using the galaxy samples in the Sloan Digital Sky Survey, Ref. <cit.> has measured the anisotropic signals of the IA due to redshift-space distortions and indeed used these signals to constrain the growth rate of the Universe. How do PMFs leave their imprints on the IA? Refs. <cit.> have shown that long-wavelength vector and tensor metric perturbations induce short-wavelength gravitational tidal fields, called the fossil effect. Such tidal fields are expected to affect the intrinsic shapes of galaxies, leading to IAs. The imprints of primordial gravitational waves (GWs) on the IAs have been investigated analytically and numerically <cit.>. Very recently, assuming photometric surveys, Ref. <cit.> investigated the impact of the primordial vorticity vector mode <cit.> and primordial GWs on the IAs. Since our primary interest lies in the IAs as a new probe of PMFs, this paper focuses on the vector and tensor modes induced by PMFs. The anisotropic stress fluctuation of PMFs creates additional metric perturbations on top of the standard non-magnetic contributions <cit.>. The resultant metric perturbations source the IAs. As for the scalar mode, the magnetic contribution is not remarkable and is hidden by the non-magnetic one; thus, we shall not investigate it below.
By elucidating the relation between the anisotropic stress of PMFs and the IAs, we derive the analytical expression of the E- and B-mode power spectra of the intrinsic ellipticity of galaxies induced by the vector and tensor modes of PMFs. Exploiting the derived analytical expression, we explore the potential to constrain the amplitude of PMFs with the Euclid and Square Kilometre Array (SKA) galaxy redshift surveys, as well as in more idealistic cases. This paper is organized as follows. In Sec. <ref>, we briefly review the general properties of PMFs and present how the anisotropic stress of PMFs induces the vector and tensor modes. For comparison purposes, we also introduce the vorticity vector mode and primordial GWs. In Sec. <ref>, based on Ref. <cit.>, we show the analytical expression of the intrinsic ellipticity shape induced by the long-wavelength vector and tensor modes. We show the detailed derivations in Appendix <ref>. In Sec. <ref>, we derive an analytical expression for the three-dimensional E- and B-mode power spectra and examine their typical behavior. Using the analytical expression of the E- and B-mode power spectra, we perform the Fisher matrix computation and derive the expected minimum detectable value of the amplitude of PMFs in future surveys in Sec. <ref>. In our analysis, we take into account the non-magnetic scalar-mode contributions to the E- and B-mode power spectra, respectively arising from the leading-order and one-loop effects. We present the way to compute the one-loop B-mode spectrum in Appendix <ref>. We perform the same analysis for the vorticity vector mode and primordial GWs in Appendix <ref>. Sec. <ref> is devoted to the summary of our findings. Throughout this paper, we apply the Einstein summation convention for repeated Greek indices and Latin indices running from 0 to 3 and from 1 to 3, respectively. We work in units c=ħ=1. § VECTOR AND TENSOR MODES We are interested in imprints of the vector and tensor modes induced by PMFs on cosmological observables. In this section, we briefly introduce the basic properties of the initial power spectrum of PMFs and the vector and tensor modes induced by PMFs in Sec. <ref>. We also introduce other possible cosmological sources of the vector and tensor modes in Sec. <ref> for comparison purposes. Throughout this paper, we work in the synchronous gauge, in which the line element is ds^2 = a^2(η)[-dη^2 + (δ_ij + h_ij)dx^i dx^j], where the quantities a and η are the scale factor and the conformal time, respectively. We decompose the metric perturbation into the vector and tensor modes based on the helicity basis in Fourier space: h_ij = ∑_{λ=±1}O^{(λ)}_ij h^{(λ)}(k) + ∑_{λ=±2}O^{(λ)}_ij h^{(λ)}(k), where the first and second terms represent the vector (λ=±1) and tensor (λ=±2) modes, respectively. Here we have defined O^{(±1)}_ij(k̂) = k̂_iϵ^{(±1)}_j(k̂) + k̂_jϵ^{(±1)}_i(k̂) and O^{(±2)}_ij(k̂) = ϵ^{(±1)}_i(k̂)ϵ^{(±1)}_j(k̂), where the polarization vectors ϵ^{(±1)} satisfy the relations k̂·ϵ^{(±1)} = 0, (ϵ^{(±1)})^* = ϵ^{(∓1)}, and ϵ^{(±1)}·ϵ^{(∓1)} = 1. For the vector mode, we define the gauge-invariant variable in Fourier space by σ_ij(k) ≡ ∑_{λ=±1}O^{(λ)}_ij h^{(λ)'}(k)/k, where a prime denotes a derivative with respect to the conformal time η. With this definition, the helicity modes of σ_ij are given by σ^{(±1)} = h^{(±1)'}/k. §.§ Magnetic vector and tensor modes PMFs act as a source of the vector and tensor modes <cit.>. We first introduce the general properties of PMFs.
We then present the magnetic vector and tensor modes induced by PMFs. §.§.§ General property of PMFs We consider the magnetically induced vector and tensor modes presented in Refs. <cit.>. We assume that the time evolution of the physical magnetic fields B(a,x) is given by B(a,x) = B(x)/a^2, with B(x) being the comoving magnetic fields without the adiabatic decay due to the expansion of the Universe. This assumption is valid in the limit of infinite electrical conductivity of the Universe, as realized at early times. The power spectrum of a divergence-free vector field such as PMFs is given by B_i(k)B^*_j(k') = (2π)^3δ^3_D(k-k')P_ij(k̂)P_B(k), where P_ij(k̂) = (δ_ij - k̂_ik̂_j)/2 and B_i(k) is the Fourier transform of B_i(x), given by B(x) = ∫ d^3k/(2π)^3 B(k)e^{ik·x}. We model the power spectrum of the primordial magnetic field by the power-law form (see e.g. Ref. <cit.>): P_B(k) = [4π^2 λ^{n_B+3} B^2_λ / Γ((n_B+3)/2)] k^{n_B} Θ(k_D-k), where the functions Γ(x) and Θ(x) are the Gamma function and the Heaviside step function, respectively. The quantities B_λ and k_D are the amplitude of PMFs smoothed over a comoving scale of λ = 1 Mpc and the damping scale, respectively. We introduce the Heaviside function to express the damping scale. We use the damping scale k_D given by Ref. <cit.>: k_D = (2.9×10^4)^{1/(n_B+5)} (B_λ/nG)^{-2/(n_B+5)} (2π)^{(n_B+3)/(n_B+5)} h^{1/(n_B+5)} Mpc^{-1}, with h being the reduced Hubble constant (a short numerical sketch evaluating P_B and k_D is given below). The power spectrum of PMFs contains two parameters, B_λ and n_B. The CMB observations provide the upper limits B_λ ≲ 𝒪(1 nG), depending on the value of n_B. A key quantity for investigating the cosmological impact of PMFs is the anisotropic stress and its power spectrum. The anisotropic stress of PMFs, Π_{B,ij}, is given by Π_{B,ij}(k) = -[1/(4πρ_{γ,0})] ∫ d^3k_1/(2π)^3 B_i(k_1)B_j(k-k_1), where ρ_{γ,0} is the photon energy density at the present time. As with the metric perturbation, we decompose the anisotropic stress into the vector and tensor components by using the helicity basis: Π_{B,ij}(k) = ∑_{λ=±1}O^{(λ)}_ij(k̂)Π^{(λ)}_B(k) + ∑_{λ=±2}O^{(λ)}_ij(k̂)Π^{(λ)}_B(k). The power spectra of the anisotropic stress for the vector and tensor modes are then given by <cit.> Π^{(λ)}_B(k)Π^{(-λ)}_B(k') = (2π)^3δ^3_D(k-k') (1/2)|Π^{(X)}_B|^2, where we denote X=V and X=T for the vector (λ=±1) and tensor (λ=±2) modes, respectively. Here we assume the unpolarized case, where the power spectra of the + and - modes are identical. In Eq. (<ref>), we define |Π^{(V/T)}_B(k)|^2 = [c_{V/T}/(4(4πρ_{γ,0})^2)] ∫ d^3k_1/(2π)^3 P_B(k_1)P_B(k_2) D_{V/T}(k,k_1,μ), where k_2 = k - k_1. In the above, c_{V/T} are constants, c_V = 1 and c_T = 1/2, and we have defined D_V(k,k_1,μ) = 1-2γ^2β^2+γβμ and D_T(k,k_1,μ) = (1+γ^2)(1+β^2), with μ = k̂_1·k̂_2, γ = k̂·k̂_1, and β = k̂·k̂_2. The anisotropic stress defined in Eq. (<ref>) contributes to the energy-momentum tensor in the Einstein equation and sources the magnetically induced fluctuations. In what follows, we briefly review the resultant vector and tensor fluctuations induced by the anisotropic stress of PMFs. §.§.§ Vector mode We first introduce the magnetically induced vector mode. At the initial time, the anisotropic stress of PMFs is compensated by that of neutrinos. Hence, the total anisotropic stress is zero, but each component is perturbed. In this setup, Ref. <cit.> has derived the initial condition for the Einstein-Boltzmann equations and shown that all the perturbed variables are proportional to Π_B^{(±1)}(k) in Eq. (<ref>) (see Appendix B in Ref. <cit.>). This contribution is known as the compensated vector mode.
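As a reference for the scales involved, the following minimal Python sketch (ours; function names are ad hoc, and the P_B normalization follows the Gaussian-smoothing convention as reconstructed above) evaluates the damping scale and the PMF spectrum:

```python
import numpy as np
from scipy.special import gamma

def k_damping(B_lambda_nG, n_B, h=0.7):
    """Magnetic damping scale k_D [Mpc^-1] from the fit adopted in the text."""
    p = 1.0 / (n_B + 5.0)
    return ((2.9e4) ** p * B_lambda_nG ** (-2.0 * p)
            * (2.0 * np.pi) ** ((n_B + 3.0) * p) * h ** p)

def P_B(k, B_lambda_nG, n_B, lam=1.0):
    """Power-law PMF spectrum [nG^2 Mpc^3 for k in Mpc^-1], cut off at k_D.

    The prefactor assumes the smoothing convention reconstructed in the text;
    other conventions differ by O(1) factors.
    """
    kD = k_damping(B_lambda_nG, n_B)
    amp = 4 * np.pi**2 * lam ** (n_B + 3) * B_lambda_nG**2 / gamma((n_B + 3) / 2)
    return np.where(k < kD, amp * k ** n_B, 0.0)

print(k_damping(2.0, -2.9))          # k_D for B_lambda = 2 nG, n_B = -2.9
print(P_B(np.array([1e-3, 1e-1]), 2.0, -2.9))
```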
Here, we define the transfer function of the vector mode by σ^{(±1)}(η,k) = 𝒯^{(V)}_B(η,k) Π^{(±1)}_B(k). We note that, even though the magnetic field itself is a random Gaussian variable, the statistical property of the vector mode is highly non-Gaussian, as the anisotropic stress is proportional to the square of the magnetic field. We plot the time and wavenumber dependences of the transfer function in the top panels of Fig. <ref>. To compute the transfer function of the compensated vector mode 𝒯^{(V)}_B, we use the CAMB code <cit.>.[In the initial parameter file for the CAMB code, we set the corresponding vector-mode option to output the transfer function of the compensated vector mode.] In the right panel, we show only the low-redshift results because we are interested in the late-time effect of PMFs on the galaxy IAs. In the limit a→0, in contrast to the usual adiabatic initial condition or the primordial GWs case, the transfer function asymptotically approaches zero. This is because the vector metric perturbation is sourced by the total anisotropic stress, which however initially cancels between the magnetic field and neutrinos. §.§.§ Tensor mode The tensor metric perturbation arises from the anisotropic stress of PMFs after their generation time. However, after the neutrino decoupling, the contribution to the total energy-momentum tensor from the anisotropic stress of PMFs is cancelled by the anisotropic stress of neutrinos. The generation of the tensor mode therefore ceases at the epoch of neutrino decoupling. This contribution is known as the passive tensor mode <cit.>. The expression for the generated tensor mode is given by <cit.>: h^{(±2)}_ini(k) = 6R_γ ln(η_ν/η_B) Π^{(±2)}_B(k), where we define the energy fraction of photons in the total radiation, R_γ = ρ_γ/(ρ_γ+ρ_ν), and the quantities η_ν and η_B are the neutrino decoupling time and the PMF generation time in terms of the conformal time, respectively. The generation epoch η_B highly depends on the generation mechanism; therefore, η_ν/η_B has an ambiguity of 10^6 ≲ η_ν/η_B ≲ 10^17. In the following analysis, we adopt the maximum value η_ν/η_B = 10^17, corresponding to the grand unification energy scale <cit.>; the change of h^{(±2)}_ini (and of the induced E/B-mode ellipticity field appearing below) is only a factor of <3 even if other values are adopted. Once the tensor metric perturbation is generated from the anisotropic stress of PMFs, it evolves in the same way as the usual primordial GWs <cit.>. Therefore, we may express the magnetic tensor mode at time η by using the transfer function of the primordial GWs, 𝒯_GWs(k,η): h^{(±2)}(η,k) = 6R_γ ln(η_ν/η_B) 𝒯_GWs(k,η) Π^{(±2)}_B(k), where 𝒯_GWs(k,η) stands for the transfer function of the primordial GWs, as explained in the next subsection. As with the magnetic vector mode, the magnetic tensor mode is proportional to the square of PMFs, implying that its statistical property is highly non-Gaussian. We remark that a tensor mode also arises from the same mechanism as in Sec. <ref>, i.e., a compensated tensor mode. However, its amplitude is negligible compared to the passive tensor mode presented in this section (see e.g. Ref. <cit.>), and hence we ignore it throughout this paper. §.§ Other vector and tensor sources Here, we introduce other possible sources of vector and tensor modes: the (neutrino) vorticity vector mode and the primordial GWs. In the next section, we will compare the vector and tensor modes introduced in this subsection with those induced by PMFs.
Another possible source of a vector mode is the (neutrino) vorticity vector mode <cit.>, in which, similar to the isocurvature initial conditions, the sum of the neutrino, baryon, and photon vorticities vanishes initially, but the vector metric perturbation remains constant due to the neutrino anisotropic stress (see Appendix 2 in Ref. <cit.>). Using the transfer function of the vorticity mode, 𝒯^{(V)}_ω(η,k), the vector metric perturbation at time η is given by σ^{(±1)}(η,k) = 𝒯^{(V)}_ω(η,k) σ^{(±1)}_ini(k), with σ^{(±1)}_ini(k) being the primordial amplitude. We define its power spectrum by σ^{(±1)}_ini(k)σ^{(∓1)}_ini(k') = (2π)^3δ^3_D(k-k')(2π^2/k^3)𝒫_{σ^{(±1)}}(k). In the unpolarized case, 𝒫_{σ^{(+1)}}(k) = 𝒫_{σ^{(-1)}}(k) = 𝒫_σ(k)/2, where 𝒫_σ(k) stands for the total power spectrum defined by σ_{ini,i}(k)σ^*_{ini,i}(k') = (2π)^3δ^3_D(k-k')(2π^2/k^3)𝒫_σ(k). We parametrize the total power spectrum by the power-law form 𝒫_σ(k) = r_V𝒜_S(k/k_*)^{n_V}, with 𝒜_S and k_* = 0.002 Mpc^{-1} being the amplitude of the usual non-magnetic scalar perturbation and the pivot scale, respectively. The shape of the power spectrum is controlled by the vector-to-scalar ratio r_V and the spectral index n_V. Although finding a mechanism to source this mode is challenging, this initial condition is mathematically possible and has indeed been investigated by many authors, e.g., Refs. <cit.>. We show the behavior of the transfer function in the middle two panels of Fig. <ref>, computed with the CAMB code <cit.>.[We set the corresponding option to output the transfer function of the vorticity mode in the initial parameter file.] Another possible source of a tensor mode is the usual primordial GWs generated during inflation from quantum fluctuations. The tensor mode is formally given by using the transfer function 𝒯_GWs(η,k) and the initial fluctuation h^{(±2)}_ini(k): h^{(±2)}(η,k) = 𝒯_GWs(η,k) h^{(±2)}_ini(k). We define the power spectrum of the initial field by h^{(±2)}_ini(k)h^{(∓2)}_ini(k') = (2π)^3δ^3_D(k-k')(2π^2/k^3)𝒫_{h^{(±2)}}(k). We consider the unpolarized case, 𝒫_{h^{(+2)}}(k) = 𝒫_{h^{(-2)}}(k) = 𝒫_h(k)/2, where 𝒫_h(k) stands for the total power spectrum defined by h_{ini,ij}(k)h^*_{ini,ij}(k') = (2π)^3δ^3_D(k-k')(2π^2/k^3)𝒫_h(k). We model the total power spectrum of the initial field by the power-law form 𝒫_h(k) = r_T𝒜_S(k/k_*)^{n_T}, where r_T and n_T are the usual tensor-to-scalar ratio and spectral index, respectively. In the bottom panels of Fig. <ref>, we show the behavior of the transfer function 𝒯_GWs, obtained by numerically solving the evolution equation for the primordial GWs: 𝒯''_GWs(η,k) + 2ℋ𝒯'_GWs(η,k) + k^2𝒯_GWs(η,k) = 0, with the initial conditions 𝒯_GWs(0,k) = 1 and 𝒯'_GWs(0,k) = 0. Here, we define the conformal Hubble parameter ℋ = a'/a. § IMPACT OF THE VECTOR AND TENSOR MODES ON THE INTRINSIC ALIGNMENT We briefly review how the vector and tensor modes induce the IAs of galaxies. We leave the detailed derivation to Appendix <ref>. The local physical effects of the long-wavelength vector and tensor modes on the gravitational potential have been investigated by using conformal Fermi normal coordinates in Refs. <cit.>. According to the results shown in Ref.
<cit.> (or see Appendix <ref>), the tidal fields locally induced by the vector and tensor modes of the long-wavelength mode k_ L are, respectively, given by: τ^(V)_ij(η, k_ L) = - k_ L/2a( a σ_ij(η, k_ L))'=- k_ L/2a( a 𝒯^(V)(η,k_ L) )' σ_ iniij(k_ L),τ^(T)_ij(η, k_ L)= - 1/2a( a h'_ij(η,k_ L))'= - 1/2a( a 𝒯^(T)'(η,k_ L) )' h_ ini ij(k_ L), where we used the gauge invariant vector variable σ_ij(k_ L) = h'_ij(k_ L)/k_ L. From the first line to the second line in Eqs. (<ref>) and (<ref>), we decompose the perturbed metric into the time-dependent part described by the transfer function and the time-independent initial part. To derive the expression for the density field induced by the coupling between long- and short-wavelength modes, we solve the equation of motion of a matter particle in the local frame: x” + ℋx'= -∇_x(ϕ_ s + 1/2τ^(X)_ijx^ix^j) ,∇^2_xϕ_ s = 4π G a^2ρ̅_ mδ , with X = V and T for the vector and tensor modes, respectively. The quantity ϕ_ s stands for the scalar gravitational potential of the short-wavelength mode. To facilitate the computations, we employ the Lagrangian perturbation formalism (see Appendix <ref>). The second order solution arising from the short- and long-wavelength modes coupling is then given by δ^(sl)(x)= ξ_ iniab(k_ L) [ ( - D^(sl)(η,k_ L) /D(η)+ β(η,k_ L) ) ×∂^-2∂_a∂_b + β(η,k_ L) x_a∂_b] δ^(1)(x,η) , where ξ = σ or h for the vector or tensor modes, respectively. We define the linear growth factor D(η), which satisfies D”(η) + ℋD'(η) - 4π G a^2ρ̅_ m(η) D(η)= 0. The second-order growth factor D^(sl) satisfies D^(sl)'' + ℋ D^(sl)' - 4π G a^2ρ̅_ m(η) D^(sl) = S^(X)(η, k_ L)where the source terms S^(X) of the vector (X=V) and tensor (X=T) are, respectively, defined by S^(V)(η, k_ L)= -k_ L/2aD(η)(a𝒯^(V)(η,k_ L))' , S^(T)(η, k_ L)= -1/2a D(η) (a𝒯^(T)'(η,k_ L))' .Eq. (<ref>) allows us to estimate how the IA is induced by the long-wavelength vector and tensor modes.We use the same ansatz in Ref. <cit.>, in which we assume that the first term in the square brackets in Eq. (<ref>) induces the intrinsic galaxy shape, and the conversion from the second-order density fluctuations to the intrinsic galaxy shape in the vector and tensor modes has the same scaling as that in the scalar mode <cit.> (see Appendix <ref> for details). Finally, we have the expression of the intrinsic galaxy shape induced by the long-wavelength vector and tensor modes at the linear order given by γ_ij(k)= b_ K(η,k) ξ_ iniij(k) , where ξ = σ or h for the vector or tensor modes, respectively. Here we define the effective linear shape bias by b_ K(η,k)≡7/4( - D^(sl)(η,k)/D(η) + β(η,k) ) b^ scalar_ K . The quantity b_ K^ scalar is the linear shape bias induced by the scalar tides as in Ref. <cit.>. We omit the subscript _ L indicating the long-wavelength mode here. In Fig. <ref>, we show the behaviors of the effective linear shape bias parameter b_ K(η, k)/b^ scalar_ K.We notice that the effective linear shape bias sourced by the vector mode has the opposite sign to that by the tensor mode because of the different number of time derivatives in the sources (see Eqs. (<ref>) and (<ref>)). Taking the limit k→∞, the transfer function asymptotically approaches zero (see the right panels in Fig. <ref>). 
However, the effective tidal bias parameter does not vanish in the same limit; this is known as the fossil effect <cit.>. Since the transfer function of the vector modes decays rapidly after the matter-radiation equality, the effective linear shape bias parameter induced by the vector mode, a fossil effect, freezes earlier than that induced by the tensor mode. Therefore, we do not see a redshift dependence of the effective tidal bias parameter in the top and bottom panels of Fig. <ref>. We note that, since the amplitude of the effective linear shape bias parameter is solely determined by the behavior of the transfer function, its amplitude can be larger than the linear shape bias induced by the scalar tides, as seen in the vector mode cases. However, the net impact of the vector mode on the galaxy shape is generally much smaller than the scalar one. While Eq. (<ref>) is estimated based on the ansatz we mentioned before, it is non-trivial whether Eq. (<ref>) actually holds in the presence of long-wavelength vector/tensor modes. However, a recent numerical work <cit.> has investigated the validity of Eq. (<ref>) in the primordial GWs case and confirmed that Eq. (<ref>) agrees with simulations on large scales. Although they also confirmed that the discrepancy between the ansatz and the measurements becomes larger on smaller scales, this discrepancy does not qualitatively affect the estimates in this paper, and henceforth we will carry out the analysis assuming that Eq. (<ref>) is valid for both the vector and the tensor modes. Numerical validation, especially for the vector mode, would be an interesting future work. § THREE-DIMENSIONAL E- AND B-MODE POWER SPECTRA From here, we move to the heart of this work: the investigation of the three-dimensional power spectrum of the IA induced by the vector and tensor modes. While the vector and tensor modes can generally contribute to the lens-induced ellipticity (e.g. Refs. <cit.>), we are interested in the characteristic signature of the intrinsic galaxy shapes from the magnetically induced vector and tensor modes; we therefore simply ignore the lens-induced ellipticity and focus on the intrinsic ellipticity. The E- and B-mode decomposition is convenient for distinguishing the impact of the vector and tensor modes on the galaxy shape from that of the scalar mode, because the leading-order scalar mode does not induce the B mode. There however exists a scalar-mode contribution to the B mode arising from the one-loop effect, which we will properly take into account in a later section. We work in the plane-parallel limit and set the line-of-sight direction parallel to the z-axis: n̂ = ẑ. We define the shear E-mode and B-mode by _{±2}γ(k)e^{∓2iϕ_k} = E(k) ± iB(k), where we define _{±2}γ(k) = m^i_∓ m^j_∓ γ_ij(k), with m_λ = (1, -λi, 0)/√2. Recall the expression for the galaxy shape induced by the long-wavelength vector and tensor modes in Eq. (<ref>): γ_ij = b_K(k) ξ_{ini,ij}(k) = b_K(k) ∑_λ O^{(λ)}_ij(k̂) ξ^{(λ)}_ini(k), where ξ = σ or h for the vector or tensor modes, respectively. The function b_K(η,k) depends on the source of the vector and tensor modes (see Fig.
<ref>). For later purposes, we also mention the non-magnetic scalar tidal effect. We adopt the linear alignment model, in which the galaxy ellipticity is linearly related to the real-space density field: γ_ij = b^{scalar}_K(k̂_ik̂_j - δ_ij/3)δ_L(k). Substituting Eq. (<ref>) into Eq. (<ref>), we have _{±2}γ^{(S)} = (1/2)b^{scalar}_K sin^2θ_k δ_L(k)e^{±2iϕ_k} for the scalar mode. Using Eqs. (<ref>), (<ref>), and (<ref>), the expressions for the E- and B-modes are given by E^{(S)}(k,n̂) = (1/2)b^{scalar}_K sin^2θ_k δ_L(k), B^{(S)}(k,n̂) = 0, E^{(V)}(k,n̂) = (1/√2)b_K(k) sinθ_k cosθ_k ∑_{λ=±1}σ^{(λ)}(k), B^{(V)}(k,n̂) = -(i/√2)b_K(k) sinθ_k ∑_{λ=±1}λσ^{(λ)}(k), E^{(T)}(k,n̂) = (1/4)b_K(k)(1+cos^2θ_k) ∑_{λ=±2}h^{(λ)}(k), B^{(T)}(k,n̂) = -(i/2)b_K(k) cosθ_k ∑_{λ=±2}(λ/2)h^{(λ)}(k), where the superscripts (S), (V), and (T) stand for the scalar, vector, and tensor modes, respectively. As the observable quantity, we focus on the three-dimensional power spectrum, which is defined as X(k)Y^*(k') = (2π)^3δ^3_D(k-k')P_{XY}(k), where X,Y = E or B for the E- and B-mode power spectra. From Eqs. (<ref>)-(<ref>), we finally obtain P^{(S)}_EE(k,μ) = (1/4)(b^{scalar}_K)^2(1-μ^2)^2 P_L(k), P^{(S)}_BB(k,μ) = 0, P^{(V)}_EE(k,μ) = (1/2)(b_K(k))^2(1-μ^2)μ^2 P_σ(k), P^{(V)}_BB(k,μ) = (1/2)(b_K(k))^2(1-μ^2) P_σ(k), P^{(T)}_EE(k,μ) = (1/16)(b_K(k))^2(1+μ^2)^2 P_h(k), P^{(T)}_BB(k,μ) = (1/4)(b_K(k))^2μ^2 P_h(k), where we define μ = cosθ_k. The linear matter power spectrum of the density field δ_L is given by δ_L(k)δ^*_L(k') = (2π)^3δ^3_D(k-k')P_L(k). A non-vanishing EB power spectrum appears for chiral vector and tensor modes: P^{(V)}_EB(k,μ) = (i/2)(b_K(k))^2μ(1-μ^2)χ_σ(k)P_σ(k), P^{(T)}_EB(k,μ) = (i/8)(b_K(k))^2μ(1+μ^2)χ_h(k)P_h(k), where we define the chirality parameters χ_{σ/h}(k) as χ_σ(k) = (P_{σ}^{(+1)}-P_{σ}^{(-1)})/P_σ(k) and χ_h(k) = (P_h^{(+2)}-P_h^{(-2)})/P_h(k). The EB spectrum is an interesting probe for testing parity-violating theories. Hereafter, we consider the unpolarized case, χ_{σ/h}(k) = 0. We note that P_EE^{(S)} is the non-magnetic scalar contribution, but it affects the detectability of PMFs by forming the E-mode covariance in the Fisher matrix (<ref>). Moreover, at one-loop order, the density field can source the B mode, forming P_BB^{(S)} (see Fig. <ref> and Appendix <ref>), and hence the B-mode covariance. We will also take these into account in the later Fisher matrix analysis. For the vector and tensor power spectra P_{σ/h}, we adopt (see Sec. <ref> for details) P_σ(k;B_λ,n_B) = |Π^{(V)}_B|^2, P_h(k;B_λ,n_B) = (6R_γ ln(η_ν/η_B))^2|Π^{(T)}_B|^2, P_σ(k;r_V,n_V) = (2π^2/k^3)r_V𝒜_S(k/k_*)^{n_V}, and P_h(k;r_T,n_T) = (2π^2/k^3)r_T𝒜_S(k/k_*)^{n_T}, for the magnetic vector, magnetic tensor, vorticity vector modes, and primordial GWs, respectively. As a demonstration, we present the lowest-order multipole, the monopole, of the various models in Fig. <ref>, although the non-vanishing quadrupole and hexadecapole are also observables. We define the monopole by P_{XX,0}(k) = (1/2)∫_{-1}^{1} dμ P_{XX}(k,μ); it is straightforward to evaluate numerically, as the short sketch below illustrates. In this plot, we set the model parameters as follows: (B_λ,n_B) = (2 nG, -2.9), taken from the upper limit by the Planck results <cit.>; (r_V,n_V) = (0.01, 0), roughly corresponding to the upper limit obtained by using the WMAP results <cit.>; and (r_T,n_T) = (0.03, -r_T/8) from the Planck results <cit.>. Also, we calculate the scalar shape bias parameter b^{scalar}_K by using the fitting formula <cit.>: b^{scalar}_K = (0.09302 - 0.1289 b^E_1)/(1 + 0.3541 b^E_1), where b^E_1 is a linear density bias parameter.
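A minimal Python sketch (ours) of the monopole integration, with the effective shape bias and the tensor spectrum replaced by constant stand-ins for illustration:

```python
import numpy as np

def monopole(P_of_k_mu, k):
    """P_{XX,0}(k) = (1/2) * Integral_{-1}^{1} P_XX(k, mu) dmu via Gauss-Legendre."""
    mu, w = np.polynomial.legendre.leggauss(32)
    return 0.5 * np.sum(w * P_of_k_mu(k, mu))

# Tensor-mode B-mode spectrum P_BB^(T) = (1/4) b_K(k)^2 mu^2 P_h(k);
# b_K and P_h are constant stand-ins here, not the actual transfer functions.
P_BB_T = lambda k, mu, b_K=1.0, P_h=1.0: 0.25 * b_K**2 * mu**2 * P_h
print(monopole(P_BB_T, 0.01))  # (1/4) * <mu^2> = 1/12 for b_K = P_h = 1
```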
In this plot, we use the same linear density bias parameter as the HI galaxies in SKA2 <cit.>: b^ E_1 = c_4e^c_5z , with c_4 = 0.554 and c_5 = 0.783. For comparison purposes, we show the contributions from the scalar mode to the E- and B-mode power spectra. While the leading-order effect of the scalar mode results only in the E mode, the one-loop order effect produces the non-vanishing B-mode contributions (e.g., Refs. <cit.>). To compute the one-loop contribution to the B-mode spectrum, we exploit the effective-theory description of galaxy shape based on Ref. <cit.>. See Appendix <ref> for details.Recalling that all the primordial power spectrum are modeled by the power-law form, the characteristic feature observed in each power spectrum comes from the shape of each effective linear shape bias b_ K. The power spectrum of the magnetic tensor mode has a similar behavior to that of the primordial GWs as both n_B = -2.9 and n_ T = -0.00375 impose nearly scale invariance of their initial power spectra P_h and also their b_ K are exactly the same. Compared to the non-magnetic scalar power spectrum, the vector and tensor mode signals are suppressed at small scales because of the feature of the fossil effect, i.e., the absence of growth at late time.We notice that the signal of the magnetic vector mode is 3–8 orders of magnitude smaller, depending on the scale, than those by other sources. We elaborate on the origin of this suppression as follows.First, to compute each spectrum in Fig. <ref> we set the model parameters to the CMB limits; thus, the amplitude of each mode essentially reflects the amplitude of corresponding metric perturbation at around the recombination epoch. Indeed, there is a 2–4 orders of magnitude gap between the magnetic vector mode and the other threes in the metric perturbation, inducing a comparable gap in the IA. Second, recalling the behavior of the effective linear shape bias in the magnetic vector mode (see the right panel in Fig. <ref>), the effective tidal bias asymptotically approaches zero more quickly than other modes at large scales. This asymptotic behavior leads to further suppression in the magnetic vector mode at large scales. The above two facts explain the behavior illustrated in Fig. <ref>. In Fig. <ref>, we investigate the behavior of E- and B-mode power spectra of the magnetic modes by varying the model parameters B_λ and n_B. Since we plot the signal normalized by ( B_λ/ nG)^4, the solid and dashed lines overlap if the P_EE and P_BB scale as ∝ B^4_λ. We indeed see this feature for n_B≲ -1.5, while their gap increases as n_B gets larger than -1.5. This is because, for the blue tilted case, a convolution integral in the anisotropic stress (<ref>) becomes more sensitive to the ultra-violet magnetic cutoff k_ D depending on B_λ (see Eq. (<ref>)), and hence the power spectrum of the anisotropic stress no longer obeys a simple B_λ^4 scaling (see e.g. Refs. <cit.>). Corresponding to the change of the impact of k_ D at n_B ∼ -1.5, the dependence of P_EE and P_BB on n_B also changes, i.e., they decrease for small n_B but start increasing as n_B enlarges. As a consequence, they are minimized at n_B ∼ -1.5. This unique feature straightforwardly determines the dependence of the detectability of B_λ on n_B as shown in Figs. <ref> and <ref>. The overall amplitude of the tensor mode is larger than that of the vector mode due to the prefactor ( 6R_γln(η_ν/η_B))^2≈ 2× 10^4. 
As we will demonstrate in the next section, the contribution from the tensor mode is an important source for constraining PMFs through observations of the IA of galaxies. § FISHER FORECAST In this section, we discuss the constraining power of the E- and B-mode power spectra of the galaxy shape on the amplitude of PMFs, based on the analytic model given in Eqs. (<ref>)-(<ref>) with Eqs. (<ref>) and (<ref>). To this end, we perform a Fisher matrix analysis. Following e.g. Ref. <cit.>, we define the Fisher matrix for the parameter vector θ as F_ij = [V/(2π)^2]∫_{k_min}^{k_max} k^2 dk ∫_{-1}^{1} dμ ∑_{a,b=EE,BB}(∂P_a/∂θ_i)[cov^{-1}]_{ab}(∂P_b/∂θ_j), where the quantity V represents the survey volume. The covariance matrix cov_{ab} is given by cov_{ab} = 2(P_a + σ^2_γ/n_gal)^2 δ_{a,b}, with σ_γ and n_gal being the root-mean-square of the galaxy ellipticity and the galaxy number density, respectively. Our analysis examines the constraints on the amplitude of PMFs, B_λ, with a fixed spectral index n_B. In this case, the expression for the Fisher matrix reduces to F(B_λ,fid) = [V/(2(2π)^2)]∫_{k_min}^{k_max} k^2 dk ∫_{-1}^{1} dμ ∑_{a=EE,BB}(∂P_a/∂B_λ · [P_a + σ^2_γ/n_gal]^{-1}|_{B_λ=B_λ,fid})^2, with B_λ,fid being the fiducial value of the parameter. The size of the expected error on the PMF strength is given by σ(B_λ,fid) = √(F^{-1}(B_λ,fid)). PMFs whose strength exceeds the size of the error are detectable at the 1σ level; thus, the minimum detectable value of the PMF strength, B_λ,min, is given by the solution of the equation σ(B_λ,min) = B_λ,min. Throughout the analysis, we incorporate into the Fisher analysis the E-mode power spectrum induced by the scalar mode and the B-mode power spectrum induced by the vector and tensor modes together with the one-loop contribution from the scalar mode. In computing the one-loop B-mode spectrum, we follow Ref. <cit.> and use the effective-theory description of galaxy shapes (see Appendix <ref> for details). We set k_min = 2πV^{-1/3}, k_max = 0.1 h/Mpc, and σ_γ = 0.3. The scalar shape bias parameter b^{scalar}_K is calculated by using the fitting formula <cit.> given in Eq. (<ref>). We first demonstrate the expected minimum detectable value for the Euclid spectroscopic survey <cit.> and the Square Kilometre Array (SKA) <cit.>. The galaxy redshift surveys by Euclid and SKA will, respectively, observe Hα emitters over redshifts 0.9 to 1.8 and HI galaxies over redshifts 0.23 to 1.81. Although the IA has not yet been detected for emission-line galaxies (ELGs) <cit.>, Ref. <cit.> recently proposed an optimal estimator to determine the IA of halos using ELGs. We suppose that the power spectra related to the IA can be measured with the optimal estimator, and that all observed ELGs are therefore an ideal tracer of the halo shape. We use the survey specifications in Table 3 of Ref. <cit.> for Euclid and Table 1 of Ref. <cit.> for SKA. We combine all the redshift bins via σ(B_λ) = 1/√(∑_i F(B_λ)|_{z=z_i}), with the subscript i labeling the redshift bins, and then numerically solve Eq. (<ref>); a schematic numerical implementation of this forecast is sketched below. In Fig. <ref>, we present the minimum detectable value for Euclid and SKA (see Appendix <ref> for the same analysis for the vorticity vector mode and primordial GWs). As expected from Fig. <ref>, the magnetic tensor mode provides stronger constraints on the PMF strength than the magnetic vector mode. It is also apparent from Fig. <ref> that the B-mode power spectrum places stronger constraints than the E-mode power spectrum, due to the absence of the sizable non-magnetic scalar contribution in the covariance matrix.
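The forecast above translates into code schematically as follows. In this sketch (ours), the spectra P_a and their derivatives with respect to B_λ are supplied as user-defined callables that broadcast over (k, μ) arrays, and the root-finding bracket is purely illustrative:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import simpson

def fisher_B(B, P_a, dP_dB, V, k, shot):
    """F(B) = V/(2(2pi)^2) Int k^2 dk Int dmu sum_a [dP_a/dB / (P_a + shot)]^2."""
    mu = np.linspace(-1.0, 1.0, 65)
    integrand = np.zeros((len(k), len(mu)))
    for Pa, dPa in zip(P_a, dP_dB):                     # a = EE, BB
        integrand += (dPa(B, k[:, None], mu) / (Pa(B, k[:, None], mu) + shot)) ** 2
    inner = simpson(integrand, x=mu, axis=1)            # mu integral
    return V / (2 * (2 * np.pi) ** 2) * simpson(k**2 * inner, x=k)

def B_min(P_a, dP_dB, V, k, shot):
    """Solve sigma(B) = F(B)^(-1/2) = B for the minimum detectable amplitude."""
    f = lambda B: fisher_B(B, P_a, dP_dB, V, k, shot) ** -0.5 - B
    return brentq(f, 1e-3, 1e4)   # bracket in nG; assumes a sign change inside
```

Combining redshift bins amounts to summing fisher_B over bins before taking the inverse square root, mirroring the text.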
We note that, since the lower wavenumber limit of the integration range in the Fisher matrix is k_min = 2πV^{-1/3} ≈ 0.001 h/Mpc under the survey specifications of Euclid/SKA, the unique n_B dependence of B_λ,min seen in Fig. <ref>, namely that it is maximized at n_B ∼ -1, follows from the unique n_B dependence of P_EE and P_BB above this scale, namely that they are minimized at n_B ∼ -1 (see Fig. <ref>). One can find from Fig. <ref> that PMFs with B_λ ∼ 30-300 nG would be measurable by a Euclid- or SKA-level B-mode survey. To capture PMFs with B_λ = 𝒪(1 nG), what specific level of survey should be aimed at? To figure this out, we compute the minimum detectable value B_λ,min by varying the shot-noise contribution to the covariance matrix, σ^2_γ/n_gal (see Appendix <ref> for the same analysis for the vorticity vector mode and primordial GWs). We then set V = 1 (Gpc/h)^3 and z = 1.0. We also use the same bias parameter as for the HI galaxies in SKA2, given in Eq. (<ref>). The choice of these parameters does not qualitatively change the results. Fig. <ref> describes our results, again showing that the B-mode information induced by the tensor mode is the most powerful for measuring B_λ. For both the E-mode and B-mode cases, decreasing the shot-noise level causes the non-magnetic scalar signal to dominate the covariance and hence the saturation of B_λ,min around σ^2_γ/n_gal ∼ 10^{-2}. We finally find that the saturated value of B_λ,min reaches 𝒪(1 nG)-𝒪(10 nG), depending on the spectral index. To achieve this minimum detectable value, the B-mode power spectrum plays a major role and remains an interesting probe even in the presence of the one-loop non-magnetic scalar contribution to the B mode. § SUMMARY In recent years, there has been growing attention on primordial magnetic fields (PMFs) as a strong candidate for explaining the origin of the observed large-scale magnetic fields, including those in void regions. While the statistical properties of PMFs have been constrained by recent cosmological observations such as the cosmic microwave background anisotropies, this paper has focused on the intrinsic alignments (IAs) of galaxies as a complementary new observational probe, aiming to delve into the nature of PMFs. The metric perturbations of the long-wavelength vector and tensor modes are known to induce local tidal gravitational fields <cit.>. Through observations of the intrinsic galaxy shapes, we have paved the way to detect the vector and tensor modes sourced by the anisotropic stress of PMFs in the early Universe. We have shown the relation between the anisotropic stress of PMFs and the IA of galaxies. Considering up to the leading-order contributions, while the scalar mode produces only the cosmic shear E-mode, the vector and tensor modes produce both E- and B-modes. Hence, the B-mode signal can be a good probe to search for the magnetically induced vector and tensor modes once we properly take into account the one-loop scalar contribution to the B mode. Assuming that the statistical properties of PMFs are given by a power-law type power spectrum, which includes two parameters, the amplitude of PMFs B_λ and the spectral index n_B, we demonstrated the E- and B-mode power spectra of the galaxy shape induced by PMFs. Due to the convolution and small-scale cut-off inherent in the anisotropic stress of PMFs, we found that the slopes of the E- and B-mode spectra do not change for n_B > -1.5 as n_B is increased; only their amplitudes vary with n_B.
Based on our analytical model of the E- and B-mode spectra, we have performed the Fisher analysis to estimate the minimum detectable value of the PMF strength, defined in Eq. (<ref>), for a fixed spectral index. We first examined the minimum detectable value assuming the galaxy redshift surveys by Euclid and SKA. In this case, we found that the minimum detectable value of B_λ reaches about 30–300 nG, depending on n_B, which is weaker than the upper limit obtained by the recent CMB observations. To investigate the detection power of the IA observations in spectroscopic surveys, we further performed the Fisher matrix analysis by varying the shot noise term as a free parameter. We found that the minimum detectable value can reach O(1 nG)–O(10 nG), depending on n_B, which is almost comparable to the current CMB limits, and that the B-mode spectrum still plays a crucial role in achieving this even in the presence of the non-magnetic scalar contribution to the B-mode spectrum.

The currently planned galaxy redshift surveys would provide weaker constraints on PMFs than the CMB observations. However, the observations of the galaxy IAs would become increasingly important as a complementary probe to understand the nature of PMFs. This paper has focused on the auto power spectra of the cosmic shear E- and B-modes induced by PMFs. However, as the anisotropic stress of PMFs also induces density fluctuations <cit.>, we would observe a non-vanishing signal in the cross-correlation between density fields and galaxy shapes. Adding this information to the present analysis would improve the constraint on PMFs. An interesting future challenge is to probe PMFs by making comprehensive use of the available information on the galaxy density field and its shape. We have carried out our analysis with a spectroscopic survey in mind. When considering an analysis based on the two-dimensional angular power spectrum for a photometric survey (e.g. Ref. <cit.>), we expect that the impact of the shot noise on the covariance is reduced due to the larger number density of galaxies compared with spectroscopic surveys. We leave a detailed comparison of the detection power on PMFs between the two-dimensional angular power spectrum and the three-dimensional power spectrum for intriguing future work.

SS and KA are supported by JSPS Overseas Research Fellowships. This work is supported by the Japan Society for the Promotion of Science (JSPS) KAKENHI Grant Nos. JP23K19050 (SS), JP20H05859 (MS) and JP23K03390 (MS). KA also acknowledges support from Fostering Joint International Research (B) under Contract No. 21KK0050. TO acknowledges support from the Ministry of Science and Technology of Taiwan under Grants No. MOST 111-2112-M-001-061- and NSTC 112-2112-M-001-034- and the Career Development Award, Academia Sinica (AS-CDA-108-M02) for the period of 2019-2023.

§ INTRINSIC ALIGNMENTS FROM VECTOR AND TENSOR MODES

In this Appendix, based on Ref. <cit.>, we solve the equation of motion of a matter particle in a local frame in the presence of the local tidal effect, and then derive the density fields induced by the cross-talk between the long- and short-wavelength modes. From the expression for the second-order density fields, we show the linear shape bias of the IA of galaxies from the vector and tensor modes. Our starting point is the expression for the tidal field induced by the long-wavelength vector and tensor modes using conformal Fermi normal coordinates <cit.>:

τ_ij(η,k_L) = - 1/(2a) ( a h'_ij(η,k_L) )' ,

where we work in the synchronous gauge.
It is useful to decompose the tidal tensor into a time-dependent part and an initial perturbation part:

τ_ij(η,k_L) = 𝒯_τ(η,k_L) ξ^ini_ij(k_L) ,

where ξ = σ and h for the vector and tensor modes, respectively. The function 𝒯_τ(η,k_L) is given by

𝒯_τ(η,k_L) = - k_L/(2a) ( a 𝒯^(V)(η,k_L) )' ,

for the vector mode, and

𝒯_τ(η,k_L) = - 1/(2a) ( a 𝒯^(T)'(η,k_L) )' ,

for the tensor mode. In the presence of the long-wavelength tidal tensor τ_ij, the equation of motion of a matter particle in the local frame becomes

x'' + ℋ x' = -∇_x ( ϕ_s + 1/2 τ_ij x^i x^j ) , ∇^2_x ϕ_s = 4π G a^2 ρ̅_m δ ,

where a prime denotes a derivative with respect to the conformal time η. To solve the equation of motion, we employ the Lagrangian perturbation formalism. The Lagrangian description relates the initial Lagrangian position of the fluid element q to the Eulerian position at conformal time η through the displacement field Ψ(η,q):

x(η,q) = q + Ψ(η,q) .

Substituting Eq. (<ref>) into Eq. (<ref>), the equation for Ψ at first order is given by

Ψ'' + ℋ Ψ' = -∇_q ( ϕ_s(q) + 1/2 τ_ij q^i q^j ) .

We split the displacement field into the long- and short-wavelength mode contributions: Ψ = Ψ^(s) + Ψ^(l). The evolution equation of each displacement field is given by

Ψ^(s)''_i + ℋ Ψ^(s)'_i = -∂ϕ_s(q)/∂q_i ,
Ψ^(l)''_i + ℋ Ψ^(l)'_i = - 𝒯_τ(η,k_L) ξ^ini_ia q_a .

Using the Poisson equation (<ref>), the solution of Eq. (<ref>) is given by

Ψ^(s)_i(η,q) = - D(η) ∂^-2_q (∂/∂q_i) δ^(1)(η_0,q) ,

where the quantity η_0 is the conformal time at the present time. The linear growth factor D(η) satisfies

D''(η) + ℋ D'(η) - 4π G a^2 ρ̅_m D(η) = 0 .

We adopt the normalization condition D(η_0) = 1. The solution of Eq. (<ref>) is given by

Ψ^(l)_i = -β(η,k_L) ξ^ini_ia q_a ,

where we define

β(η,k_L) = - ∫^η_0 dη'/a(η') ∫^η'_0 a(η'') 𝒯_τ(η'',k_L) dη'' .

Next, we solve the equation for Ψ by considering only the coupling between the long- and short-wavelength modes. We start by taking the divergence of Eq. (<ref>) with respect to q:

Ψ^(sl)''_a,a + ℋ Ψ^(sl)'_a,a = - ∇^2_x ϕ_s - Ψ^(s)_b,a τ_ab - Ψ^(l)_b,a ∂_x_b ∂_x_a ϕ_s ,

where a comma stands for the derivative with respect to the Lagrangian coordinate. We note that the first term on the right-hand side involves the coupling of short- and long-wavelength modes through the chain rule of spatial differentiation: ∂/∂x_i = (J^-1)_ji ∂/∂q_j, with the Jacobian matrix J_ij = ∂x_i/∂q_j = δ_ij + Ψ_i,j. Then, the second-order equation for Ψ^(sl) is given by

Ψ^(sl)''_a,a + ℋ Ψ^(sl)'_a,a - 4π G a^2 ρ̅_m Ψ^(sl)_a,a = 𝒯_τ(η,k_L) D(η) ∂^-2_q ( ∂^2 δ^(1)(η_0,q)/∂q_a ∂q_b ) ξ^ini_ab .

The solution of this equation is given by

Ψ^(sl)_a,a = D^(sl)(η,k_L) ∂^-2_q ( ∂^2 δ^(1)(η_0)/∂q_a ∂q_b ) ξ^ini_ab(k_L) ,

where the function D^(sl)(η,k_L) satisfies

D^(sl)'' + ℋ D^(sl)' - 4π G a^2 ρ̅_m D^(sl) = D(η) 𝒯_τ(η,k_L) .

From Eqs. (<ref>), (<ref>), and (<ref>), we obtain the Eulerian density field up to second order: δ(x) = δ^(s)(x) + δ^(sl)(x), where we define

δ^(s)(x) = -Ψ^(s)_a,a(x) = D(η) δ^(1)(η_0,x) ,
δ^(sl)(x) = Ψ^(s)_a,ab(x) Ψ^(l)_b(x) - Ψ^(sl)_a,a(x) + Ψ^(s)_a,a(x) Ψ^(l)_b,b(x) + Ψ^(s)_a,b(x) Ψ^(l)_b,a(x)
= ξ^ini_ab(k_L) [ β(η,k_L) x_a ∂_b + ( - D^(sl)(η,k_L)/D(η) + β(η,k_L) ) ∂^-2 ∂_a ∂_b ] δ^(1)(x) .

We use the same ansatz as in Ref. <cit.>, which assumes that the second term in the square brackets in Eq.
(<ref>) induces the IAs, since this term represents the growth of the density perturbation in a local region via the coupling between the long- and short-wavelength tidal fields, while the first term corresponds to the displacement induced by the long-wavelength tidal field, which should have no effect on local physics. We also assume that the alignment from the vector/tensor tidal fields has the same scaling as the second-order density induced by the scalar tidal fields. According to this ansatz, the expression for the intrinsic galaxy shape is given by

γ_ij = b_K(η,k_L) ξ^ini_ij , b_K(η,k_L) ≡ 7/4 ( - D^(sl)(η,k_L)/D(η) + β(η,k_L) ) b^scalar_K ,

where b^scalar_K is the scalar linear shape bias. The factor 7/4 comes from the conversion from the second-order density field to the galaxy intrinsic shape in the scalar mode case <cit.>.

§ ONE-LOOP CONTRIBUTIONS TO THE B-MODE POWER SPECTRUM

Here we give a brief explanation of what is assumed to compute the one-loop correction to the B-mode auto power spectrum. Recently, the perturbation theory of the IAs with effective-theory considerations has been formulated in both Eulerian and Lagrangian ways <cit.>. In this paper we employ the LPT-based calculation of the one-loop power spectrum developed in Ref. <cit.>. As the one-loop power spectrum involves the linear, quadratic, and cubic fields, we need to introduce up to the cubic shape bias parameters, which in general consist of one linear, three quadratic, and two cubic free parameters. However, through a comparison with N-body halo shapes, Ref. <cit.> showed that the values of the higher-order (Eulerian) shape bias parameters are well approximated by the coevolution prediction <cit.>. In other words, the halo shape field is well described by the Lagrangian tracer of the initial tidal field advected to its final position by the large-scale bulk flow. Hence the following model can be used in lieu of the full model for the one-loop power spectrum:

γ_ij(k) = ∫ d^3q γ^L_ij(q) e^{i k·(q+Ψ(q))} ,

with

γ^L_ij(q) = b^L_K K_ij(q) [1 + δ_g^L(q)] ,

where b^L_K is the Lagrangian linear shape bias and δ_g^L(q) is the Lagrangian galaxy density field. Note that, since the galaxy shapes are always observed with galaxies, they are naturally density-weighted quantities. In order to compute the one-loop correction, we can also assume the linear bias description for the galaxy density field, since the quadratic bias fields in the density in Eq. (<ref>) give rise to a reparametrization of the linear shape bias parameter. In the end, with these assumptions, the free bias parameters we have to include in the one-loop calculation are the linear shape and density bias parameters, b^L_K and b^L_1. The Lagrangian linear shape bias is the same as the Eulerian one, b^scalar_K = b^L_K, since the tidal field does not induce a volume distortion at first order, while the Lagrangian linear density bias is related to the Eulerian one as b^E_1 = b^L_1 + 1. Using these relations we can compute the one-loop correction to the shape power spectrum given the values of b^scalar_K and b^E_1.

§ FISHER FORECAST FOR THE VORTICITY VECTOR MODE AND PRIMORDIAL GWS

As we are interested in the detectability of the PMFs through observations of the galaxy shape, we have focused on the vector and tensor modes induced by PMFs in the main text. For reference purposes, this appendix provides the Fisher forecast based on the same analysis as done in Sec.
<ref>, but we consider other vector and tensor sources: the vorticity vector mode and primordial GWs. We define the Fisher matrix for a fixed spectral index n_V = n_T = 0 by

F(r_X,fid) = V/(2(2π)^2) ∫^k_max_k_min k^2 dk ∫^1_-1 dμ ∑_a=EE,BB ( ∂P_a/∂r_X · 1/(P_a + σ^2_γ/n_gal) |_r_X = r_X,fid )^2 ,

where X = V and T for the vorticity vector mode and the primordial GWs, respectively. The size of the expected error on the amplitude of the vector/tensor modes is given by σ(r_X,fid) = √(F^-1(r_X,fid)). Here we evaluate the minimum detectable value r_X,min by solving

σ(r_X,min) = r_X,min .

Table <ref> shows the results for the Euclid spectroscopic survey and the SKA HI galaxy survey. We see from this that the E-mode (B-mode) power spectrum can capture smaller r_T (r_V) than the B-mode (E-mode) one. This can happen in surveys where the covariance is dominated by the shot noise, such as Euclid and SKA. For the primordial GW case, ignoring the cosmic variance contribution to the covariance, we can analytically estimate the ratio of the Fisher matrices F^(T)_EE/F^(T)_BB as

F^(T)_EE/F^(T)_BB ≈ ∫^1_-1 dμ (P^(T)_EE)^2 / ∫^1_-1 dμ (P^(T)_BB)^2 = ( 1/16^2 ∫^1_-1 dμ (1+μ^2)^4 ) / ( 1/16 ∫^1_-1 dμ μ^4 ) = 83/63 ,

where we used ∫^1_-1 dμ (1+μ^2)^4 = 2656/315 and ∫^1_-1 dμ μ^4 = 2/5, yielding σ_EE(r_T) ≈ √(63/83) σ_BB(r_T) = 0.87 σ_BB(r_T). Similarly, in the vorticity vector mode case, we have σ_EE(r_V) ≈ √21 σ_BB(r_V) = 4.6 σ_BB(r_V). These values explain the results in Table <ref> very well. We note that the minimum detectable values given in Table <ref> are larger than the constraints derived in Ref. <cit.>, because our analysis assumes a spectroscopic survey, which generally has a smaller number of galaxies than the photometric surveys assumed in Ref. <cit.>.

In Fig. <ref>, we perform the same analysis as in Fig. <ref> but for the cases of the vorticity vector mode and primordial GWs. In analogy with the magnetic case in Fig. <ref>, as the shot noise decreases, the detectability of r_V/T from the E- and B-modes reaches a plateau because of the scalar-mode contamination in the covariance. In a noisy regime, 10^0 (Mpc/h)^3 ≲ σ^2_γ/n_gal, as the above analytic estimate indicates, the E-mode spectrum can capture smaller r_T than the B-mode one.

| http://arxiv.org/abs/2312.16316v1 | {
"authors": [
"Shohei Saga",
"Maresuke Shiraishi",
"Kazuyuki Akitsu",
"Teppei Okumura"
],
"categories": [
"astro-ph.CO",
"gr-qc"
],
"primary_category": "astro-ph.CO",
"published": "20231226194943",
"title": "Imprints of primordial magnetic fields on intrinsic alignments of galaxies"
} |
Generative AI, such as image generation models and large language models, stands to provide tremendous value to end-user programmers in creative and knowledge workflows. Current research methods struggle to engage end-users in a realistic conversation that balances the actually existing capabilities of generative AI with the open-ended nature of user workflows and the many opportunities for the application of this technology. In this work-in-progress paper, we introduce participatory prompting, a method for eliciting opportunities for generative AI in end-user workflows. The participatory prompting method combines a contextual inquiry and a researcher-mediated interaction with a generative model, which helps study participants interact with a generative model without having to develop prompting strategies of their own. We discuss the ongoing development of a study whose aim will be to identify end-user programming opportunities for generative AI in data analysis workflows.

§ INTRODUCTION AND MOTIVATION

Generative AI presents many opportunities for assistance and automation for end-users and end-user programmers. Our research team is interested in exploring how Large Language Model (LLM) assistance can be used in data-driven sensemaking <cit.> in spreadsheets, to identify key areas of strength and weakness in LLM assistance and identify opportunities for LLM assistance that address specific parts of the overall workflow. End-user data analysis workflows are complex and range over many steps, including problem conceptualization, identifying relevant datasets, data cleaning and structuring, developing an analysis strategy, learning how to use relevant features, open-ended exploration, and presenting results.

The effect of generative AI on knowledge work has been described as a shift "from material production to critical integration" <cit.>. Critical integration consists of "deciding where in the workflow to use the productive power of AI, how to program it correctly [...], and how to process its output in order to incorporate it". Sarkar builds on the theory of double-loop learning in organizations <cit.>, observing that there is both an inner-loop aspect to applying AI in knowledge workflows (incorporating AI assistance in various steps of existing workflows) as well as an outer-loop aspect (reconfiguring knowledge workflows to take better advantage of AI, and developing new ones which are only possible with AI). For example, in data-driven sensemaking, critical integration in the inner loop might consist of finding applications for AI in data visualization, or data cleaning. Critical integration in the outer loop might consist of applying AI towards identifying a suitable analysis strategy or automating large portions of the sensemaking workflow (e.g., in the spirit of the "automatic statistician" <cit.>) and developing new tools for human overseers focusing on auditing and quality control.

A key question for researchers and designers at this point in time is how to study the needs of users involved in such workflows. We are concerned with the first phase of the design "double diamond"; we first need to design the right thing, and only later can we attend to getting the design of the thing right <cit.>. As generative AI technology is new and continuously evolving, its use in society is limited and uneven.
It may not be possible, for example, to simply observe participants working with generative AI, or interview them about their work practices with generative AI, if generative AI is not widely adopted within their workflows (which is the case for the vast majority of knowledge work at the time of writing). For example, code completion in code editors for professional software developers has been an early commercialization of generative AI, which gives researchers a wide pool of experienced users with mature behaviours to study <cit.>. Other work has discussed code assistants for data analysis within computational notebooks <cit.>. Unfortunately, for our end-user scenario of interest (data analysis workflows in spreadsheets), this is not yet the case.

Furthermore, it is not ideal for researchers to develop high-fidelity experiences for generative AI as a way of testing its applicability to different workflows. It is time-consuming and expensive. Moreover, it is also limiting; out of the wide variety of potential interventions at the inner and outer loops, only a very small number can be feasibly explored using a functional prototype.

The traditional solution to this has been to use lower-fidelity methods such as Wizard-of-Oz <cit.>, paper prototyping <cit.> and champagne prototyping <cit.>, which allow researchers to rapidly simulate a wide variety of user experiences with significantly lower engineering costs, while also enabling interaction with experiences that may be extremely challenging or impossible to build due to technical limitations. However, these methods have limitations as well; for a Wizard-of-Oz study to have direct implications for design, the Wizard protocol must correspond to the actually existing capabilities of the system(s) that are eventually built. In particular, the mythologizing of AI's capabilities by the media, academia, and industry has led to a warped public conception of what AI can do and how it works <cit.>. Thus Siddharth et al. <cit.> urge us to focus not on this collective mirage of what AI might be, but on "actually existing AI (AEAI)". There is a real risk that participant responses in low-fidelity studies will draw from their own biased and inflated expectations of AI capabilities to fill in the "gaps" left by the incomplete nature of the prototype. A poorly designed Wizard protocol, which allows too much improvisational deviation from a script, can exacerbate this. This problem is even greater in generic "need-finding" interviews where no prototypes are used.

There is thus a need for a research method that combines the advantages of low-fidelity methods such as Wizard-of-Oz, allowing researchers to rapidly explore a wide range of potential interactions at both the inner and outer loops of a knowledge workflow, while still grounding conversations with participants in the capabilities of actually existing AI. In response, we have been developing a method called participatory prompting.

The participatory prompting method takes the form of a researcher-mediated interaction between a study participant and a working generative AI system. During the session, the researcher guides the participant through a workflow, seeking to test the potential applications for AI at each step. The researcher plays multiple roles in facilitating this interaction. Most importantly, they restructure user requests according to pre-identified prompting strategies, and help the user continue the interaction and recover from errors.
The semi-structured interview is grounded in a specific real problem of interest to the user, drawing on principles of contextual inquiry <cit.>. The name of the method is inspired by participatory design <cit.>, and we hope that in the spirit of participatory design, the method of participatory prompting contributes to the design of AI systems that empower and enfranchise users with their involvement from the outset. The next section describes the method in detail.

§ THE PARTICIPATORY PROMPTING METHOD

§.§ Materials required

Choice of system. The participatory prompting method uses a real, functional generative AI system as representative of the functionality of generative AI in general. It is therefore important to choose the system carefully and consider multiple alternatives for their suitability to the particular study. We compared the following four systems for their suitability for use in our study: OpenAI playground, OpenAI ChatGPT, Google Bard, and Microsoft . We compared them by entering some example queries that a user might have into each system, and attempting to elicit guidance at multiple stages of the data analysis process, as we were intending to do during the study. We then discussed and evaluated the comparative quality of the responses and how a participant might react to each response, with a view to choosing the system which would help produce the most insightful interactions during the study.

It is worth noting that of the four systems we tested, the latter three (ChatGPT, Bard, and ) are consumer-facing products: they are built upon one or more large language models and consist of UI elements and other modules and heuristics which come together to create a coherent experience for the non-expert consumer. They can be considered to be significantly "opinionated" in a number of ways. One obvious user-facing manifestation of this is in the so-called "guardrails" which kick in whenever the conversation topic approaches an area deemed inappropriate by the system designers (such as violent or sexual content). Another example of opinionation is the turn limits imposed by : at the time of writing, a conversation with cannot exceed 15 turns (after 15 turns, the conversation is erased, and a new conversation is started). In contrast, the OpenAI playground is intended for developers to interactively test different models, and as such allows for the choice between multiple individual LLMs, and control over parameters such as temperature (most of our testing on the OpenAI playground used the default temperature and a GPT-3 model which was the state of the art at the time of testing). While there are still some heuristics and guardrails in place, using the OpenAI playground is much closer to getting the "raw" output of a language model.

For many studies, using a highly opinionated experience may not be ideal; the heuristics and modules used in these systems are proprietary, and researcher control and visibility into parameters such as temperature are poor. For our purposes, however, this was not a dealbreaker. For our study we have chosen to use for the primary reason that it is the only system (at the time of writing) that is designed to seek and include information from the Web as part of its responses.
In our testing, we found that for many steps of the data analysis journey (ideating potential analysis paths, identifying relevant datasets, learning to use relevant features), the ability to report information from the Web resulted in much better and more actionable suggestions for users. Due to the complexity of deploying these systems at scale, during our comparative evaluation we noticed many outages, where the system was overloaded and did not respond to queries and/or displayed an error message. For a user study to go smoothly, a system that is stable and consistent is key. While we did not quantify the outages we experienced, our informal assessments of a particular system's reliability and uptime did influence our final decision.

Prompt strategies. Identifying performant and consistent strategies for prompting LLMs is a well-documented challenge. In consumer-facing products, the user query is rarely sent directly to an LLM; instead it is processed and augmented with additional instructions and prompts that have been determined by the system developers. Thus, when non-experts directly interact with a "raw" LLM (e.g., via tools such as OpenAI playground), or with a generic chat application that is not tuned towards particular knowledge workflows, they may not be able to develop suitable prompting strategies to elicit good performance from the model. This is a key reason that our method involves researcher-mediated interaction, and why we do not simply study how end-users interact directly with the model.

A key strength of the participatory prompting method is that researchers familiar with the design of prompting strategies can prepare these ahead of time. For our study, a group of researchers collaboratively experimented with different prompting strategies over a period of several weeks, documenting screenshots of their interactions and successful prompts in a shared document. A provisional list of prompting strategies which we developed through this process is given in Appendix <ref>. For example, through this process we identified that the system:

* did not consistently use data sources from the web even if they were available, and we could bias it towards doing so by including a phrase in the prompt such as "use an online data source", "based on publicly available information", "with data from the web", and "use information from the web".
* did not consistently offer citations for sources, but could be biased to do so by including a phrase in the prompt such as "prove your sources are real" or "cite your sources".
* often provided multiple suggestions for types of data analysis the user could conduct, but the answers did not support an end-user's decision for what to do next. In this case we found that adding "justify your answer" or "justify your criterion" improved the actionability of the model's responses.
* is capable of rendering tables inline within the chat, which is very helpful for exploring ideas related to spreadsheet-based data analysis, but it does not consistently do so. We found that we could bias it towards generating tables by specifying "with an example", "make an example spreadsheet", or "make an example table".

Our method for identifying prompts is largely a pragmatic craft practice, based on trial-and-error and the intuitions of researchers.
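To make the mediation step concrete, the following is a minimal sketch, in Python, of how a researcher-mediator could template a participant's raw query with the pre-identified strategy phrases listed above. It is illustrative only; the study's actual mediation was performed manually by the researcher, and the example query and strategy labels are our own hypothetical choices.

STRATEGY_PHRASES = {
    # Phrases taken from the provisional list above; the keys are our own labels.
    "web_data": "Use an online data source.",
    "citations": "Cite your sources.",
    "justify": "Justify your answer.",
    "table": "Make an example table.",
}

def mediate(query: str, strategies: list[str]) -> str:
    """Append the selected strategy phrases to the participant's raw query."""
    suffix = " ".join(STRATEGY_PHRASES[s] for s in strategies)
    return f"{query.strip()} {suffix}".strip()

# Hypothetical participant query about a data-driven decision:
print(mediate("Which neighbourhoods should I compare when buying a house?",
              ["web_data", "citations", "table"]))

In practice, the experimenter also chooses which strategies to apply on a per-turn basis, depending on what kind of output (tables, citations, justifications) the participant expects.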
Due to the many sources of variability in LLM output, as well as variability between researchers' experience and the working examples they choose for testing different prompting strategies, our resulting prompts are subjective and difficult to reproduce. Another team, or the same team choosing different working examples, or a different model, may well have developed a different set of prompts, which will have significant downstream effects on the user study. Improving the consistency and systematicity of this step is a major challenge for user research with generative AI, and indeed many libraries, toolkits, and even prompt marketplaces have been created to assist in this endeavour.

Demographics and generative AI experience. Participants will complete a standard demographics questionnaire which includes questions about spreadsheet experience, formula experience, and programming experience <cit.>. In future participatory prompting studies, this can be replaced with another demographics questionnaire that gathers information relevant to those studies instead. Based on the model of other questions in that questionnaire, we also developed a simple questionnaire item for assessing prior experience with generative AI, as follows: "Which of the following BEST describes your experience with generative AI tools such as ChatGPT, DALL-E, Stable Diffusion, Midjourney, Google Bard, ?" In response, participants choose from the following options:

* Never heard of them
* Heard of them but haven't tried any
* Casually tried one or more
* Occasionally use one or more
* Regularly use one or more

As with other studies which use the aforementioned spreadsheet experience questionnaire, this item can be used in one of two ways: first, it can be used as part of the qualitative interpretation of participant interview data, to add context to their responses. Second, it can be used to group participants into rough categories of high and low prior experience (e.g., response levels 1-3 can be considered "low" experience and 4-5 can be considered "high" experience) for studying quantitative interactions between experience and any dependent variables gathered during the study (e.g., cognitive load <cit.>).

Unlike spreadsheet experience or programming experience, the landscape of end-user experience with generative AI is shifting rapidly. The specific wording of this question and its response categories are thus likely to require periodic revision and updates.

§.§ Main interview activity

The main phase of the participatory prompting study takes the form of a semi-structured interview run concurrently with a researcher-mediated "conversation" between the participant and the model. The interview consists of a number of "turns" consisting of 5 steps. (1) A turn begins with the participant expressing a query (e.g., asking for assistance, posing a question, asking for clarification). (2) Next, the researcher takes the user query, modifies and augments it according to the previously identified prompting strategies, and sends it to the model. (3) The participant reads the model's response. (4) Next, the researcher asks the participant to reflect on the response.
(5) Finally, the researcher guides the participant in continuing the conversation and choosing the next query. We wish to explore the possibilities for LLM assistance in the following scenarios:

* Problem conceptualization, decomposition, identifying parts of the problem that could be tackled in a spreadsheet
* Identifying relevant datasets
* Figuring out how to clean and structure data
* Developing an analytical strategy, involving applying multiple features in sequence
* Learning how to use relevant features
* Exploration of alternative analyses
* Presenting and communicating results

The problem chosen is ideally seeded by the participant's own problem domain. This can be elicited using a question such as: "Can you share an example of a decision you had to make recently? The decision should be reasonably complex, requiring an evaluation of multiple criteria or sources." If elicited ahead of the study (e.g., in a pre-study communication, or as part of the initial demographics questionnaire), researchers could prepare a spreadsheet and problem that is familiar to the participant's own experience, or we can have the participant bring a shareable spreadsheet within their domain to work on. Alternatively, a suitable problem can be determined at the start of the interview. In practice we have found it is better to ask participants to think of such problems ahead of time, so that more time can be spent on the interactive portion of the interview.

The problems users bring can further be divided into the following types:

* A well-established spreadsheet workflow where the user is already using spreadsheets.
* An open-ended problem where the user has not tried to apply spreadsheets before.

From a research perspective, both types of seed problem have advantages, as they correspond respectively to the inner and outer loop of the double loop of AI assistance opportunities. In our study context, we are not interested in one or the other type in particular, or in comparing between the two, so we will not aim to control the distribution of types. However, future studies may be interested mainly in inner loop opportunities, or outer loop opportunities, or a direct comparison between them. In such cases care must be taken to ensure the seed problems used with participants are either predominantly of the type of concern, or roughly evenly distributed between the types to facilitate comparison.

With a suitable seed problem, we walk through the participant's problem step by step, entering their requests into the LLM system (using pre-identified prompt strategies) and relaying the system's responses back to the user. We find that it is useful for participants themselves to also view the screen on which the LLM interaction is taking place, as the study can progress faster when participants can read the output directly themselves. The user is then asked questions at each step such as:

* Is this useful? Why or why not?
* Are you confused or surprised? Why or why not?
* Does it contain anything that is factually incorrect or misleading?

To advance the conversation to the next turn, the experimenter may prompt the participant with a question such as:

* Does this give you any further ideas?
* What would you like to know next, to continue your analysis?
* What information is missing?

Participants may also leverage suggested follow-up questions provided by the model as inspiration. However, since these suggestions may not adhere to experimenter prompting strategies, the suggestions may need intervention by the experimenter to align them.
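As a compact restatement of the five-step turn described above, the following sketch expresses one turn as a loop, reusing the mediate helper sketched earlier. The ask_model callable and the console-style interaction are hypothetical stand-ins for the real chat system and the spoken interview, not an actual study tool.

REFLECTION_QUESTIONS = [
    "Is this useful? Why or why not?",
    "Are you confused or surprised? Why or why not?",
    "Does it contain anything that is factually incorrect or misleading?",
]

def run_turn(participant_query: str, strategies: list[str], ask_model) -> str:
    # (1) the participant expresses a query; (2) the researcher augments it
    # with pre-identified strategy phrases and sends it to the model.
    response = ask_model(mediate(participant_query, strategies))
    print(response)                       # (3) the participant reads the response
    for question in REFLECTION_QUESTIONS:
        input(question + " ")             # (4) reflection, recorded by the researcher
    return input("What would you like to know next? ")  # (5) advancement

A next query returned by run_turn seeds the following turn, so a whole session is a short chain of such calls interleaved with the semi-structured interview questions.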
§.§.§ Post-activity interview

After the turn-taking phase of the study, the participant is interviewed and asked to reflect on the experience. For instance, they could be asked how such a tool would fit into their workflow, what features they feel would improve the experience, and what were the strengths and weaknesses of the new modes of working enabled with generative AI. This phase may additionally involve eliciting participants' responses to mock-ups of potential interface designs in a design probe. Importantly, because the participant has just had the experience of interacting with an actually existing AI system, they are more likely to have an accurate understanding of what a different interface design might actually achieve for their workflow in terms of usability. The participant's grounding in actual AI capabilities improves the validity of research insights over simply interviewing participants about their response to mock-ups.

If the participant reported that they had experience with generative AI, experimenters could ask the participant to compare their previous workflows with what they experienced with participatory prompting. This might include differences in how they would solve problems similar or identical to the ones they saw in the study, or how the participant might change their own prompting strategies after the study. The structure of, and the questions asked during, the post-activity interview depend on the aims of the participatory prompting study. In our situation, we are interested in how generative AI can help non-expert end-users in data analysis workflows, particularly within spreadsheets, and so we selected our questions accordingly. Our full final script (incorporating revisions made after a pilot study detailed in Section <ref>) is given in Appendix <ref>.

§.§ Pilot

The first version of this study protocol was piloted on a convenience sample of two participants who are familiar with spreadsheets and who use spreadsheets for their work. Each pilot took approximately 1 hour, as intended. The pilots resulted in the following observations and adaptations:

It can be difficult for participants to settle on a suitable seed problem that is complex enough that it requires generative AI (as opposed to a traditional web search) to solve, but simple enough that the required context can be described to the system using a few sentences or a paragraph at most. We introduced more questions in the problem elicitation phase of the study that the experimenter could use to help the participant (e.g., "Can you share an example of a problem that required you to develop a new workflow?"). However, our recommendation is that if possible, the participant should be asked to think of a suitable problem ahead of the scheduled study session, to maximize the time available to engage with the problem in the turn-taking phase. We also noticed that participants were not familiar with some jargon in our questions (e.g., "data-driven decision-making") and we modified our questions to elaborate and clarify these terms.

We noticed that the participants were able to go through 5-6 turns in the time allotted. This may seem like a small number of turns, but it nonetheless produced a wide range of qualitative insights.
The turn-taking phase can be elongated in future studies if this is felt to be necessary, study duration targets and participant fatigue notwithstanding. The most time-consuming aspect of each turn is in the reflection step, where the participant is asked to reflect on the system's response, and the advancement step, where the researcher guides the participant to decide what to do next. This observation helped us decide on setting to "creative" mode for the study. has a single user-facing setting. The user can choose between creative mode (described by the UI as "original and imaginative"), balanced mode ("informative and friendly"), and precise mode ("concise and straightforward"). We initially used "precise" mode because we believed that it would be the least likely to hallucinate misinformation, and because generating short responses would allow the user to read through them more quickly and therefore enable more turns overall. Since the number of turns is largely dominated by the time spent on the reflection and advancement phases, the small time advantage gained in precise mode by having to read less text did not accumulate to allow an increased number of turns overall. Moreover, in practice we observed that "creative" mode was no more likely to generate hallucinations, and since it was far more verbose, often emitting several paragraphs in response, it improved participants' reflections (by giving them more to reflect on) and the ease with which a suitable next query was selected.

We noticed that if the model's response is completely generic or not useful, especially at an early stage of the conversation, our pilot participants were not motivated to continue the interaction. In response to this, we introduced a number of different advancement-oriented questions the researcher could use to help suggest a way forward, such as: "What would you change in your query to make this more useful? Would you ask this a different way?".

We noticed that participants' preconceived notions about the system's capabilities were heavily influenced by their prior experience with search engines, and that they initially thought to use short queries of the kind used with search engines. This is unsurprising given 's positioning within a more traditional search interface. Previous studies have also noted that participants' use of generative AI systems is influenced by their experience with search engines <cit.>. However, such short queries cannot adequately capture the user's context and intent. We introduced a guidance statement in the protocol whereby the experimenter explains that the generative AI system permits longer and more conversational interaction.

We also introduced a couple of strategies for the experimenter to gently suggest a way to continue the conversation, if the participant was having difficulties ideating a next step. These include the experimenter directly suggesting an action (e.g. "Let's see what happens if we try <some query>") but also suggesting an action as a baseline to help the participant conceive a contrastive alternative (e.g., "I propose to continue by <some query>, but what would you have done instead?"). However, it is important that the experimenter's suggestions do not bias or significantly change the course of the interaction, and serve mainly to unblock the participant. Much as with regular interviewing, the ability to elicit rich responses from the participant without introducing bias depends on the skill of the interviewer.
To help guard against such bias, we recommend in the protocol that these experimenter-led strategies should not be employed in consecutive turns (i.e., if the experimenter led the query in the previous turn, they should not do so in the current turn).

There were also issues with participant expectations of model output: even after working with the experimenter to craft a prompt, the participant did not know that they would need to specifically request images. This led to needing to re-prompt the model to obtain the desired result. To avoid multiple rounds of re-prompting, which can take up study time, experimenters should ask the participant about the expected results, including data types, to close this gap. Understanding what types of data could be useful for the participant, and explicitly requesting them in the prompt, is a necessary strategy.

Participants noticed that content the model had previously shown in the output could be missing in successive outputs. When the motivation of the participant was to build upon previous results, they wanted to make sure the data was consistent throughout the conversation with . Therefore, prompts crafted for the study need to include or refer to previous output in an attempt to have the model consider this data for further prompts.

One participant was suspicious of the data the model produced as a column in a table and wanted to verify this data by going to the websites the model referenced. This is a third workflow, separate from prompting and spreadsheeting, that requires a tangent into navigating to the source and verifying the data. This is a realistic strategy for users of chat-based LLMs, but it is removed from prompting interactions. While this is not explicitly part of the protocol, we will allow participants the freedom to explore and verify the results returned by the model if desired.

In the post-activity interview, we noticed participants speculating on multiple occasions that "if it could do <some action>, that would be helpful". Since these types of questions can actually be put to the system to test whether it can do it, we amended the protocol to permit the researcher to spot-test such participant speculations and get feedback from the participant. We also introduced the following question to specifically elicit perceived barriers to sensemaking with AI assistance: "What barriers or frustrations did you have with this experience that prevented you from exploring the question to your satisfaction?" Our full revised script after the pilot is given in Appendix <ref>.

§.§ Effectiveness of protocol during pilot

While we do not claim these are usable findings, since the pilot involved only two participants (n=2) and the protocol was adapted live during these pilot runs, we believe there is evidence that this protocol was effective at revealing valuable insights from participants. These include:

* After a few prompts, a participant noted they would start a spreadsheet to maintain the data they were receiving from the model, but thought it would be difficult to switch between the spreadsheet and further prompting.
Getting the model to generate a table to help the participant visualize a future spreadsheet was helpful in this situation.
* A participant was unsure the model considered the entire context of the prompt it was given, even when this context was in the prompt, and felt there was no way to verify this with the model.
* Upon noticing a result of potentially summarized or hallucinated data given by the model, a participant noted that they could not trust the results and had to manually search to verify the data in the table. They said they felt this severely limited the benefits of .

We believe this protocol is an improved adaptation of the traditional Wizard-of-Oz approach for studies of generative AI, since participants interact with a real AI model, while the implementation costs were extremely low.

§ DISCUSSION AND LIMITATIONS

The participatory prompting approach detailed in this paper raises the question of whether some activities carried out by the human experimenter could be supported with AI. One question that remains to be answered is what affordances human prompt strategies have over an AI that is focused on helping the participant best interact with the generative AI. However, such a protocol might increase the turn time or number of turns taken, as the participant has to interact with this new AI prompting assistant while also performing their sensemaking task. Human-driven participatory prompting also allows the experimenter to ask user experience questions and inquire about participant motivations, which can provide valuable insights for researchers but may not be valuable for the actual prompting and might not be asked by an AI assistant.

One clear extension of this protocol is for the experimenter to also draw upon the library of existing AI plugins and recommend useful experiences that help the participant solve their problem. This could be directly compared to the effectiveness of the existing plugin experience where the model chooses which plugin to use, assuming the user has the installed plugins.

One limitation of this protocol is that because the experimenters will take a turn at helping craft prompts with the participant, results following this protocol may not give a clear understanding of where and when a user's unsupported prompting would have had issues. We attempt to address this limitation by having the participant reflect on how they would have modified a prompt (see Appendix <ref>). Similarly, because we are interested in how users might perform sensemaking in spreadsheets by leveraging AI, we have created an environment and crafted prompts that emphasize the use of spreadsheets and organized data. This might mean that the choice to move from prompting to spreadsheeting may not be as organic as it would be if the user interacted with the model without experimenter assistance. There is a multitude of data analysis experiences that might also be useful for users (e.g., OpenAI's Code Interpreter, which performs data analysis tasks with Python code <cit.>). The freedom to choose from available interactions would provide useful insights for user needs for end-user data analysis and sensemaking tasks.
Some interactions were limited by the inability to re-trigger generation of a response with respect to a specific query within the conversation in (re-generation and editing a query is possible, for instance, within ChatGPT and the OpenAI playground; this enables a kind of flexibility and fluidity that is akin to being able to independently edit and run different code cells out-of-order in a Jupyter notebook). For instance, the participant might change their mind about a query, or the system might stop generating text partway through a response, which worried one participant about continuing to prompt the model. In the only option is to submit a follow-up query within the same conversation, which will include the undesired or incomplete previous queries and responses as part of the "context". The alternative is to start a fresh conversation and then laboriously "replay" the conversation, building up the same conversational state (or more likely, a similar state, since the system's responses are nondeterministic) through the same series of prompts until you arrive at the point in the conversation at which you wish to try a different query. Neither of these options is practical or predictable in a time-limited study.

§ CONCLUSION

In this work-in-progress paper we have presented the ongoing development of participatory prompting: a lightweight user research method for eliciting opportunities for AI assistance in knowledge workflows. It uses a real, functional generative AI system, thus improving on traditional Wizard-of-Oz or paper prototyping, where the user interaction can become unmoored from the technical reality of these systems. On the other hand, it allows researchers to use an "off-the-shelf" AI model with no additional engineering costs for fine-tuning, customization, or UI development, enabling rapid and broad-ranging testing of user experiences. We reported a pilot study (n=2) in which we tested the participatory prompting protocol. The pilots have resulted in improvements to the protocol, changes to the script, and reflections on how to get the most insight out of a participatory prompting session. The pilots have validated the feasibility of the protocol as a method for understanding the user experience of generative AI in knowledge workflows. In future work, we are planning to run a full-scale participatory prompting study to elicit opportunities for AI assistance in the data analysis workflows of end-user programmers in spreadsheets.

§ STUDY SCRIPT

§.§ Materials/activities pre-interview

Ask to prepare spreadsheet / reflect on data-driven workflows.

§.§ [5 minutes] Opening

Introductions and pleasantries, consent form, demographics form.

§.§ [10 minutes] Discussion of current data decision practices.

* Can you briefly describe your role?
* Can you describe, with examples, what kinds of data-driven decision making you do as part of your role?
* Can you describe, with examples, what tools you use?
* Can you describe, with examples, how you approach an unfamiliar data-driven decision making problem? An unfamiliar problem where you had to make a decision based on some data. This could be tabular data, lists, or text, etc.
* Can you describe an unfamiliar data decision problem, potentially fictional, you may encounter in the future?

If this produces a satisfactory scenario, proceed to turn taking, else ask: Can you share an example of a problem that required you to develop a new workflow?

§.§ [30 minutes] Participatory prompting, turn taking

Per turn:

* Is this useful?
Why or why not?* Are you confused, surprised, or indirectly inspired? To progress, choose one or more of: * What would you like to know next? What else would you need to know to follow these suggestions?* What would you change in your query to make this more useful? Would you ask this a different way?* (Experimenter driven, at most once in a row) Let’s see what happens if we try (X). Alternatively: I propose to continue by X, what would you have done (e.g., continued by Y, different task, abandon tool)?* (If issue with result) I see that there is an issue here with (X), if we do this (new prompt) we can get that data back for you (if participant wants this, continue, else 1st question).* (If participant is stuck, only thinking in terms of “classical” search engines) Imagine you’re talking to a colleague, and bouncing ideas off them.§.§ [15 minutes] Post activity interviewHow would a tool like this fit or not fit into your workflow? If the participant says something like “If it could do X that would be helpful”, try it out, get feedback from the participant, but keep time in mind.* What benefits does this hybrid spreadsheet-chat workflow provide over your existing workflow?* When you were surprised/inspired by X (from turn taking), what features/capabilities would be useful in exploring this inspiration further?* What features do you believe would increase the frequency and effectiveness of these inspiring results/moments (e.g., visualizations, videos, suggested prompts)?* How do you audit data/decisions now and how would that change with these AI-powered features?* How would your decision making workflow change with a tool like this? * What barriers or frustrations did you have with this experience that prevented you from exploring the question to your satisfaction?* What are the advantages or disadvantages of using a chat-based interface? § PRE-IDENTIFIED PROMPTSThis section lists prompts that we have determined through trial and error for use during the study. * Problem conceptualization, decomposition, identifying parts of the problem that could be tackled in a spreadsheet * <Description of user problem>. Explain how to use a spreadsheet for this with an example. * Explain a different way to use a spreadsheet for this with an example. * I am trying to make a data-driven decision about <X>. Is this a good problem to use data tools such as spreadsheets to solve? Explain why or why not. What sub-problems or related problems are good candidates for spreadsheet solutions? Justify your answer. * Identifying relevant datasets * What data is relevant to this problem. List sources. * Use an online data source to add a useful column to this table. Prove your sources are real. * Add a column to the table containing a score representing <X>. Invent a criterion for this score based on publicly available information. Justify your criterion. * Add more rows and columns to the table based on information you consider relevant to the decision of <X>. * Add columns to the table with data from the web such as <X>. Cite your sources. * Use information from the web to populate the spreadsheet with more accurate figures. Cite your sources. * Make an example spreadsheet according to your suggestions above. Use information from the web to populate the spreadsheet with accurate information. Cite your sources. * Figuring out how to clean and structure data * Explain how to put this data in a spreadsheet with an example. 
* Developing an analytical strategy, involving application of multiple features in multiple steps * Explain how to <user problem> in Excel with steps. * Explain another way to <user problem> in Excel with steps. * It is not possible to <suggestion>. Explain an alternative method with steps. * What spreadsheet features can I use, such as charts, formulas, conditional formatting, pivot tables, etc. Show examples. * <Model suggestion> Explain how to do this with an example. * Learning how to use relevant features * Explain how to use <feature> to solve this problem in Excel with an example. * Exploration of alternative analyses * Presenting and communicating resultsOthers (non-categorised) * Make a spreadsheet example * <Model mistake> is not correct. Provide an alternative answer and prove that your answer is correct.* Sometimes will refuse to make a spreadsheet. Try asking for a `table' instead. Or ask repeatedly. | http://arxiv.org/abs/2312.16633v1 | {
"authors": [
"Advait Sarkar",
"Ian Drosos",
"Rob Deline",
"Andrew D. Gordon",
"Carina Negreanu",
"Sean Rintel",
"Jack Williams",
"Benjamin Zorn"
],
"categories": [
"cs.HC"
],
"primary_category": "cs.HC",
"published": "20231227163722",
"title": "Participatory prompting: a user-centric research method for eliciting AI assistance opportunities in knowledge workflows"
} |
Camera Calibration for the Surround-View System: A Benchmark and Dataset

Leidong [email protected], Chunyu Lin* [email protected], Shujun [email protected], Shangrong [email protected], Yao [email protected]
These authors contributed equally to this work.
Institute of Information Science, Beijing Jiaotong University, Beijing Key Laboratory of Advanced Information Science and Network, Beijing, 100044, China

Surround-view system (SVS) is widely used in the Advanced Driver Assistance System (ADAS). SVS uses four fisheye lenses to monitor real-time scenes around the vehicle. However, accurate intrinsic and extrinsic parameter estimation is required for the proper functioning of the system. At present, intrinsic calibration can be pipelined by utilizing the checkerboard algorithm, while extrinsic calibration is still immature. Therefore, we propose a specific calibration pipeline to estimate the extrinsic parameters robustly. This scheme takes a driving sequence from four cameras as input. It first utilizes lane lines to roughly estimate each camera pose. Considering the differences in environmental conditions seen by each camera, we separately select strategies from two methods to accurately estimate the extrinsic parameters. To achieve accurate estimates for both the front and rear cameras, we propose a method that mutually iterates line detection and pose estimation. As for the bilateral cameras, we iteratively adjust the camera pose and position by minimizing texture and edge errors between the ground projections of adjacent cameras. After estimating the extrinsic parameters, the surround-view image can be synthesized by homography-based transformation. The proposed pipeline can robustly estimate the extrinsic parameters of the four SVS cameras in real driving environments. In addition, to evaluate the proposed scheme, we build a surround-view fisheye dataset, which contains 40 videos with 32,000 frames, acquired from different real traffic scenarios. All the frames in each video are manually labeled with lane annotations, together with the GT extrinsic parameters. Moreover, this surround-view dataset could be used by other researchers to evaluate their own methods. The dataset will be available soon.

§ INTRODUCTION

Surround-view is increasingly popular in advanced driver assistance systems (ADAS) <cit.><cit.>. A four-camera SVS is shown in Fig. 1. The surround-view generates a 360-degree image around the vehicle, providing the driver with a comprehensive view of the environment without any blind spots. In addition, the surround-view system is widely used in various automatic driving computer vision tasks, including traffic sign recognition and parking space detection <cit.><cit.><cit.>. Typically, the onboard cameras are installed at the front, rear, left, and right sides of the vehicle, near the license plate. The images captured by onboard cameras are then used to generate auxiliary views of the vehicle environment. Achieving seamless stitching of all captured images requires accurate calibration of the multi-camera system. Inaccurate calibration parameters can lead to a false perception of surroundings, which is dangerous for vehicle control. While intrinsic calibration techniques are mature, extrinsic parameters vary due to tiny camera motions.
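Since the extrinsic parameters enter the ground-plane homography directly, even small drifts in pose distort the synthesized bird's-eye view. As a hedged illustration of the homography-based transformation mentioned above, the following Python sketch builds the ground projection for one camera; the intrinsics, pose, and scale values are invented for the example and are not the calibration of any real SVS.

import numpy as np
import cv2

def ground_homography(K, R, t, metres_per_pixel=0.02, origin=(10.0, 10.0)):
    """Map a bird's-eye-view pixel (u, v) to an image pixel, assuming the
    ground is the plane z = 0: for such points the pinhole projection
    K [R | t] reduces to the 3x3 homography K [r1 r2 t]."""
    S = np.array([[metres_per_pixel, 0.0, -origin[0]],
                  [0.0, metres_per_pixel, -origin[1]],
                  [0.0, 0.0, 1.0]])                    # BEV pixels -> ground metres
    P = np.asarray(K) @ np.column_stack((R[:, 0], R[:, 1], t))  # ground -> image
    return P @ S

K = np.array([[400.0, 0.0, 640.0],
              [0.0, 400.0, 400.0],
              [0.0, 0.0, 1.0]])                        # assumed intrinsics
R, _ = cv2.Rodrigues(np.array([[1.2], [0.0], [0.0]]))  # assumed camera rotation
t = np.array([0.0, 0.0, 1.0])                          # assumed translation (~1 m)

H = ground_homography(K, R, t)
# Given an already undistorted frame `img` (fisheye frames must first be
# undistorted, e.g. with cv2.fisheye.initUndistortRectifyMap), the bird's-eye
# view is: bev = cv2.warpPerspective(img, np.linalg.inv(H), (1000, 1000)).

Stitching the surround view then amounts to warping all four cameras into the same ground frame, which is why errors in R and t appear as misaligned lane lines and textures at the image seams.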
As the vehicle moves, the cameras slowly accumulate extrinsic changes from vibrations such as engine vibration, door opening and closing, strong wind, and road bumps. The pose and position of each onboard camera therefore need to be re-estimated to maintain stitching performance. In most commercial solutions, drivers have to rely on professional workshops or workers for extrinsic calibration, which is time-consuming and labor-intensive. Manufacturers thus demand effective extrinsic calibration without human intervention. In recent years, several self-calibration schemes for various realistic scenarios have been proposed <cit.><cit.><cit.><cit.>. However, datasets of traffic scenarios with surround-view imagery remain insufficient. To address this research gap, we collect a new dataset consisting of traffic scenes with lane lines and propose a self-calibration pipeline. In summary, our contributions consist of three aspects:

1) An extrinsic calibration scheme that utilizes lane lines and ground texture. First, our method detects and filters lane lines near the vehicle. Based on geometric constraints, we make a rough estimate of each camera pose using the direction of the lane lines. Second, we align the ground texture of adjacent cameras to achieve accurate extrinsic estimation.

2) Our method corrects the pose by mutually iterating lane-line projection constraints and lane re-detection. Rough lane detection results in inaccurate pose estimation. By projecting the frame onto the ground plane with the rough pose, lane markings can be relocated more accurately, and the pose can then be re-estimated with the updated lane markings. Our method is robust and maintains subpixel accuracy for long-range lane-marking detection.

3) We propose a surround-view video dataset with ground truth (GT). The dataset is collected from various traffic scenarios with lane lines under different environmental conditions. It contains 40 sets of videos collected by fisheye cameras. All video frames are manually annotated with high-quality lane points and camera extrinsic parameters.

The proposed method requires the following assumptions as prerequisites: 1) the intrinsic parameters and the position of each camera are known; 2) the road surface is flat and straight, and the driving direction is parallel to the lane lines. The angle between the vehicle's heading and the lane lines is assumed to lie between -2 and 2 degrees. In our calibration pipeline, the roll and yaw angles of each camera are assumed to lie between -10 and 10 degrees, and the pitch angle between 20 and 90 degrees. For each camera, the intrinsic parameters (including distortion coefficients) are required, and the error of the camera's x-coordinate should be within -0.2 and +0.2 meters, as should the error of the z-coordinate (the camera height).

An SVS usually consists of four or six fisheye cameras; this paper targets the four-camera system. The proposed method is fully automatic and maintains stable estimation under various environmental conditions in traffic scenarios.

§ RELATED WORKS

§.§ Surround-view datasets

The WoodScape dataset <cit.> is the first fisheye dataset that includes about 10,000 images with instance-level semantic annotation. It was collected from street roads and parking lots. Notably, the images in the dataset are not continuous frame by frame but are sampled periodically.
The entire dataset was collected by the same vehicle, resulting in only one set of extrinsic parameters. The Tongji surround-view dataset <cit.> is a large-scale campus-scene dataset collected by an electric car equipped with four cameras. It is mainly composed of two parts: calibration-site images and natural scenes. The original dataset contains 19,078 groups of fisheye images, again with only one set of extrinsic parameters.

§.§ Lane detection

Lane markings are common elements in traffic scenes. Utilizing geometric constraints, the camera pose can be estimated from a pair of lane markings in a frame. Early lane detection methods were primarily based on hand-crafted features, such as color <cit.><cit.>, edges <cit.><cit.>, and texture <cit.>. Recently, deep-learning methods <cit.><cit.> have significantly improved lane detection performance. VPGNet <cit.> utilizes vanishing points for multi-task network training for lane detection. SCNN <cit.> specifically considers the thin-and-long shape of lanes by passing messages between adjacent rows and columns at a feature layer. SAD <cit.> and inter-region affinity KD <cit.> further adopt knowledge distillation to improve lane detection. PolyLaneNet <cit.> formulates instance-level lane detection as a polynomial regression problem, and UFSA <cit.> provides ultra-fast lane detection by dividing the image into grids and scanning the grids for lane locations. Due to limited computing performance, onboard processors cannot run deep-learning tasks. To achieve more accurate extrinsic calibration, we propose a method that provides subpixel lane detection at medium distances.

§.§ Pattern-based calibration methods

Pattern-based approaches estimate camera parameters using special patterns, including corners, circles, or lines. Since a pattern-based approach uses precisely drawn patterns with known configurations, it can estimate the camera parameters accurately, making it suitable for accurate calibration of a surround-view camera system. Some pattern-based calibration methods place calibration patterns in the overlapping fields of view of the cameras <cit.><cit.>. The methods in <cit.> use factorization-based formulations by placing calibration patterns between adjacent groups of cameras. However, these calibration methods require a checkerboard, making them suitable for the factory but not for private use, as they involve considerable human intervention.
§.§ Self-calibration methods

In <cit.>, Zhao et al. first detected multiple vanishing points of lane markings on the road via the weighted least-squares method. With the estimated vanishing points, the pose of the multi-camera system relative to the world coordinate system was solved. Choi et al. <cit.> also designed a lane-line-based extrinsic self-calibration pipeline for the surround-view case, in which the SVS is calibrated by aligning lane markings across images of adjacent cameras. However, this method relies on highly accurate lane-marking detection and known world coordinates of the cameras. Other self-calibration schemes applicable to the SVS include Liu et al.'s method <cit.> and Zhang et al.'s <cit.><cit.>. They deeply dissected the online extrinsic correction problem and offered effective solutions. Liu et al. <cit.> proposed two models, the "Ground Model" and the "Ground-Camera Model", which correct the extrinsics by minimizing photometric errors with steepest descent <cit.>. Zhang et al. <cit.> designed a novel bi-camera model that constructs least-squares errors <cit.> on the imaging planes of two adjacent cameras and then optimizes the camera poses with the Levenberg-Marquardt (LM) algorithm <cit.>. They further improved this work in <cit.> by utilizing multiple frames selected in a local window, rather than a single frame, to build the overall error, thus improving the system's robustness. However, these three studies <cit.><cit.><cit.> focused on online correction rather than calibration and required a rough initial extrinsic as input, which limits their application.

§ PROPOSED METHOD

Figure 2 shows the flowchart of the proposed method. First, the method detects lane markings in the images captured by the four cameras. After locating the lane markings, it calibrates the extrinsic parameters of the front and rear cameras. It then processes the left and right cameras using the estimated parameters of the front and rear cameras. Further details of each step are introduced in the following sections.

§.§ Lane detection and selection

We utilize the Kannala-Brandt distortion model to undistort the fisheye images and Canny edge detection to extract edges. The proposed method then employs Hough line detection on the undistorted image to detect lane markings. Since the detection results may include falsely detected lane markings, a series of measures is taken to filter out the required ones. First, we use a vanishing point (VP) to reject outlier lines; the VP can be computed as in <cit.>. Lane markings whose perpendicular distance to the vanishing point is less than a specified threshold are preserved; the overall aim is to reject lines far from the vanishing point. Considering that the left and right cameras cannot capture the lane markings below the vehicle, a region of interest (RoI) is set to reject such markings. Since the front and rear cameras require only one pair of lane markings, the pair nearest to the camera is selected for estimation. The left and right cameras each require only one lane marking.

§.§ Front and rear camera calibration

The proposed method uses geometric constraints on the lane lines in the projection plane to iteratively estimate the camera pose. The algorithm consists of three sub-steps that separately estimate the pitch, yaw, and roll angles. Let P' be a point in the undistorted image and P the corresponding point in the surround-view image. P can be transformed by the homography H as follows:

P' = H^-TP = (K_2RK_1^-1)^-TP,

where K_1 is a 3x3 matrix representing the real camera's intrinsic parameters, K_2 is a 3x3 matrix representing the intrinsic parameters of the virtual camera on the surround-view plane, and R is the rotation matrix.

We design three cost functions to estimate the pitch, roll, and yaw angles separately; Figure 3 illustrates their functionality. Our algorithm first optimizes the cost c_φ by updating the rotation matrix iteratively with

R_1' = R_φ R = [ 1 0 0; 0 cosφ sinφ; 0 -sinφ cosφ ]R,

where φ is the pitch angle. The pitch angle is estimated by maximizing the cost

c_φ = ‖ a_1b_1 + a_2b_2 + a_3b_3 + a_4b_4 ‖,

where each term a_ib_i denotes the unitized direction vector a_ib_i/‖a_ib_i‖ (i = 1,2,3,4) of the corresponding lane marking. As shown in Figure 3, the points a_i and b_i (i = 1,2,3,4) are the endpoints of the extracted markings in the surround-view image; a_1, a_2, a_3, a_4 are the intersection points with the line y = 0.
In this sub-step, we approach the unique extremum of c_φ by iteratively updating the rotation matrix with equation (2).

In the second sub-step, the iterative strategy updates the rotation matrix with

R_2' = R_ω R = [ cosω 0 -sinω; 0 1 0; sinω 0 cosω ]R,

where ω is the yaw angle. In most cases, the width of the right lane line equals that of the left lane line. The yaw angle is therefore estimated by minimizing the cost

c_ω = | |a_1a_2| - |a_3a_4| |,

where |a_1a_2| and |a_3a_4| denote the lengths of a_1a_2 and a_3a_4. Similarly, we approach the unique extremum of c_ω by iterating the rotation matrix with equation (4).

In the third sub-step, the iterative strategy is

R_3' = RR_θ = R[ cosθ -sinθ 0; sinθ cosθ 0; 0 0 1 ],

where θ is the roll angle. Since the direction of the vehicle is parallel to the lane markings, the projection of each lane marking is also parallel to the axis of the projection plane. The roll angle is estimated by minimizing the cost

c_θ = ∑_i=1^n | cos⟨a_ib_i, e_1⟩ |,

where e_1 = [1, 0] in the surround-view plane and n = 4. Similarly, we approach the unique extremum of c_θ by iterating the rotation matrix with equation (6).

Intuitively, the first cost function makes all projected lane lines point in the same direction, the second equalizes the widths of the left and right lanes, and the third aligns the lines with the vertical axis of the surround-view image. The complete procedure is summarized in Algorithm 1. In each sub-step, φ, ω, and θ are updated by following the gradient direction of the corresponding cost function. Ignoring line-detection error, the first derivative of c_φ with respect to φ is monotonic, as are the derivatives of c_ω with respect to ω and of c_θ with respect to θ. Each sub-step therefore iteratively approaches the extremum of its cost function, and the camera pose is gradually estimated.

Based on these cost functions, the camera pose can be roughly estimated, and the estimate is accurate when the lane markings are detected precisely. Considering the complexity of real traffic scenarios, however, the line detection must be further refined in a subsequent step.

§.§ Lane marking refinement

The accuracy of lane-marking detection is crucial for calibration. For the left and right cameras, the lane markings are quite close to the vehicle, so the detection method described in Section 3.1 meets the accuracy requirements. For the front and rear cameras, however, adverse environmental conditions such as extreme lighting and fragmentary lane markings make the above detection method yield unsatisfactory results. As shown in Figure 4(b), edge detection is unstable when the lane marking is far from the camera. This problem can be solved in the projection plane: after the rough calibration based on the lane-line cost functions, the preliminary pose estimate is used to generate a projection image via the homography. As illustrated in Figure 4(c), lane markings can be accurately localized in the projection image by line detection, such as Hough line detection. The originally selected lane line is updated by searching for the nearest detected line. After the lane markings are corrected, the front and rear cameras are re-calibrated based on the lane-marking cost functions.
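To make the three sub-steps concrete, the following minimal numpy sketch implements one pass of the estimation under simplifying assumptions: the projected segment endpoints are treated directly as the y = 0 intersections a_i, points are mapped with K_2RK_1^-1 rather than the line homography H^-T, and a shrinking-step hill climb stands in for the gradient-sign update of Algorithm 1. All names, intrinsics, and endpoint values are illustrative, not the paper's implementation.

```python
import numpy as np

def rot(axis, ang):
    """Elementary rotations R_phi (x), R_omega (y), R_theta (z), as in Eqs. (2), (4), (6)."""
    c, s = np.cos(ang), np.sin(ang)
    if axis == 'x':
        return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])
    if axis == 'y':
        return np.array([[c, 0, -s], [0, 1, 0], [s, 0, c]])
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def project(pts, K1, K2, R):
    """Map undistorted-image points to the surround-view plane (simplified point mapping)."""
    H = K2 @ R @ np.linalg.inv(K1)
    q = (H @ np.c_[pts, np.ones(len(pts))].T).T
    return q[:, :2] / q[:, 2:3]

def costs(ab, K1, K2, R):
    """c_phi, c_omega, c_theta for four lane segments with endpoints ab of shape (4, 2, 2)."""
    a = project(ab[:, 0], K1, K2, R)
    b = project(ab[:, 1], K1, K2, R)
    d = b - a
    d = d / np.linalg.norm(d, axis=1, keepdims=True)      # unitized direction vectors
    c_phi = np.linalg.norm(d.sum(axis=0))                 # maximal when all lines are parallel
    c_omega = abs(np.linalg.norm(a[0] - a[1]) - np.linalg.norm(a[2] - a[3]))
    c_theta = np.abs(d @ np.array([1.0, 0.0])).sum()      # zero when lines are vertical
    return c_phi, c_omega, c_theta

def refine_angle(R, update, cost, maximize, step=np.deg2rad(0.5), iters=100):
    """Hill-climb one Euler angle, shrinking the step when neither direction improves."""
    best = cost(R)
    for _ in range(iters):
        for s in (step, -step):
            Rc = update(R, s)
            c = cost(Rc)
            if (c > best) if maximize else (c < best):
                R, best = Rc, c
                break
        else:
            step *= 0.5
    return R

# Toy usage: intrinsics and segment endpoints are placeholders.
K1 = K2 = np.array([[800.0, 0, 640], [0, 800.0, 360], [0, 0, 1]])
ab = np.array([[[500, 300], [520, 700]], [[560, 300], [580, 700]],
               [[700, 300], [690, 700]], [[760, 300], [750, 700]]], float)
R = np.eye(3)
R = refine_angle(R, lambda R, a: rot('x', a) @ R, lambda R: costs(ab, K1, K2, R)[0], True)   # pitch
R = refine_angle(R, lambda R, a: rot('y', a) @ R, lambda R: costs(ab, K1, K2, R)[1], False)  # yaw
R = refine_angle(R, lambda R, a: R @ rot('z', a), lambda R: costs(ab, K1, K2, R)[2], False)  # roll
```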
§.§ Co-refinement of the front and rear cameras

As shown in Figure 5, after the poses of the front and rear cameras are estimated, the x-coordinates of the two cameras in the world coordinate system can be optimized. As shown in Figure 6, to robustly generate the surround-view images, both camera position error and pose error must be taken into account. The front and rear cameras are therefore refined simultaneously by minimizing the cost

c_fb = c_ω,front + c_ω,rear + ∑_i=1^n | x_fi - x_bi |,

where x_fi and x_bi are the x-coordinates of a_fi and a_bi (i = 1,2,3,4). The cost function c_fb reflects the relative alignment error of the front and rear cameras; accordingly, we optimize only the x-coordinate of the rear camera in the world coordinate system.

§.§ Calibrating the left and right cameras

The calibration of the left and right cameras is similar to that of the front and rear cameras. As shown in Figure 7, the method first estimates the yaw (Figure 7, (a) to (b)) and roll angles (Figure 7, (b) to (c)) of each side camera, using the method described in Section 3.1. The points a_s1, a_s2, b_s1, b_s2 correspond to the endpoints of the extracted markings in the surround-view image, i.e., the intersection points with the lines y = 0 and y = y_0. The yaw angle is estimated by minimizing the cost

c_ω,side = | |a_s1a_s2| - |b_s1b_s2| |,

where |a_s1a_s2| and |b_s1b_s2| denote the lengths of a_s1a_s2 and b_s1b_s2. As before, we approach the unique extremum of c_ω,side by iterating the rotation matrix with equation (4). The roll angle is estimated by maximizing the cost

c_θ,side = cos⟨a_s1b_s1, e_1⟩ + cos⟨a_s2b_s2, e_1⟩,

where a_s1b_s1 and a_s2b_s2 are the unitized direction vectors of the lane markings and e_1 = [1, 0] in the surround-view plane. We approach the unique extremum of c_θ,side by iterating the rotation matrix with equation (6).

The optimization of the pitch angle relies on the calibration results of the front and rear cameras. Once their estimation is complete, the width of the lane marking can be computed directly. The pitch angle is then estimated by minimizing the cost

c_φ,side = | |a_s1a_s2| - d_l |,

where |a_s1a_s2| is the length of a_s1a_s2 and d_l is the lane width calculated in the previous section. We approach the unique extremum of c_φ,side by iterating the rotation matrix with equation (2). However, due to the limited accuracy of the lane width obtained from the front and rear calibration, this pitch estimate is unstable, which is a limitation of calibration based purely on lane markings. A texture-based method is therefore proposed to robustly estimate the pitch of the side cameras.

§.§ Co-refinement of the left and right cameras

In this step, the pipeline estimates pitch and position by minimizing the pixel color error of the RoI in the projection plane. Figure 8 provides an example of how adjacent cameras are processed using the RoI texture. The texture alignment error considers both the color difference and the edge gradient of the pixels. Before the pixel color error is calculated, the projected images of adjacent cameras are normalized over the three color channels, which reduces the impact of different lighting conditions across cameras.
Assume the RoI projection images of two adjacent cameras are I_1 and I_2. The normalization from I_2 to I_1 is

I_2' = (σ_1/σ_2)(I_2 - μ_2) + μ_1,

where σ_1, μ_1 are the standard deviation and mean of I_1, and σ_2, μ_2 are the standard deviation and mean of I_2. The pitch angle and position are then estimated by minimizing the cost

c_text = (1/hw) ∑_i=1^h ∑_j=1^w G_y(i,j)(I_1(i,j) - I_2'(i,j))^2,

where h and w are the height and width of the RoI image and G_y is the longitudinal edge gradient of each pixel, calculated with the Sobel operator. Because of the characteristics of fisheye lenses, the sharpness of the projection images of two adjacent cameras differs; G_y is therefore computed from the camera whose RoI is closer to its principal point. For our dataset, we select the left and right cameras to calculate G_y. Since the rough estimation of the previous step has already reduced the search range, the pitch angle and position can be found by exhaustive search over this range at acceptable time cost.

§.§ Parameter summarizing in sequence

Sections 3.1-3.6 describe the calibration at a single moment. To apply the calibration to a long image sequence, the camera parameters are re-estimated whenever a sufficient number of lane markings has been collected, yielding multiple parameter sets. A parameter set includes 12 camera angles (three per camera) and can be represented as a 12-dimensional vector. To select the most appropriate parameter set and prevent overfitting to a specific place, we choose the representative parameter set a' (a weighted medoid) of the parameter sets (a_1, a_2, ..., a_N) as

a' = argmin_b∈{a_1, a_2, ..., a_N} ∑_i=1^N k_i d(a_i, b),

where N is the number of parameter sets and d(a_i, b) is the Euclidean distance between two parameter sets. The weight k_i is the confidence

k_i = c_i,text / ∑_j=1^N c_j,text,  i = 1, 2, ..., N,

where c_i,text is the texture cost at the corresponding time stamp.
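The selection rule above amounts to a weighted medoid in the 12-dimensional parameter space. A short numpy sketch (function and variable names are illustrative, not from the paper's code):

```python
import numpy as np

def summarize_parameters(params, texture_costs):
    """Pick the representative 12-D parameter set a' (weighted medoid) from the
    per-timestamp sets a_1..a_N, weighting each distance by the confidence k_i."""
    params = np.asarray(params, float)            # shape (N, 12)
    k = np.asarray(texture_costs, float)
    k = k / k.sum()                               # confidences k_i
    # pairwise Euclidean distances d(a_i, a_j)
    d = np.linalg.norm(params[:, None, :] - params[None, :, :], axis=2)
    return params[np.argmin(d @ k)]               # argmin_b sum_i k_i d(a_i, b)
```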
§ OUR DATASET

Our dataset was collected by four fisheye cameras mounted on cars. It consists of 40 groups of videos; each group includes four video streams (front, rear, left, and right) at a frame rate of 25 fps. The dataset was captured by multiple sets of fisheye cameras with different camera parameters, and the camera systems were installed on several vehicles, so the GT extrinsic parameters vary across the dataset, as illustrated in Figure 9. The frames of each video share the same set of extrinsic parameters and GT, which are estimated using a chessboard. The cameras are 2-megapixel high-definition panoramic waterproof lenses from Shenzhen Zhongke Nanguang Technology Co., Ltd., with a 190-degree FOV and a Sony IMX307 image sensor. The lane lines, as well as the intrinsic and extrinsic parameters of each frame, are stored in a corresponding JSON file. All lane-line annotations are manual. Each lane line is represented by two sets of edge points and is marked with its type: single white solid, single white dotted, single yellow solid, single yellow dotted, double white solid, double yellow solid, double yellow dotted, double white solid-dotted, or white-yellow solid.

§ EXPERIMENTS

In real-world traffic scenarios, when driving in a dotted lane, the four cameras cannot all capture a valid lane line simultaneously. Our dataset accounts for this case: we retain the lane lines that appear within a short period, since the driving direction can be assumed straight over a short period when driving in a straight lane.

We tested the pipeline on our dataset, selecting clips from various environments as input. Partial results are shown in Figure 10; our method performs stably in various traffic scenarios. To verify the effectiveness of the proposed method, we reproduce the schemes of Choi et al. <cit.>, OCPO <cit.>, and WESNet <cit.> for comparison. Each method initializes the position parameters with a 5-centimeter error from the GT along each world axis. Table 1 shows the results: the average camera angle error is the mean of the three Euler-angle errors, and the average position error is the Euclidean distance between the estimate and the ground truth in the world coordinate system. We also provide detailed evaluation results of the calibration methods for each type of scenario. In Table 1, angles are in degrees and lengths in meters, and the best results are highlighted in bold.

As Table 1 shows, our method outperforms all counterparts on all data groups. Choi's method utilizes lane markings and estimates only the camera angles; for the left and right cameras it performs unstably due to camera-height error. In contrast, our method estimates height and pitch together by constraining the width of the projected lane markings and aligning the projected texture. OCPO uses only the photometric error to guide the correction of camera poses; in low-texture environments, however, the photometric error is dominated by noise rather than by inaccurate camera poses, so OCPO can yield extrinsic parameters with low photometric error whose accuracy is nevertheless not guaranteed. WESNet follows a weakly supervised learning framework; in its fine-tuning stage it also trains the network with the photometric error, which leads to errors similar to OCPO's. In conclusion, the experiments demonstrate both the accuracy of our extrinsic parameter estimation and the generalization capability of our method.

In Table 2, we conduct an ablation experiment to show the effectiveness of each proposed component. Lane-marking re-detection improves the extrinsic estimation by enhancing the lane-detection accuracy of each camera, and the texture procedure also contributes to the calibration.

Robustness to intrinsic disturbance. To evaluate the robustness of our method to the accuracy of the intrinsic parameters, we empirically alter the intrinsics of each camera. The disturbance is represented by an intrinsic disturbance factor d added to the focal length. The disturbed intrinsic matrix K_C_i^d of camera C_i can be expressed as

K_C_i^d = [ f_x+d 0 c_x; 0 f_y+d c_y; 0 0 1 ].

We test the compared methods and our scheme under different settings of d and record the results, shown in Figure 11. Our method remains more accurate even as the intrinsic disturbance grows. The reason is that when the intrinsic parameters are disturbed, pixels farther from the principal point suffer larger distortion errors. Our scheme uses the lane-line information as a strong constraint, and lane lines are mostly distributed near the center of the fisheye image, while the RoI lies toward its periphery.
Lane-based calibration is therefore more robust than texture-based calibration. Moreover, in our experience, the focal-length variation caused by natural collisions or bumps is generally less than 5 pixels. We conclude that our method is robust to variations in the intrinsic parameters.

Execution time. Table 3 shows the execution time of the four main modules of the proposed method, measured on an Intel Core i7-10875 CPU using a single 2.30 GHz core. Among the four modules, only the lane-marking detection module must run in real time; the other three modules run only once, after a sufficient number of lane markings has been gathered. As Table 3 shows, the lane-marking detection module requires 29.74 ms to process the four images acquired from the four AVM cameras, i.e., it can process more than 30 frames per second in real time. The other three modules require a total execution time of 315.54 ms; since they run only once after the lane markings are gathered, they do not hinder real-time operation.

§ CONCLUSION

In this paper, a new dataset and a practical method are proposed to calibrate the camera pose using lane markings and textures. To facilitate research on self-calibration for the surround-view system, we collected a new dataset with high-quality lane annotations across all frames. The proposed method offers several advantages. First, it suits the natural driving situation of vehicles, whether moving at low or high speed. Second, it uses the corresponding lane lines in adjacent camera images to relate the cameras, generating a seamless aerial view of the vehicle. The method is evaluated on image sequences captured under various real driving conditions, and the results demonstrate good real-time performance. Although the experiments use a four-camera system, the calibration method is also applicable to systems with six or more cameras. In a six-camera system, there are two cameras at the front and two at the rear, so when the vehicle drives in a lane, the lane lines on both sides can be captured simultaneously. By treating the cameras that capture two lane lines as the "front and rear views" in the calibration process, and the cameras that capture one side lane line as the "left and right views", the method can be used for camera calibration and surround-view stitching.

§ DATA AVAILABILITY

The datasets associated with the current study are available upon reasonable request from the corresponding author.

§ CONFLICT OF INTEREST

The authors certify that there are no actual or potential conflicts of interest in relation to this article. | http://arxiv.org/abs/2312.16499v1 | {
"authors": [
"L Qin",
"C Lin",
"S Huang",
"S Yang",
"Y Zhao"
],
"categories": [
"cs.CV"
],
"primary_category": "cs.CV",
"published": "20231227101206",
"title": "Camera calibration for the surround-view system: a benchmark and dataset"
} |
Noninvasive megapixel fluorescence microscopy through scattering layers by a virtual reflection-matrix

Gil Weinberg, Elad Sunray, and Ori Katz ([email protected]), Institute of Applied Physics, The Hebrew University of Jerusalem, Jerusalem, 9190401, Israel. Gil Weinberg and Elad Sunray contributed equally to this work.

§ ABSTRACT

Optical-resolution fluorescence imaging through and within complex samples presents a significant challenge due to random light scattering, with substantial implications across multiple fields. While significant advancements in coherent imaging through severe multiple scattering have recently been introduced by reflection-matrix processing, approaches that tackle scattering in incoherent fluorescence imaging have been limited to sparse targets, require high-resolution control of the illumination or detection wavefronts, or require a very large number of measurements. Here, we present an approach that allows direct application of well-established reflection-matrix techniques to scattering compensation in incoherent fluorescence imaging. We experimentally demonstrate that a small number of conventional widefield fluorescence-microscope images acquired under unknown random illuminations can effectively construct a fluorescence-based virtual reflection matrix. This matrix, when processed by conventional matrix-based scattering compensation algorithms, allows reconstructing megapixel-scale fluorescence images, without requiring the use of spatial light modulators (SLMs) or computationally intensive processing.

§ INTRODUCTION

Fluorescence imaging is an essential tool in noninvasive biomedical research. Yet its application in complex samples is challenging due to random light scattering, which limits the applicability of optical microscopes. The inherent scattering of light in biological specimens significantly reduces the sharpness and clarity of images, making it extremely difficult to obtain optical-resolution fluorescence images noninvasively <cit.>. Recent advancements to overcome this challenge include speckle-correlation imaging <cit.>, which relies on the inherent isoplanatism of multiple scattering over a limited angular range, commonly referred to as the optical memory effect <cit.>. Notable examples include computational approaches that retrieve the target object from the captured-image autocorrelation, as done in stellar speckle interferometry <cit.>. However, they require capturing a number of speckle grains that is considerably larger than the number of object resolution cells, and they rely on iterative phase retrieval, which lacks guaranteed convergence for complex targets. The reconstruction-convergence challenge can be overcome by deterministic bispectrum reconstruction <cit.>, leveraging the closure-phase principle from radio astronomy, but this still requires a very large number of captured speckle grains and computationally intensive processing.

A promising path for noninvasive imaging through complex media is offered by coherent reflection-matrix-based techniques that rely on phase-sensitive measurements of scattered coherent fields <cit.>. In these works, computational phase correction allows undoing the scattering and reconstructing scattering-free images of complex reflective targets.
While these phase-sensitive methods have proven effective for coherent reflective imaging <cit.>, they are incompatible with fluorescence imaging, since the phase of fluorescence signals from extended objects is undefined. Several attempts have been made to utilize matrix-based methods in the incoherent case, each with distinct challenges and requirements. Some techniques physically correct the excitation path, the detection path, or both, utilizing spatial light modulators (SLMs). These approaches rely on measuring the feedback from different excitation patterns, for example by maximizing the image variance <cit.> or by applying incoherent iterative phase conjugation <cit.>. Correction of the detection path can be obtained by improving the image quality either iteratively <cit.> or by using neural networks to find the optimal correction <cit.>. Matrix decomposition methods, particularly non-negative matrix factorization (NMF), while computationally very demanding, are helpful when the target is spatially very sparse, as they demand a number of acquired frames significantly larger than the number of bright resolution cells <cit.>.

Here, we introduce a novel framework that allows computationally undoing the effects of severe scattering in any conventional widefield fluorescence microscope, without requiring an SLM, target sparsity, or a large number of captured frames. Our approach is based on constructing a matrix analog of the coherent reflection matrix from a few tens of fluorescence camera frames obtained under unknown random illuminations. This virtual 'reflection matrix', obtained through a straightforward cross-correlation computation, lends itself to processing by any of the well-established matrix-based scattering compensation schemes <cit.>, as we experimentally demonstrate on isoplanatic-scattering samples. Specifically, by treating each camera pixel as a random variable and calculating the covariance matrix between camera-pixel intensities over random unknown illuminations, a reflection-like matrix of dimensions N × N is formed, where N > 10^6 is the number of camera pixels. Importantly, we demonstrate that the matrix can be formed even when the number of measurements, M, is smaller by more than four orders of magnitude than the number of camera pixels. This covariance analysis resembles the Green's-function retrieval process in passive correlation imaging investigated in seismology and acoustics <cit.>, and the compressed time-reversal (CTR) reflection-matrix acquisition scheme <cit.>, but here, for the first time, for spatially incoherent signals.

To efficiently process the reflection matrix, which contains over a trillion (N^2 > 10^12) elements, and to address the challenge of amplitude modulation in incoherent scattering, we propose a mathematically equivalent yet memory-efficient algorithm to CTR-CLASS (closed-loop accumulation of single scattering). This algorithm requires storing only O(MN) elements. Additionally, it includes an extra post-processing step that corrects distortions of the modulation transfer function (MTF = |OTF|). We experimentally demonstrate reconstructions of >10^6-pixel widefield fluorescence images of non-sparse targets, correcting more than 10^4 k-space modes, using only 150 acquired camera frames.

§ RESULTS

§.§ Principle

The principle of our method, together with a numerically simulated example (see Methods), is schematically shown in Fig. <ref>.
Our approach for imaging a fluorescent target behind a highly scattering layer is based on a conventional widefield fluorescence microscope that images the sample under several random, unknown illuminations (Fig. <ref>A). In the presented implementation, the random illumination is provided in epi-configuration by passing a continuous-wave (CW) laser (Fig. <ref>A, in blue) through a rotating diffuser. An sCMOS camera captures M images of the scattered fluorescence light through a dichroic mirror and detection filters in a conventional 4f imaging setup. The captured frames are thus identical to those obtained by a dynamic speckle illumination microscope <cit.>.

Following the acquisition process, we form a virtual 'reflection matrix' from the covariance between the different camera pixels (Fig. <ref>B, C). This covariance matrix can be straightforwardly computed from the raw low-contrast camera frames by first forming an N × M 'measurement matrix', A, in which each of the M columns is a single raw camera frame (Fig. <ref>B). As we show below, the covariance matrix, R, of this measurement matrix A, which is simply the cross-correlation of A after subtraction of its row-wise mean ⟨A⟩:

R = (A - ⟨A⟩)(A - ⟨A⟩)^T,

can be interpreted as a virtual 'reflection matrix' for fluorescence. In this virtual reflection matrix, each camera pixel serves both as a detector and as a virtual illumination point source. Each column (or row) of R thus provides the camera image that would be captured if a virtual illumination originated from that single camera pixel (Fig. <ref>D).

As in a conventional reflection matrix, the diagonal of the virtual 'reflection matrix' provides a 'confocal' image (Fig. <ref>E). While this confocal-like image, which is equivalent to the variance analysis of dynamic speckle illumination microscopy <cit.> or speckle-SOFI <cit.>, is sharper and has higher contrast than the raw camera frames, it is still severely distorted and blurred by scattering. High-resolution scattering compensation must be employed to reconstruct the hidden fluorescent target.

Elegantly enough, as we show below, the virtual 'reflection matrix' R lends itself to analysis and processing by any of the well-established scattering-correction techniques developed for the conventional coherent, complex-valued reflection matrix <cit.>. As a demonstration, we apply a modified CLASS algorithm <cit.> (see Fig. <ref>) to effectively correct the effects of isoplanatic scattering in this numerical demonstration. Indeed, the CLASS algorithm provides both the corrected image (Fig. <ref>F), which restores the high-resolution fine features of the target (Fig. <ref>G), and an estimation of the scattering PSF (Fig. <ref>H).

§.§ Covariance-matrix equivalence to a virtual 'reflection matrix'

Here, we establish the mathematical foundation supporting the preceding claims. Assuming isoplanatism, the relationship between each camera frame, I_m(r), under the m = 1...M random speckle illumination, S_m(r), and the fluorophore distribution of the target object, O(r), is given by a convolution with the detection point-spread function (PSF) P_det(r):

I_m(r) = P_det(r) * [O(r) · S_m(r)].
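As a concrete illustration of this acquisition model and of the covariance construction above, the following numpy sketch simulates M random-speckle frames and extracts the virtual-confocal image diag(R) without ever forming R. The exponential speckle statistics, image sizes, and PSF are simplified stand-ins for the experimental conditions:

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
side, M = 64, 150                                  # illustrative image side and frame count
obj = rng.random((side, side))                     # fluorophore distribution O(r)
psf = rng.random((side, side))                     # speckled detection PSF (stand-in)
psf /= psf.sum()

frames = []
for _ in range(M):                                 # I_m = P_det * (O . S_m)
    S = rng.exponential(1.0, obj.shape)            # idealized speckle intensity pattern
    frames.append(fftconvolve(obj * S, psf, mode='same'))

A = np.stack([f.ravel() for f in frames], axis=1)  # N x M measurement matrix
A_hat = A - A.mean(axis=1, keepdims=True)          # row-wise mean subtraction
# diag(R) for R = A_hat @ A_hat.T, computed without forming R:
# the per-pixel variance, i.e., the 'virtual confocal' image
confocal = (A_hat ** 2).sum(axis=1).reshape(side, side)
```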
By arranging the captured camera frames into the columns of a 'measurement matrix' A (Fig. <ref>B), we can write Eq. <ref> in a form similar to <cit.>:

A = P_det O S,

where S is a matrix whose columns contain the random (unknown) speckle illumination patterns at the object plane, O is a diagonal matrix with the object's fluorescence distribution O(r) on its diagonal, and P_det is a convolution matrix representing the (unknown) randomly distorted detection PSF. In the case of imaging through highly scattering layers, the PSF is a random, unknown speckle intensity pattern. The goal of a noninvasive imaging technique is to retrieve O (and potentially also P_det) from A, where P_det, O, and S are all unknown.

Given that S consists of uncorrelated speckle-intensity patterns, the covariance matrix of S can be approximated by the identity matrix I. This covariance matrix is obtained by multiplying S by its transpose after row-wise mean subtraction:

Ŝ Ŝ^T ∝ I,

where Ŝ := S(I - (1/M)·1) is the mean-subtracted speckle illumination matrix (1 is a matrix composed entirely of ones; see Supplementary S1). Notably, using the decomposition of Eq. <ref>, it becomes apparent that subtracting the row mean from the known measurement matrix A, i.e., removing the temporal mean of each camera pixel across the speckle illuminations, is equivalent to subtracting the row mean from the unknown illumination patterns:

Â := A - ⟨A⟩ ≡ A(I - (1/M)·1) = P_det O S(I - (1/M)·1) = P_det O Ŝ.

As a direct result, calculating the covariance matrix of A provides a fluorescence virtual 'reflection matrix' in which both the input and output distortions are given by the distorted detection PSF, and the random speckle illuminations play no role:

R = Â Â^T = P_det O Ŝ Ŝ^T O P_det^T ≈ P_det O_eff P_det^T,

where O_eff := O^2, in a similar fashion to the coherent case of CTR-CLASS <cit.>.

The reason R can be treated as a virtual 'reflection matrix' originates from the analogy with conventional coherent reflection matrices <cit.>: in an isoplanatic coherent optical system, the coherent reflection matrix is given by

R_coh = P_det O_coh P_ill^T,

where P_det and P_ill are complex-valued convolution matrices representing convolution with the coherent detection and illumination PSFs, respectively, and O_coh is a diagonal matrix with the object's reflectivity function on its diagonal. This equivalence in mathematical form allows applying the well-established algorithms designed for coherent matrix imaging to the fluorescence case. Specifically, one can apply the state-of-the-art scattering compensation algorithms of CLASS <cit.> and the distortion matrix <cit.>. We demonstrate the effectiveness of such processing by applying a modified version of the CLASS algorithm to incoherent fluorescence imaging.

§.§ Modified and memory-efficient CLASS

The standard CLASS algorithm corrects the effects of scattering in coherent imaging by applying a phase-only correction in the Fourier domain of the object plane <cit.>. CLASS has proven extremely effective in undoing aberrations and scattering of tissue <cit.>, mouse skull <cit.>, and multicore fibers <cit.>, both in reflection imaging and in second-harmonic generation (SHG) microscopy <cit.>. The correction phase is found from correlations between the columns of the coherent reflection matrix (see Supplementary Section S1). In incoherent imaging, isoplanatic scattering is manifested as multiplication by a complex-valued distorted optical transfer function (OTF), which is the Fourier transform of the incoherent PSF.
Since the OTF is the autocorrelation of the complex pupil function <cit.>, strong amplitude modulations are present within the OTF in the case of random scattering. Thus, applying the phase-only standard CLASS correction still leaves a significant residual 'haze' in the reconstructed image (Fig. <ref>). To mitigate this, we have extended the CLASS algorithm to estimate not only the k-space phase distortions but also the amplitude modulations. This is achieved by utilizing the virtual 'reflection matrix' in the Fourier domain (see Methods). The phase correction is applied as in conventional CLASS, and the amplitude compensation is applied by regularized Fourier reweighting (see Methods and Supplementary Section 1). We term this modification I-CLASS (Incoherent CLASS).

While one could, in principle, apply CLASS to the N × N matrix R, where N is the number of camera pixels, in practice, for megapixel-scale images, the resulting virtual 'reflection matrix' R would require more than 1 TB of memory. To allow CLASS processing of such megapixel-scale images, we have developed an alternative method that performs the CLASS iterations directly on the matrix Â, avoiding the direct computation of R (see Supplementary S1). Since  is of dimensions N × M, where M ≈ 150 ≪ N, this approach reduces the memory requirements from O(N^2) to O(MN), thereby enabling the algorithm to run on over 10^6 pixels from hundreds of images using standard computer hardware.

§.§ Experimental results

We demonstrate the effectiveness of I-CLASS using the experimental setup schematically depicted in Fig. <ref>A. As an initial proof-of-concept experiment, and to highlight the differences between the phase-and-amplitude correction of I-CLASS and the phase-only correction of CLASS, we consider the case of near-perfectly isoplanatic scattering. This is achieved by positioning a diffuser at the back focal plane of the microscope objective. In this configuration, the target object comprises randomly distributed fluorescent beads with an average size of 10 μm (Fig. <ref>). As a starting point, we display the uncorrected 'virtual-confocal' image obtained from the diagonal of R (Fig. <ref>A). Explicitly, the virtual-confocal image is the standard deviation of each camera pixel and is equivalent to dynamic speckle illumination microscopy <cit.> or speckle-SOFI <cit.>. As expected, the image is a very low-contrast diffusive blur without apparent features, which does not resemble the target object (Fig. <ref>D). Applying the conventional (phase-only) CLASS correction yields a reconstruction across the entire field of view that is accompanied by a strong diffusive hazy background (Fig. <ref>B), along with an estimation of the speckled PSF from the phase-only correction (Fig. <ref>E). Applying I-CLASS provides a high-contrast, high-fidelity reconstruction of the target object (Fig. <ref>C), matching the image acquired without the diffuser (Fig. <ref>D), as well as an estimation of the scattering PSF.

To validate the technique in more realistic imaging settings, we tested its performance with various scattering layers placed between the objective lens and the target, as shown in Fig. <ref>A. We studied two types of scattering layers: the same holographic diffuser used in Fig. <ref> and a ∼300-400 μm thick slice of chicken breast tissue.
We tested each scenario using two different microscope objectives: a ×20, 0.6 NA objective and a lower-magnification, lower-NA objective (×4, 0.1 NA), which allows widefield imaging of a larger field of view (FoV). The results of this study are presented in Fig. <ref>: the leftmost column (Fig. <ref>A,D,G,J) displays the uncorrected virtual-confocal images, all showing low-contrast, blurry images. The center column (Fig. <ref>B,E,H,K) presents the I-CLASS-corrected images, displaying high-contrast, high-resolution reconstructions of the fine details of the targets, which are composed of 2 μm and 10 μm diameter fluorescent beads (Fig. <ref>C,F,I,L).

Lastly, we validate the I-CLASS reconstruction on biological specimens, specifically flower pollen grains. In Figure <ref>, we provide images of two flower pollens, Tanacetum coccineum (A-C) and Dimorphotheca ecklonis (D-F), both positioned behind the same 1° holographic diffuser used in the previous experiments (Figs. <ref>, <ref>). These samples were imaged using different objectives: 0.28 NA ×10 and 0.13 NA ×4, respectively. We first present the uncorrected virtual-confocal images (Fig. <ref>A,D), characterized by their low contrast and blurry appearance. The images after I-CLASS correction (Fig. <ref>B,E) reveal the pollen grains' details and distinct separation, as imaged without the scattering layer (Fig. <ref>C,F).

§ DISCUSSION

We have introduced and experimentally demonstrated an approach to computationally correct severe scattering distortions in a conventional widefield fluorescence microscope setup. As our approach is based on constructing a virtual reflection matrix from a small number of measurements, it allows the application of the various scattering-compensation techniques developed for reflection-matrix analysis. Similar to recently introduced techniques <cit.>, our approach significantly reduces the number of required measurements by using random uncorrelated illuminations. In addition, our memory-efficient implementation of CTR-CLASS <cit.> avoids the explicit calculation of the full reflection matrix, reducing the memory requirement by more than four orders of magnitude.

In contrast to recently introduced approaches for scattering compensation in fluorescence imaging, our approach does not require any control of the illumination <cit.> or detection paths <cit.> and, as such, does not require any SLMs. In addition, it does not rely on prior knowledge such as object sparsity and is thus not limited to localization-based image reconstruction or sparse targets <cit.>. Importantly, the reflection-matrix-based scattering compensation algorithm is physically tractable and requires a small number of measurements that does not depend on the sparsity of the imaged target <cit.>. When prior knowledge of object sparsity is available, singular value decomposition (SVD)-based denoising can be applied to the virtual 'reflection matrix', as in the coherent reflection-matrix case <cit.>. Interestingly, the SVD filtering can be applied directly to the smaller measurement matrix Â, since it shares its singular values and left singular vectors with the virtual 'reflection matrix' (Eq. <ref>).
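Such SVD filtering on  can be sketched in a few lines. This is a simplified illustration, not the paper's released code:

```python
import numpy as np

def svd_denoise(A_hat, k):
    """Keep the k strongest singular components of the N x M matrix A_hat.
    A_hat shares its singular values and left singular vectors with R = A_hat @ A_hat.T,
    so truncating A_hat truncates the virtual reflection matrix as well."""
    U, s, Vt = np.linalg.svd(A_hat, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]
```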
We have demonstrated computational scattering correction. Physical correction of the detection and/or excitation paths using SLMs may also be possible, by retrieving the coherent pupil function from the phase-and-amplitude I-CLASS-estimated correction through a phase-retrieval step <cit.>.

The main limitation to applying the I-CLASS correction is the assumed isoplanatism of the scattering function (the optical 'memory effect'). For a perfect correction, the lateral and axial extents of the target object should be smaller than Δx_mem laterally <cit.> and Δz_mem ≈ λ/NA^2 axially, where λ is the wavelength and NA is the effective numerical aperture of the detection path <cit.>. Extending the corrected FoV beyond the memory effect would be highly valuable for many applications. When dealing with lightly scattering media, the corrected field of view can be effectively extended by separating the isoplanatic patches of the object in the image plane, achieved by cropping the image <cit.>. Unfortunately, this method is unsuitable for highly scattering media, where different isoplanatic patches merge in the image plane. Promising proposals have recently been introduced for anisoplanatic corrections using the coherent reflection matrix <cit.>. In the specific context of the I-CLASS incoherent virtual reflection matrix, extending the corrected FoV axially presents a greater challenge, because traditional axial-sectioning methods, such as coherence gating, are not applicable in the incoherent case.

Finally, we note that, similar to dynamic speckle illumination microscopy <cit.> and speckle-based SOFI <cit.>, the lateral resolution of I-CLASS can surpass the diffraction-limited resolution of a widefield microscope by a small factor. This can be understood from the fact that I-CLASS corrects each of the M fluorescence images, but instead of computing the mean of the corrected images, the final image is their temporal standard deviation. This is numerically demonstrated on a sample containing sufficiently small, clear features in Supplementary Fig. S3.

§ MATERIALS AND METHODS

§.§ Numerical Simulation

To generate the numerical results presented in Fig. <ref>, a target fluorescence pattern sourced from an open-access fluorescence microscopy dataset <cit.> was used. The target pattern was multiplied by M = 150 numerically generated random intensity speckle patterns. Each resulting frame was subsequently convolved with a randomly generated speckled PSF. This process created M = 150 simulated images representing the frames captured by a conventional widefield fluorescence microscope under random speckle illuminations.

§.§ Experimental Setup

The details of the experimental setup depicted in Fig. <ref> are as follows. The laser source is a 40 mW, 488 nm continuous-wave (CW) laser (Oxxius LBX-488). The random speckle illuminations are generated by a rotating holographic diffuser (RD, NEWPORT 0.5° + RPC Photonics EDS-1°) placed in the excitation path, ≈20 cm from the microscope objective. In Fig. <ref> and Fig. <ref>A-C,G-I, the microscope objective is an RMS4X, 0.1 NA; in Fig. <ref>D-F,J-L, a 20X EO HR Edmund Optics, 0.6 NA. In Fig. <ref>A-C, the microscope objective is an MY10X-803, 0.28 NA, and in Fig. <ref>D-F, an RMS4X-PF 4X, 0.13 NA. Images are captured by an Andor Neo 5.5 sCMOS camera, integrated into a 4f imaging setup with a 180 mm (results in Fig. <ref> and Fig. <ref>A-C) or 300 mm tube lens (results in Fig. <ref>D-L and Fig. <ref>). The light is filtered with a dichroic mirror (Thorlabs DMLP490R) and an emission filter (Thorlabs MF525-39).
The target is made by placing fluorescent beads (Fluoresbrite YG Microspheres, 10 μm, for Fig. <ref>, and Spherotech Fluorescent Yellow Particles, 2 μm) on a cover glass positioned at the focal plane of the objective lens. In Fig. <ref>, Fig. <ref>A-F, and Fig. <ref>, a holographic diffuser (RPC Photonics EDC1) serves as the scattering layer, while in Fig. <ref>G-I and J-L the scatterers are 300 and 400 μm thick slices of chicken breast, respectively.

§.§ Experimental parameters

The experimental parameters for the results displayed in Figs. <ref>, <ref>, <ref>, including camera exposure time, image pixel count, and the scatterer's distance from the target, are as follows. In Fig. <ref>, images are cropped to 800x800 pixels with a 9-second camera exposure, and the diffuser is positioned at the back focal plane of the objective lens. In Fig. <ref>A-C, the frames are first low-pass filtered from 1800x1800 to 1600x1600 pixels and then cropped to 400x400 pixels; each frame is captured with an exposure time of 7 seconds, with the diffuser 13 mm from the object. Fig. <ref>D-F maintains a 400x400-pixel resolution, but the exposure time is reduced to 200 ms, with the diffuser 7 mm from the object. Fig. <ref>G-I shows images low-pass filtered from 500x500 to 300x300 pixels and then cropped to 100x100 pixels, with a 5-second exposure and a 300 μm chicken-breast slice positioned 5 mm from the object. Fig. <ref>J-L features images cropped to 250x250 pixels with a 500 ms exposure and a 400 μm chicken-breast slice 2.5 mm from the object. In Fig. <ref>A-C, images are cropped to 300x300 pixels with a 4-second exposure per frame, with the diffuser 4 mm from the object. In Fig. <ref>D-F, the displayed images are cropped to 200x200 pixels; each frame is captured with an exposure time of 7 seconds, with the diffuser 5 mm from the object. In the measurements of Fig. <ref>D-F, Fig. <ref>J-L, and Fig. <ref>D-F, each frame was pre-processed with Hann windowing to mitigate the signal from neighboring beads and establish a zero boundary condition.

All experiments used M = 150 illuminations for reconstruction; the number of necessary illuminations is explored in Supplementary S2. The algorithm runtime on a commercially available GPU (Nvidia RTX4090, 24 GB) was approximately ∼200 ms per iteration for 150 camera frames at a resolution of 1400x1400 pixels and around ∼50 ms per iteration for 150 camera frames at a resolution of 700x700 pixels.

§.§ I-CLASS

The full description of the memory-efficient, phase-and-amplitude I-CLASS scattering compensation algorithm is given in Supplementary Section S1. Here, we provide a concise explanation.

§.§.§ Memory-efficient CLASS iterations

The memory-efficient equivalent of the CLASS iterations, computed from the N × M matrix  without explicitly forming the N × N matrix R, is given by the following formula for the t-th iteration (see Supplementary S1):

z_t+1 = ∑_q=1^M ( Ã_t^* ⊙ ((Ã_t^(ud)* * Ã_t^(lr))_:,M-1 ⋆ Ã_t^(ud)) )_:,q,

where à is the 2D Fourier transform of Â. Here, Ã^(ud) is the matrix à with each of its columns flipped upside-down, and Ã^(lr) is à with each of its rows flipped left-to-right. In addition, we denote the element-wise Hadamard product by ⊙, 2D convolution by *, and 2D correlation by ⋆; X^* is the element-wise complex conjugate of the matrix X, and X_:,q is the q-th column of X.
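A direct transcription of this formula into numpy, for a 1D k-space toy in which each column of Ã_t is a 1D Fourier-transformed frame, is sketched below; only O(NM) intermediates are stored. This is an illustrative sketch, not the released GPU implementation:

```python
import numpy as np
from scipy.signal import fftconvolve

def class_step(A_t):
    """One memory-efficient CLASS iteration (the z formula above) on an N x M
    complex matrix whose columns are k-space frames (1D here for clarity)."""
    N, M = A_t.shape
    A_ud = A_t[::-1, :]                                   # columns flipped upside-down
    A_lr = A_t[:, ::-1]                                   # rows flipped left-to-right
    # column M-1 of the full (zero-padded) 2D convolution, length 2N-1
    b = fftconvolve(np.conj(A_ud), A_lr, mode='full')[:, M - 1]
    # (b ⋆ A_ud)_{i,q} = sum_a b[a+i] A_ud[a,q]: a 'valid' correlation along the pixel axis;
    # np.correlate conjugates its second argument, hence the extra conj below
    corr = np.stack([np.correlate(b, np.conj(A_ud[:, q]), mode='valid')
                     for q in range(M)], axis=1)
    z = (np.conj(A_t) * corr).sum(axis=1)
    return (z / np.abs(z))[:, None] * A_t                 # phase-only k-space update (next Eq.)
```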
As in the conventional CLASS algorithm <cit.>, throughout the iterative process the I-CLASS algorithm updates only the k-space (OTF) phase correction:

Ã_t+1 = diag{ z_t / |z_t| } Ã_t,

where the division is element-wise (z_t/|z_t| = e^i∠z_t) and t = 1...T is the iteration number. The phase-corrected image is then reconstructed by inverse Fourier transforming the corrected matrix à back into  and taking the pixel-wise square root of the sum of the columns of |Â|^2, which is equivalent to the scattering-corrected virtual-confocal image (see Supplementary S1). Mathematically, diag{R} = ∑_q=1^M (Â ⊙ Â^*)_:,q, and the CLASS-corrected object O_CLASS is given by taking an element-wise square root, as seen from Eq. <ref>.

§.§.§ I-CLASS k-space amplitude correction

The k-space amplitude correction in the I-CLASS algorithm consists of two primary steps. (1) Estimating the MTF, up to a scaling factor, using

MTF ≡ √(diag{R̃}) = √( ∑_q=1^M (Ã ⊙ Ã^*)_:,q )

(see Supplementary S1). (2) The estimated MTF is then utilized in a regularized Fourier reweighting of the final-iteration CLASS-corrected object O_CLASS:

Õ_I-CLASS,i ≡ Õ_CLASS,i / ( MTF_i / max_q MTF_q + σ ),

where σ is the regularization parameter and Õ_CLASS is the Fourier transform of O_CLASS.

§.§ Funding

This project was supported by the H2020 European Research Council (101002406).

§.§ Competing interests

The authors declare no competing interests.

§.§ Code and data availability

The code and sample datasets for the I-CLASS algorithm and measurements are available at https://github.com/Imaging-Lab-HUJI/Fluorescence-Computational-Imaging-Through-Scattering-Layers. Additional data related to this paper may be requested from the authors.

Supplementary Materials

§ MODIFIED CLASS ALGORITHM (I-CLASS)

§.§ Forward Model

In the context of incoherent imaging of fluorescent targets through a disordered medium, where the object is located within the optical memory-effect regime <cit.>, the image intensity can be described as a convolution of the intensity impulse response (PSF) with the ideal image intensity <cit.>. For coherent illumination of a fluorescent target object, we obtain

I(r) = ( |E_in(r) * P^(E)_ill(r)|^2 O(r) ) * P_det(r),

where P^(E)_ill and P_det are the coherent illumination amplitude-PSF and the incoherent detection PSF, respectively, and O(r) is the intensity distribution function of the object. Thus, under coherent random illuminations, one can decompose a fluorescence measurement matrix in the r-basis as

A = P_det O |P^(E)_ill S^(E)|^2,

where P^(E)_ill and P_det are convolution matrices representing the system's coherent illumination amplitude-PSF and incoherent detection PSF, respectively, O is a diagonal matrix carrying the elements of O(r) on its diagonal, and the columns of S^(E) represent the random fields in the illumination plane. For simplicity, let us denote S := |P^(E)_ill S^(E)|^2. Since S is composed of independent intensity speckle patterns originating from the same source, the correlation between a pair of pixels a, b and speckle patterns i, j is

⟨I_i,a I_j,b⟩ = Ī_a Ī_b + δ_a,b δ_i,j Ī_a^2,

using ⟨I^2⟩ = 2Ī^2 for fully developed speckle, with pixel-dependent values due to the speckle-pattern envelope <cit.>.
This leads to ⟨(SS^T)_i,j⟩ = ∑^M-1_k=0⟨I_k,i I_k,j⟩ = MI̅_iI̅_j + Mδ_i,jI̅_i^2. By using

⟨(SS^T)^2_i,j⟩ = ∑^M-1_k,k'=0⟨I_k,i I_k,j I_k',i I_k',j⟩ = (M^2-M)⟨I_iI_j⟩^2 (terms with k≠k') + M⟨I^2_iI^2_j⟩

and ⟨I^n_i⟩ = n!I̅_i^n, one can show that:

Δ(SS^T)_i,j/(SS^T)_i,j ∼ 1/√(M)

(ΔX denotes the standard deviation of X). Thus, we conclude that to a good approximation SS^T ≈ Mdiag(I̅⊙I̅) + MI⃗̅I⃗̅^T, where I⃗̅ is the vector holding the mean intensity value of each pixel (and ⊙ is the element-wise Hadamard product). Hence, by implementing pixel-wise mean reduction, we define Ŝ ≝ S(𝐈-1/M·1⃗1⃗^T), where 1⃗ represents a column vector composed entirely of ones (∀i∈[N]: 1⃗_i=1). Importantly, this transformation renders Ŝ uncorrelated:

ŜŜ^T = S(𝐈-1/M·1⃗1⃗^T)^2S^T = S(𝐈-1/M·1⃗1⃗^T)S^T = SS^T - 1/M(S1⃗)(S1⃗)^T ≈ Mdiag(I̅⊙I̅) + MI⃗̅I⃗̅^T - MI⃗̅I⃗̅^T = Mdiag(I̅⊙I̅),

where we used S1⃗ ≈ MI⃗̅. This leads to the approximation ŜŜ^T ≈ D, for D ≝ Mdiag(I̅⊙I̅). Notably, from a mathematical perspective, it becomes apparent that by subtracting the temporal mean from the measurement matrix A, we effectively replicate an illumination with uncorrelated patterns Ŝ:

Â ≝ A(𝐈-1/M·1⃗1⃗^T) = P_detOS(𝐈-1/M·1⃗1⃗^T) = P_detOŜ,

and from the calculation of the covariance matrix we overall get:

R = ÂÂ^T = P_detOŜŜ^TOP_det^T ≈ P_detO_effP_det^T

for O_eff ≝ O^2D, an incoherent fluorescent analog of a virtual reflection matrix.

Utilizing the multiplication-convolution relationship in k-space, we can decompose the Fourier transform of the reflection matrix into:

R̃ ≡ ℱRℱ^† ≈ ℱP_detO_effP_det^Tℱ^† = ℱP_detℱ^†ℱO_effℱ^†(ℱP_detℱ^†)^† = P̃_detÕ_effP̃^†_det

(where ℱ represents the unitary DFT matrix). In this Fourier-domain representation, P̃_det, the Fourier transform of P_det, takes on a diagonal form, reflecting the fact that convolution operations in real space become simple multiplications in Fourier space. In the case of a unitary phase-only distortion (the model in the coherent case), these diagonal matrices contain the random phases e^iϕ_det(k⃗) and e^-iϕ_det(k⃗), respectively, which the CLASS algorithm aims to correct. On the other hand, the matrix Õ_eff, the Fourier transform of O_eff, takes the form of a Toeplitz convolution matrix containing the Fourier components of the object.

§.§ CLASS algorithm

The CLASS algorithm <cit.> aims to correct the distortion P̃_det and retrieve the ideal image intensity Õ_eff. Mathematically, this requires finding the inverse of P̃_det which, being modeled as a unitary transformation with only phase aberrations, is equivalent to finding P̃_det^†. The standard CLASS algorithm, handling a full system reflection matrix of the form R = P̃_detÕP̃_ill, addresses a single distortion during each iteration, alternately handling P̃_det and P̃_ill. In contrast, CTR-CLASS <cit.>, where P̃_ill = P̃_det, exclusively tackles the "right" matrix P̃_det in each iteration and fixes the matrix from both sides.

Given that we model the matrix R̃ as the product of a Toeplitz matrix Õ_eff and a diagonal matrix P̃_det, it follows that if we displace each column proportionally to its index, the resultant matrix should exhibit columns of equal values containing the spectrum Õ_eff(k⃗), albeit with a distinct global phase for each column. Consequently, if we compute the average of these columns, we obtain a reasonably accurate estimation of Õ_eff(k⃗), given that the phases average out. Now that we possess an approximation of Õ_eff(k⃗), we can determine the overall phase of each column.
This can be achieved by calculating the correlation angle between the j-th column and the estimated Õ̂_eff(k⃗):

ϕ̂_det(k_j) = arg{⟨Õ̂_eff^*(k⃗)Õ_eff(k⃗)e^iϕ_det(k_j)⟩_k}.

By incorporating the terms e^-iϕ̂_det(k⃗) into a diagonal matrix P̂̃_det, we can fix R̃ via R̃_n = P̂̃_det^†R̃_n-1P̂̃_det. This iterative approach consistently converges to the correct phase correction. We now write the iteration in matrix notation by defining:

z⃗ ≝ 1⃗^TR̃^†_n_sR̃_n_s,

so that the phase-correction phasor is e^iϕ̂_det(k_j) = z⃗_j/|z⃗_j|. Here, R̃_n_s represents the shifted version of R̃_n^(ud), which can be expressed as:

∀i∈[2N-1], ∀j∈[N]: (R̃_n_s)_i,j = ∑^N-1_a=0∑^N-1_b=0 S_i,j,a,b (R̃_n^(ud))_a,b, using S_i,j,a,b = δ_j,bδ_i,a+b.

Additionally, we use (R̃_n^(ud))_i,j = (R̃_n)_N-1-i,j, i.e., flipping each column of R̃_n, a measure taken for the sake of index convenience. (By denoting R^(ud)_i,j = R_N-1-i,j we get R̃^(ud) = Ã^(ud)Ã^†.)

§.§ Memory Complexity

Given that we capture only M images from distinct random illuminations, each containing N pixels, where M ≪ N, it is advantageous to avoid explicitly constructing the matrix R as ÂÂ^†, an N×N matrix, due to its high memory demand. All the necessary information is already encapsulated within Â, which is of size N×M. As a result, we derive an equivalent mathematical expression for a CLASS iteration using  without directly computing R. We note that since the DFT matrix ℱ is unitary, we can apply a two-dimensional Fourier transform to the measured images before including them in Â, thus obtaining R̃ = ℱRℱ^† = ℱÂÂ^†ℱ^† = (ℱÂℱ^†)(ℱÂℱ^†)^†. Therefore, we can concentrate on à = ℱÂℱ^†.

First, we find the elements of z⃗ = 1⃗^TR̃^†_n_sR̃_n_s (Eq. <ref>):

z_j = ∑^N-1_i=0(R̃^†_sR̃_s)_i,j = ∑^N-1_i=0∑^2N-2_m=0∑^N-1_k,l,a,b=0 S_m,j,a,b R̃^(ud)_a,b S_m,i,k,l R̃^(ud)^*_k,l
= ∑^N-1_i,k,l,a,b=0 R̃^(ud)^*_k,l R̃^(ud)_a,b δ_j,b δ_i,l ∑^2N-2_m=0 δ_m,a+b δ_m,k+l
= ∑^N-1_i,k,a=0 R̃^(ud)_a,j R̃^(ud)^*_k,i δ_a+j,k+i
∝ ∑^N-1_i,k,a=0 ∑^M-1_n,m=0 Ã^(ud)_a,n Ã^*_j,n Ã^(ud)^*_k,m Ã_i,m δ_a+j,k+i
= ∑^N-1_k,a=0 ∑^M-1_n=0 Ã^(ud)_a,n Ã^*_j,n ∑^M-1_m=0 Ã^(ud)^*_k,m Ã_a+j-k,m
= ∑^N-1_a=0 ⟨Ã⃗_j,:, Ã⃗^(ud)_a,:⟩ ∑^min(N-1,a+j)_k=max(0,a+j-(N-1)) ⟨Ã⃗^(ud)_k,:, Ã⃗_a+j-k,:⟩,

where we denote A⃗_c,: as the c-th row of A and have used R̃^(ud) = Ã^(ud)Ã^†. By denoting A^(lr)_a,b = A_a,M-1-b and noticing that, using a full-size (zero-padded) convolution,

(A^(ud)^* * A^(lr))_a+j,M-1 = ∑^min(N-1,a+j)_k=max(0,a+j-(N-1)) ⟨A⃗^(ud)_k,:, A⃗_a+j-k,:⟩,

and plugging Eq. <ref> into Eq. <ref>, we get:

z⃗ = ∑^N-1_a=0 (Ã^*Ã⃗^(ud)_a,:) ⊙ (Ã^(ud)^* * Ã^(lr))_a:a+N,M-1,

where ⊙ denotes the Hadamard product (element-wise multiplication) of two vectors, and the phase-correction phasor for each iteration is obtained as ϕ̂⃗̂ = z⃗/|z⃗| (element-wise division).

We can also remove the sum over a. Let us start by denoting b⃗ ≝ (Ã^(ud)^* * Ã^(lr))_:,M-1. Writing the elements of z⃗ again:

z_i = (∑^N-1_a=0 (Ã^*Ã⃗^(ud)_a,:) ⊙ b⃗_a:a+N)_i = ∑^M-1_j=0 Ã^*_i,j ∑^N-1_a=0 Ã^(ud)_a,j b⃗_a+i = ∑^M-1_j=0 Ã^*_i,j (b⃗ ⋆ Ã^(ud))_i,j,

where ⋆ is a 2D cross-correlation.
Finally, the correction can be written simply as:

z⃗_t+1 = (Ã_t^* ⊙ ((Ã_t^(ud)^* * Ã_t^(lr))_:,M-1 ⋆ Ã_t^(ud)))1⃗.

Therefore, after each iteration t = 1...T, we get Ã_t+1 = diag{e^iz⃗_t+1/|z⃗_t+1|}Ã_t, where the exponential and division operations are element-wise. Since building this correction diagonal matrix is memory-inefficient, we instead correct each q-th column of Ã_t with the element-wise multiplication e^iz⃗_t+1/|z⃗_t+1| ⊙ Ã_t_:,q.

After all iterations, to restore the object approximation Ô_eff = diag{R}, we calculate:

Ô_eff_i = R_i,i = (ℱ^†R̃ℱ)_i,i = (ℱ^†ÃÃ^†ℱ)_i,i = ∑_m |(ℱ^†Ãℱ)_i,m|^2.

Overall, the object can be written as Ô⃗_eff = |iFFT2(Ã)|^2 1⃗, i.e., the sum over the columns of |Â|^2. We conclude that the memory-efficient I-CLASS algorithm has memory complexity O(MN) instead of O(N^2), an improvement by a factor of N/M, which in our experiments is ∼10^4.

§.§ Regularized Fourier Reweighting

In the incoherent imaging scenario mentioned in the main text, non-unitary distortion and amplitude modulation cause haze in the phase-corrected image. Thus, estimating the modulation transfer function (MTF), the absolute value of the optical transfer function (OTF), is essential. In the preceding discussion, and as demonstrated in Eq. <ref>, we observed that P̃_det is structured as a diagonal matrix containing on its diagonal P̃_det_i,i = MTF(k_i)e^iϕ_det(k_i), while Õ_eff takes the form of a convolution matrix. This allows us to express the elements of R̃ as follows:

R̃_i,j = ∑_a,b P̃_det_i,a Õ_eff_a,b P̃^†_det_b,j = ∑_a,b P̃_det_i,i δ_i,a Õ⃗_eff_a-b P̃^*_det_j,j δ_b,j.

This formulation leads to the following expression for the diagonal elements:

R̃_i,i = Õ⃗_eff(0) P̃_det_i,i P̃^*_det_i,i ∝ |P̃_det_i,i|^2.

Consequently, this relationship enables estimation of the MTF up to a scaling factor via MTF ≡ √(diag{R̃}). Similarly to the approach outlined in Eq. <ref>, the MTF can be computed directly from Ã, bypassing the explicit calculation of R̃, by summing over the columns of |Ã|^2.

Overall, the k-space amplitude correction takes the estimated MTF as input for a regularized Fourier reweighting, in which each Fourier component of O⃗_CLASS, denoted Õ⃗_CLASS, is divided by a regularized factor:

Õ⃗_I-CLASS_i ≡ Õ⃗_CLASS_i/(MTF_i/max_qMTF_q + σ),

where σ is the regularization parameter. Although one can utilize various deconvolution methods, such as Wiener or Richardson-Lucy, our empirical observations have consistently shown that regularized deconvolution yields superior results. The impact of different regularization parameters is showcased through I-CLASS corrections in Fig. <ref>. Fig. <ref>A features the CLASS correction, equivalent to taking the regularization parameter σ to ∞. Fig. <ref>B displays the result of an excessively high regularization parameter, leading to haze in the image. Fig. <ref>C demonstrates an optimal balance between noise reduction and the preservation of high-frequency details. Lastly, Fig. <ref>D presents the effect of setting the regularization parameter too low, resulting in the inclusion of noisy high-frequency bands.
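To make the recipe above concrete, the following Python/NumPy sketch shows one possible realization of the memory-efficient iteration, the MTF estimate, and the regularized reweighting. It is an illustration under simplifying assumptions (a 1D DFT over the flattened pixel index instead of a per-frame 2D FFT, an assumed sign convention for the phase-correction phasor, and illustrative names such as iclass, n_iter and sigma); it is not the authors' released code, which is available from the repository linked above.

```python
import numpy as np
from scipy.signal import fftconvolve

def iclass(A_hat, n_iter=50, sigma=0.1):
    """A_hat: (N, M) complex array; column q holds the q-th mean-subtracted
    frame (flattened). Returns the I-CLASS object estimate and the MTF."""
    A_t = np.fft.fft(A_hat, axis=0)            # k-space matrix (1D DFT assumed)
    N, M = A_t.shape
    for _ in range(n_iter):
        A_ud = A_t[::-1, :]                    # columns flipped upside-down
        A_lr = A_t[:, ::-1]                    # rows flipped left-to-right
        # middle column of the full 2D convolution conj(A_ud) * A_lr
        b = fftconvolve(np.conj(A_ud), A_lr, mode='full')[:, M - 1]
        # row-wise 'valid' correlation of b (length 2N-1) with each column
        # of A_ud; np.correlate conjugates its second argument, hence conj()
        C = np.stack([np.correlate(b, np.conj(A_ud[:, q]), mode='valid')
                      for q in range(M)], axis=1)
        z = np.sum(np.conj(A_t) * C, axis=1)   # z_{t+1} of the iteration formula
        A_t *= (z / np.abs(z))[:, None]        # phase-only k-space update
    mtf = np.sqrt(np.sum(np.abs(A_t) ** 2, axis=1))  # MTF up to a scale factor
    A_r = np.fft.ifft(A_t, axis=0)                   # back to the r-basis
    O_class = np.sqrt(np.sum(np.abs(A_r) ** 2, axis=1))  # virtual-confocal object
    O_k = np.fft.fft(O_class)
    O_iclass = np.fft.ifft(O_k / (mtf / mtf.max() + sigma))  # regularized reweighting
    return np.abs(O_iclass), mtf
```

As in the text, the loop never forms the N×N matrix R; per iteration it stores only the N×M matrices and a few length-N vectors, which is the source of the O(MN) memory complexity.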
In summary, the I-CLASS algorithm proceeds as follows: it first runs the memory-efficient version of the CTR-CLASS algorithm, then estimates the MTF from the diagonal of the Fourier-transformed reflection matrix, and finally applies deconvolution with the estimated MTF.

§ NUMBER OF ILLUMINATIONS REQUIRED FOR IMAGE RECONSTRUCTION

To assess the effect of the number of random illuminations on reconstruction quality, we performed a series of reconstructions of the measurements presented in Fig. 2 of the main text, using subsets with various numbers of illuminations. The results are given in Supplementary Fig. <ref>, where we display the reconstruction outcomes for subsets of 80 to 150 images. The effectiveness of our reconstruction technique is evaluated by comparing the maximum normalized cross-correlation with the virtual confocal measurement taken without the scattering layer (Fig. <ref>A). Consistent with prior CTR-CLASS reconstructions <cit.>, we observe that when the compression ratio (CR) is above a certain threshold, neither the image contrast nor the spatial resolution of the reconstructed image is significantly diminished by reducing the CR (Fig. <ref>B-C). When the number of illuminations falls below this threshold (Fig. <ref>D), the reconstruction process struggles to identify the correct aberration map, leading to an object image that resembles a more focused image, as previously shown in <cit.>.

§ RESOLUTION IMPROVEMENT OF I-CLASS

As mentioned in the main text, our method's lateral resolution can exceed the diffraction-limited resolution of a conventional widefield fluorescence microscope. This is numerically demonstrated in Supplementary Fig. <ref>. The improvement in resolution is analogous to advancements seen in dynamic speckle-illumination microscopy <cit.> and speckle-based SOFI <cit.>. In Fig. <ref>A, we present an uncorrected virtual-confocal image of a simulated USAF resolution test target. Fig. <ref>B displays the I-CLASS corrected image. A zoomed-in section of element 1, with a vertical cross-section, highlights the improvement in resolution, which cannot be resolved in the widefield reference image shown in Fig. <ref>C. | http://arxiv.org/abs/2312.16065v1 | {
"authors": [
"Gil Weinberg",
"Elad Sunray",
"Ori Katz"
],
"categories": [
"physics.optics",
"physics.app-ph"
],
"primary_category": "physics.optics",
"published": "20231226143407",
"title": "Noninvasive megapixel fluorescence microscopy through scattering layers by a virtual reflection-matrix"
} |
A Fractal-based Complex Belief Entropy for Uncertainty Measure in Complex Evidence Theory

Keming Wu, Fuyuan Xiao, Senior Member, IEEE

Corresponding author: Fuyuan Xiao. K. Wu and F. Xiao are with the School of Big Data and Software Engineering, Chongqing University 401331, China (e-mail: [email protected])

Complex evidence theory, as a generalized D-S evidence theory, has attracted academic attention because it can well express uncertainty by means of complex basic belief assignment (CBBA) and realize uncertainty reasoning through its complex combination rule. However, uncertainty measurement in complex evidence theory is still an open issue. To support better decision-making, a complex pignistic belief transformation (CPBT) method has been proposed to assign the CBBAs of multi-element focal elements to their subsets. The essence of CPBT is the redistribution of the complex mass function by means of the concept of fractals. In this paper, based on fractal theory, experimental simulation and analysis are carried out on the generation process of CPBT in the time dimension. A new fractal-based complex belief (FCB) entropy is then proposed to measure the uncertainty of CBBA. Finally, the properties of FCB entropy are analyzed, and several examples are used to verify its effectiveness.

Complex Evidence Theory, Complex Basic Belief Assignment, Complex Pignistic Belief Transformation, Uncertainty Measurement, Fractal-based Complex Belief Entropy.

§ INTRODUCTION

Fractals are ubiquitous in nature; the concept was first proposed to measure the length of coastlines. Fractal theory has since become a popular and active discipline. Its most basic feature is the study of objective phenomena through the fractal dimension, which provides a new perspective for the analysis of complex information systems <cit.>. In addition, the self-similarity of fractals has promising applications in information analysis.

Because accurate decision-making depends on it, uncertainty measurement is an essential issue in information analysis theory. To address this problem, many relevant theories have been proposed, such as Z-numbers <cit.>, intuitionistic fuzzy sets <cit.>, rough sets <cit.>, random permutation sets <cit.>, Dempster-Shafer evidence theory (DSET), etc. <cit.>. Owing to its advantages in distributing belief over uncertain information, DSET and its branches <cit.> are widely used in many fields, such as decision-making <cit.>, pattern recognition <cit.>, multi-source information fusion <cit.>, classification decision <cit.>, reasoning <cit.>, fault diagnosis <cit.>, etc. <cit.>.

DSET is a generalization of probability theory. It assigns belief over the power set of the frame, using a mass function to represent the belief of each subset, namely the basic belief assignment (BBA). How to understand and measure uncertainty has always been a crucial issue in DSET. A BBA is affected not only by the probability of the event itself, but also by the uncertainty generated in the set allocation.
Therefore, the uncertainty in DSET usually consists of two parts: discord and non-specificity. Discord represents the conflict between different elements in the frame, while non-specificity represents the uncertainty generated during the distribution of the BBA. At present, there are two main families of BBA uncertainty measures. The first, building on Shannon entropy and related ideas, uses entropy <cit.> to measure uncertainty, such as Höhle entropy <cit.>, weighted Hartley entropy <cit.>, Pal et al.'s entropy <cit.>, Jousselme et al.'s entropy <cit.>, the entropy of Jiroušek and Shenoy <cit.>, Pan et al.'s entropy <cit.>, Deng entropy <cit.>, etc. <cit.>. Comparatively, to avoid the differences between DSET and probability theory arising from the frame of discernment, non-entropy methods use the belief and plausibility functions defined directly on the frame, such as the uncertainty measure of Song and Wang <cit.>, the distance-based total uncertainty measures of Yang and Han <cit.>, the total uncertainty measure of interval-valued belief structures <cit.>, etc. <cit.>.

However, the mass function of DSET is defined in the real number field, which leads to limitations when considering the fluctuations of data at a specific phase. DSET cannot handle uncertain information in the complex field well, and therefore the methods mentioned above cannot describe information in the complex field well either. To solve this issue, complex evidence theory (CET), also known as generalized DSET, was proposed by Xiao <cit.>. Different from DSET, the complex basic belief assignment (CBBA) in CET can well express information in the complex field, and the complex evidence combination rule applies well to uncertainty reasoning over complex numbers <cit.>. In particular, CET has shown great potential and advantages for information modeling in the quantum field <cit.>. In short, CET provides a new perspective for uncertain information reasoning, and one of the most noteworthy issues in CET is the uncertainty measure of CBBA.

Whether in DSET or CET, the ultimate goal is to obtain the probabilities of the mutually exclusive single elements in the frame. In classical DSET, several probability transformation methods have been proposed, such as the pignistic probability transformation (PPT) <cit.>, the plausibility transformation method (PTM) <cit.>, and other probability transformation methods <cit.>. In CET, to build a connection between CBBA and probability, a complex pignistic belief transformation (CPBT) <cit.> was proposed to convert complex mass assignments in CET to a probability distribution. However, these methods only give the final results and do not specify a generation process, so they cannot be evaluated comprehensively.

Recently, research on the relationship between fractal theory and evidence theory has attracted academic attention. Inspired by <cit.>, a CPBT generation process based on the fractal idea is proposed here, and on the basis of this process, a fractal-based complex belief (FCB) entropy is proposed to measure the total uncertainty of CBBA. The feasibility of FCB entropy is then verified by theoretical analysis and mathematical proof, and its performance is demonstrated through concrete examples.

The remainder of the paper is structured as follows. In Section <ref>, some basic concepts of DSET and CET are briefly reviewed, followed by a brief introduction to fractal theory and some common entropies.
In Section <ref>, a CPBT generation process based on the fractal idea is first proposed, and FCB entropy is then proposed based on this process. The properties of FCB entropy are analyzed and proved in Section <ref>, and several examples are used to verify the effectiveness of FCB entropy in Section <ref>. Section <ref> presents conclusions and future prospects.

§ PRELIMINARIES

In this section, the basic theories of DSET and CET are briefly reviewed, followed by fractal theory and some common entropies used in evidence theory.

§.§ Dempster-Shafer evidence theory

(Frame of Discernment). Let a set be exhaustive with mutually exclusive elements, denoted Θ={e_1,e_2,⋯,e_n}. Then the set Θ is called a frame of discernment (FoD). The power set of Θ is denoted 2^Θ={∅,{e_1},{e_2},⋯,{e_1,e_2},⋯,Θ}, where ∅ is the empty set.

(Mass Function). Let m be a mass function on the FoD Θ={e_1,e_2,⋯,e_n}, i.e., a mapping m:2^Θ→ℝ satisfying

m(∅)=0, ∑_A_i∈2^Θ m(A_i)=1.

If m(A_i)>0, then A_i is called a focal element.

(Pignistic Probability Transformation). Given a FoD Θ={e_1,e_2,⋯,e_n} composed of n elements and the corresponding BBA m, its pignistic probability transformation (PPT) is defined as:

BetP(e_i)=∑_e_i∈A_i, A_i∈2^Θ m(A_i)/|A_i|,

where |A_i| represents the cardinality of A_i.

§.§ Complex evidence theory

DSET is an important evidential reasoning theory for dealing with uncertain information. It allocates belief not only to single elements but also to sets of multiple elements, and thus contains more information than a traditional probability distribution. Complex evidence theory is a generalization of DSET with the ability to process complex-valued information.

(Complex Mass Function). Let 𝕄 be a complex mass function (CMF) on the FoD Θ={e_1,e_2,⋯,e_n}, i.e., a mapping 𝕄:2^Θ→ℂ satisfying

𝕄(∅)=0, 𝕄(A_i)=𝐦(A_i)e^iθ(A_i), A_i∈2^Θ,

with ∑_A_i∈2^Θ𝕄(A_i)=1, where i=√(-1), 𝐦(A_i)∈[0,1] is the magnitude of the CMF 𝕄(A_i), and θ(A_i)∈[-2π,2π] is the phase of 𝕄(A_i). Through Euler's formula, the CMF can be converted as:

𝕄(A_i)=x+yi, A_i∈2^Θ, |𝕄(A_i)|=𝐦(A_i)=√(x^2+y^2).

If |𝕄(A_i)|>0, A_i is called a focal element of the CMF.

(Commitment Degree). The commitment degree ℂom(A_i) represents the degree of support for A_i, expressed as

ℂom(A_i)=|𝕄(A_i)|/∑_B_i∈2^Θ|𝕄(B_i)|.

(Complex Pignistic Belief Transformation). The complex pignistic belief transformation (CPBT) is used to assign the CBBAs of multi-element focal elements to their subsets, and is defined as <cit.>

CBet(A_i)=∑_A_i⊆B_i∈2^Θ𝕄(B_i)/|B_i|,

where |B_i| represents the cardinality of B_i.

(Interference Effect). The interference effect refers to the residual term generated when computing the modulus of a sum of complex numbers. For B⊆Θ, it is defined as

IE(B)=∑_A_i⊆B, A_j⊆B, A_i≠A_j 2𝐦(A_i)𝐦(A_j)cos(θ(A_i)-θ(A_j)) = |∑_A_i⊆B𝕄(A_i)|^2-∑_A_i⊆B𝐦^2(A_i).

(CBBA Exponential Negation). Given a FoD Θ={e_1,e_2,⋯,e_n} whose power set is 2^Θ={∅,{e_1},{e_2},⋯,{e_1,e_2},⋯,Θ}, let a CBBA 𝕄 be defined on 2^Θ.
Then the CBBA exponential negation is defined by

𝕄̄(A_i)=∑_B_i≠Θ, B_i∈2^Θ e^-𝕄(B_i)·|A_i∩B̄_i| / ∑_A_i≠∅, A_i∈2^Θ ∑_B_i≠Θ, B_i∈2^Θ e^-𝕄(B_i)·|A_i∩B̄_i|, A_i≠∅,

where B̄_i=Θ-B_i and |A_i∩B̄_i| represents the cardinality of the set A_i∩B̄_i.

§.§ Entropy

The concept of entropy originated in thermodynamic systems and gradually extended to the field of information, where it measures the amount of information. Some commonly used entropies are reviewed below.

(Shannon Entropy). Given the probability distribution of all events in a frame, P={p_1,⋯,p_n}, the Shannon entropy is defined as:

E_s(P)=-∑_i=1^n p_i log p_i.

(Weighted Hartley Entropy). The weighted Hartley entropy is extended from Hartley's information formula and is defined as:

E_H(m)=∑_A_i∈2^Θ m(A_i) log|A_i|.

(Pal et al.'s Entropy). Given a mass function m on an n-element FoD Θ={e_1,e_2,⋯,e_n}, Pal et al.'s entropy is defined as:

E_p(m)=-∑_A_i∈2^Θ m(A_i) log(m(A_i)/|A_i|).

(Deng Entropy). Given a mass function m on an n-element FoD Θ={e_1,e_2,⋯,e_n}, Deng entropy is defined as:

E_d(m)=-∑_A_i∈2^Θ m(A_i) log(m(A_i)/(2^|A_i|-1)).

(Zhou et al.'s Entropy). Given a mass function m on an n-element FoD Θ={e_1,e_2,⋯,e_n}, Zhou et al.'s entropy is defined as:

E_z(m)=-∑_A_i∈2^Θ m(A_i) log((m(A_i)/(2^|A_i|-1)) e^(|A_i|-1)/|Θ|).

(Cui et al.'s Entropy). Given a mass function m on an n-element FoD Θ={e_1,e_2,⋯,e_n}, Cui et al.'s entropy <cit.> is defined as:

E_c(m)=-∑_A_i∈2^Θ m(A_i) log((m(A_i)/(2^|A_i|-1)) e^∑_{B_i∈2^Θ, B_i≠A_i}|A_i∩B_i|/(2^|Θ|-1)).

(FB Entropy). Given a mass function m on an n-element FoD Θ={e_1,e_2,⋯,e_n}, the FB entropy is defined as:

E_FB(m)=-∑_A_i∈2^Θ m_F(A_i) log m_F(A_i),

where m_F(A_i) is the mass function after the fractal operation, defined as:

m_F(A_i)=m(A_i)/(2^|A_i|-1)+∑_A_i⊂A_j m(A_j)/(2^|A_j|-1).

§ FRACTAL-BASED COMPLEX BELIEF ENTROPY

In this section, based on the idea of fractals, a generation process of CPBT is first proposed which reveals how the CBBAs of multi-element focal elements are distributed. FCB entropy is then proposed based on this generation process, and the maximum-entropy model of FCB entropy is derived.

§.§ Theoretical analysis of CPBT from a fractal perspective

Compared with traditional probability theory, DSET and CET can express more information because the mass function is assigned over the whole power set of the FoD. In the real world, however, the probability of an event is usually expressed through a Bayesian probability distribution. Therefore, transforming the mass function of evidence theory into a probability is a crucial issue. In DSET, PPT is widely used in practical problems as an important method to convert a BBA into a probability. In <cit.>, a fractal-based PPT generation process was proposed, providing a new perspective for describing the relationship between BBA and probability. In CET, research on the relationship between CBBA and probability has not been widely carried out. CPBT is a common method to convert the CBBAs of multi-element focal elements into singletons, after which ℂom plays the role of probability in decision-making in CET <cit.>. Inspired by <cit.>, this paper proposes a fractal generation process of CPBT to fill this research gap in CET.

For the 2-element FoD X={x_1,x_2}, the generation process of CPBT can be understood as the CBBA of the two-element focal element being gradually allocated to the single elements over n iterations.
As n tends to infinity, the CBBA of the two-element focal element gradually tends to zero, while the CBBAs of the singletons gradually stabilize, and the information contained in them finally tends to a constant. A specific example is given below to illustrate the generation process of CPBT.

Given a FoD X={x_1,x_2}, the CBBA defined on it is:

𝕄(x_1)=0.1414e^iarctan(-1.0000)=0.1-0.1i, 𝕄(x_1,x_2)=0.9055e^iarctan(0.1111)=0.9+0.1i.

The CBBA after n splittings is:

𝕄^n(x_1)=𝕄^n-1(x_1)+(1/p)𝕄^n-1(x_1,x_2),
𝕄^n(x_2)=𝕄^n-1(x_2)+(1/p)𝕄^n-1(x_1,x_2),
𝕄^n(x_1,x_2)=(1-2/p)𝕄^n-1(x_1,x_2),

where p≥3 and 𝕄^n(x_i) denotes the CBBA after n splittings. The parameter p determines the fraction of 𝕄(x_1,x_2) allocated to 𝕄(x_1) and 𝕄(x_2), which can also be understood as the allocation speed of 𝕄(x_1,x_2) per unit time. Different values of p yield different CBBAs after n iterations. For the iterated CBBA, ℂom(A_i) is used to express its support for the element A_i, as with a BBA. The results are shown in Figure <ref>.

The proposed CPBT generation process is not difficult to understand from Example <ref>. For a FoD with n elements and a given CBBA 𝕄, the fractal-based generation process of CPBT evenly distributes the CBBA values of multi-element focal elements to single-element focal elements, in a fixed proportion per unit time, gradually reducing the influence of non-specificity until it finally vanishes. The result is a CBBA with a Bayesian distribution.

An important feature of fractals is self-similarity, and the proposed CPBT generation process is a self-similar iterative process. The fractal diagram of Example 1 is shown in Figure <ref>.

§.§ Fractal-based complex belief entropy

Based on the above CPBT generation process, a new belief entropy called fractal-based complex belief (FCB) entropy is proposed to measure the uncertainty of CBBA. In the above example, different parameter values yield different allocations from the multi-element focal element. In FCB entropy, to better reflect the rationality of the allocation, each focal element is evenly allocated to each of its non-empty subsets. The calculation diagram of FCB entropy is shown in Figure <ref>.

(FCBBA). Given a FoD Θ={e_1,e_2,⋯,e_n} and its corresponding CBBA 𝕄, for any A_k∈2^Θ its CBBA after the fractal operation is defined by

𝕄_F(A_k)=𝕄(A_k)/(2^|A_k|-1)+∑_{A_k⊆B_k, |A_k|<|B_k|}𝕄(B_k)/(2^|B_k|-1).

The new set 2_F^Θ composed of the 𝕄_F(A_k) is called the fractal-based complex basic belief assignment (FCBBA).

(FCB entropy). Given a FoD Θ={e_1,e_2,⋯,e_n} and its corresponding FCBBA 𝕄_F, the FCB entropy is defined by:

𝔼_FCB(𝕄)=-∑_A_k∈2^Θℂom_F(A_k) log(ℂom_F(A_k)),

where ℂom_F(A_k) represents the degree of support for A_k in 2_F^Θ, defined by

ℂom_F(A_k)=|𝕄_F(A_k)|/∑_B_k∈2^Θ|𝕄_F(B_k)|,

where the denominator ∑_B_k∈2^Θ|𝕄_F(B_k)| normalizes ℂom_F(A_k), ensuring that its value lies in [0,1].
Thus ℂom_F(A_k) also satisfies

∑_A_k∈2^Θℂom_F(A_k)=1.

Substituting (<ref>) into (<ref>), (<ref>) can be written in the equivalent form

𝔼_FCB(𝕄)=-∑_A_k∈2^Θ (|𝕄_F(A_k)|/∑_B_k∈2^Θ|𝕄_F(B_k)|) log(|𝕄_F(A_k)|/∑_B_k∈2^Θ|𝕄_F(B_k)|).

The modulus of a focal element in ℂom_F(A_k) is computed as:

|𝕄_F(A_k)|=|𝕄(A_k)/(2^|A_k|-1)+∑_{A_k⊆B_k, |A_k|<|B_k|}𝕄(B_k)/(2^|B_k|-1)| = √(∑_C_k|𝕄(C_k)/(2^|C_k|-1)|^2+∑_{A_k⊆B_k, |A_k|<|B_k|}IE(B_k)),

where C_k runs over A_k and all focal elements B_k⊇A_k contributing to 𝕄_F(A_k), and IE(B_k) is the interference function of CET, which in FCB entropy is defined for B_k⊆Θ as

IE(B_k)=∑_{X_i⊆B_k, X_j⊆B_k, X_i≠X_j}2𝐦(X_i)𝐦(X_j)cos(θ(X_i)-θ(X_j)) = |∑_X_i⊆B_k𝕄(X_i)|^2-∑_X_i⊆B_k𝐦^2(X_i).

𝕄(X_i) and 𝕄(X_j) have the same form as a CBBA in CET, 𝕄(X_i)=u+vi, and θ(X_i) can be calculated as

θ(X_i) = arctan(v/u) for u>0; π/2 for u=0, v>0; -π/2 for u=0, v<0; arctan(v/u)+π for u<0, v≥0; arctan(v/u)-π for u<0, v<0.

After the Euler transformation, 𝕄(X_i) and 𝕄(X_j) in IE(B_k) can be expressed as 𝕄(X_i)=𝐦(X_i)e^iθ(X_i) and 𝕄(X_j)=𝐦(X_j)e^iθ(X_j). In the complex plane, the vector sum of 𝕄(X_i) and 𝕄(X_j) can be expressed as 𝐦(X_i)e^iθ(X_i)+𝐦(X_j)e^iθ(X_j), and from the law of cosines,

|𝐦(X_i)e^iθ(X_i)+𝐦(X_j)e^iθ(X_j)|^2=|𝐦(X_i)e^iθ(X_i)|^2+|𝐦(X_j)e^iθ(X_j)|^2+IE(X),

where IE(X)=2𝐦(X_i)𝐦(X_j)cos(θ(X_i)-θ(X_j)). The analysis shows that when |θ(X_i)-θ(X_j)|∈[0,π/2)∪(3π/2,2π], i.e., cos(θ(X_i)-θ(X_j))>0, we have IE(X)>0, producing a positive interference effect; when |θ(X_i)-θ(X_j)|∈(π/2,3π/2), i.e., cos(θ(X_i)-θ(X_j))<0, we have IE(X)<0, producing a negative interference effect; and when |θ(X_i)-θ(X_j)|=π/2 or 3π/2, i.e., cos(θ(X_i)-θ(X_j))=0, we have IE(X)=0, producing no interference effect. These three cases are shown in the three subgraphs of Figure <ref>.

The existence of IE(B_k) influences ℂom_F(A_k) in three main ways.

Case 1. For |B_k|>|A_k|, when 𝕄(B_k) is evenly distributed to A_k, its complex value tends toward the same direction as the required complex value, resulting in a greater degree of support for A_k than in the normal state and making ℂom_F(A_k) larger.

Case 2. For |B_k|<|A_k|, when 𝕄(B_k) is evenly distributed to A_k, its complex value deviates from the direction of the required complex value of 𝕄(A_k), resulting in less support than in the normal state and making ℂom_F(A_k) smaller.

Case 3. For |B_k|=|A_k|, when 𝕄(B_k) is evenly distributed to A_k, its complex value is consistent with the direction of the required complex value of 𝕄(A_k), resulting in a support level equal to the normal state and leaving ℂom_F(A_k) unaffected.

When a CBBA degenerates to a BBA, FCB entropy degenerates to FB entropy; that is, in DSET, FCB entropy and FB entropy are equivalent. When a CBBA degenerates to a BBA, we obtain

m_F(A_k)=m(A_k)/(2^|A_k|-1)+∑_{A_k⊆B_k, |A_k|<|B_k|}m(B_k)/(2^|B_k|-1).

Because a BBA is defined in the real number field, there is no interference effect, i.e., ∑_{A_k⊆B_k, |A_k|<|B_k|}IE(B_k)=0. Then |m_F(A_k)|=m_F(A_k), since the value range of a BBA is [0,1], and because ∑_B_k∈2^Θ m_F(B_k)=1,

ℂom_F(A_k)=|m_F(A_k)|/∑_B_k∈2^Θ|m_F(B_k)|=m_F(A_k).

Simplifying FCB entropy then yields

𝔼_FCB(m)=E_FB(m)=-∑_A_k∈2^Θℂom_F(A_k) log(ℂom_F(A_k))=-∑_A_k∈2^Θ m_F(A_k) log(m_F(A_k)).

Therefore, FCB entropy can be regarded as a generalization of FB entropy, with a better ability to process complex information.
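A minimal numeric sketch of the FCBBA and FCB-entropy definitions follows, assuming base-2 logarithms and representing complex masses as ordinary Python complex numbers, so that the interference terms arise automatically when the modulus of a summed complex mass is taken. The function names and the toy CBBA are our own illustrative choices.

```python
import math
from itertools import combinations

def fcbba(M):
    """Fractal redistribution: each focal element's (complex) mass is
    spread evenly over its 2^|A|-1 non-empty subsets."""
    MF = {}
    for A, v in M.items():
        share = v / (2 ** len(A) - 1)
        for r in range(1, len(A) + 1):
            for B in combinations(sorted(A), r):
                MF[frozenset(B)] = MF.get(frozenset(B), 0) + share
    return MF

def fcb_entropy(M):
    """FCB entropy: Shannon entropy of the normalized moduli |M_F(A)|.
    Complex addition inside fcbba() produces the interference terms."""
    mods = {A: abs(v) for A, v in fcbba(M).items()}
    s = sum(mods.values())
    return -sum((x / s) * math.log2(x / s) for x in mods.values() if x > 0)

# toy CBBA on {x1, x2}; with purely real masses the same function
# returns FB entropy, in line with the degeneration axiom above
M = {frozenset({'x1'}): 0.1 - 0.1j, frozenset({'x1', 'x2'}): 0.9 + 0.1j}
print(f'FCB entropy: {fcb_entropy(M):.4f} bits')
```

Note that no separate computation of IE(B_k) is needed: summing complex shares before taking abs() is exactly what the law-of-cosines expansion above describes.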
§.§ Maximum fractal-based complex belief entropy

The maximum entropy principle is a criterion for selecting the distribution of a random variable that best conforms to the objective situation, also known as the principle of maximum information. Generally, only one distribution attains the maximum entropy, and choosing this distribution as the distribution of the random variable is an effective criterion for decision analysis. Therefore, deriving the maximum FCB entropy model is important. The maximum FCB entropy is defined as follows.

(Maximum FCB entropy). Given a FoD Θ={e_1,e_2,⋯,e_n} and a CBBA 𝕄 on it, the maximum FCB entropy is

𝔼_FCB^max(𝕄)=log(2^|Θ|-1)=log(2^n-1),

attained when ℂom_F(A_k)=1/(2^|Θ|-1).

First, express FCB entropy as a function

f_FCB(A_1,⋯,A_2^|Θ|-1)=-∑_A_k∈2^Θℂom_F(A_k) log(ℂom_F(A_k)),

subject to (<ref>), i.e., ∑_A_k∈2^Θℂom_F(A_k)=1. The Lagrange function can therefore be constructed as

𝔽_FCB(A_1,A_2,⋯,A_2^|Θ|-1,λ)=f_FCB(A_1,A_2,⋯,A_2^|Θ|-1)+λ(∑_A_k∈2^Θℂom_F(A_k)-1).

Setting the first partial derivatives of 𝔽_FCB to zero gives, for all ℂom_F(A_k),

log(ℂom_F(A_k))=λ-1/ln a = const,

so all the ℂom_F(A_k) are equal. Then, according to (<ref>),

∑_A_k∈2^Θℂom_F(A_k)=(2^|Θ|-1)ℂom_F(A_k)=1,

so FCB entropy reaches its maximum when ℂom_F(A_k)=1/(2^|Θ|-1):

𝔼_FCB^max(𝕄)=-∑_A_k∈2^Θ(1/(2^|Θ|-1)) log(1/(2^|Θ|-1))=log(2^|Θ|-1).

FB entropy is a generalization of Shannon entropy; its maximum entropy model is similar to that of Shannon entropy and can likewise handle decision-making problems in the objective world. As the generalization of FB entropy, the maximum entropy model of FCB entropy retains the physical meaning of FB entropy while offering more advantages.

§ THE PROPERTIES OF THE PROPOSED FCB ENTROPY

For the total uncertainty measurement of BBA, different methods are generally assessed against 10 properties, which are often used for the analysis of uncertainty measures. For the analysis of FCB entropy, these 10 properties have been extended to CET to assess its feasibility and applicability. The properties of some uncertainty measures are summarized in Table <ref>.

[Probabilistic consistency] When all focal elements are singletons, the distribution of the mass function is a Bayesian distribution, and the total uncertainty measure should degenerate to Shannon entropy.

For FCB entropy, ∀|A_i|>1, 𝕄(A_i)=0 and |𝕄_F(A_i)|=|𝕄(A_i)/(2^|A_i|-1)|.

Case 1. When the CBBA degenerates to a BBA, then according to Axiom <ref>, FCB entropy degenerates to FB entropy. The BBA on the FoD Θ={e_1,e_2,⋯,e_n} satisfies ∑_i=1^n m(e_i)=1, and substituting into (<ref>):

𝔼_FCB(m)=-∑_i=1^n m(e_i) log(m(e_i))=E_FB(m)=E_s(m),

which obviously satisfies Property <ref>.

Case 2. When FCB entropy is applied to a CBBA, its probabilistic consistency cannot be judged in the usual form; instead, ℂom(A_i) is used to express the probability of the focal element A_i. The CBBA on the FoD Θ={e_1,e_2,⋯,e_n} satisfies ∑_i=1^n𝕄(e_i)=1 and ∑_i=1^nℂom_F(e_i)=1, and substituting into (<ref>):

𝔼_FCB(𝕄)=-∑_i=1^nℂom_F(e_i) log(ℂom_F(e_i))=-∑_i=1^nℂom(e_i) log(ℂom(e_i)).

In CET, ℂom(e_i) represents the degree of support for the focal element e_i, playing a role similar to probability in probability theory. The simplified FCB entropy has the same form as Shannon entropy and expresses similar information, so the generalized probabilistic consistency of FCB entropy in the complex field is also satisfied.
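Reusing fcb_entropy from the sketch above, the maximum-entropy bound can also be checked numerically: since ℂom_F is a genuine probability distribution over the 2^n-1 non-empty subsets, the entropy of any complex CBBA cannot exceed log_2(2^n-1). The check below is a hypothetical illustration with our own parameter choices.

```python
import cmath, math, random
from itertools import combinations

n = 3
elems = ['e1', 'e2', 'e3']
subsets = [frozenset(s) for r in range(1, n + 1) for s in combinations(elems, r)]
bound = math.log2(2 ** n - 1)   # maximum FCB entropy for |Theta| = 3

for _ in range(1000):
    # random complex masses, normalized so that sum_A M(A) = 1
    w = [random.random() * cmath.exp(1j * random.uniform(-math.pi, math.pi))
         for _ in subsets]
    tot = sum(w)
    M = {A: v / tot for A, v in zip(subsets, w)}
    assert fcb_entropy(M) <= bound + 1e-9
print(f'bound log2(2^n - 1) = {bound:.4f} is never exceeded')
```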
[Set correlation] The number of elements in a focal element also affects information modeling when the complex mass function is allocated. The traditional set-consistency property is controversial, but it provides a new perspective for studying the set-related properties of FCB entropy.

Explanation: Hartley's information formula measures the amount of information in probability theory, while FCB entropy measures the uncertainty in a FoD. For the same dimension, evidence theory can express more information than probability theory: for a FoD with n elements, the framework of evidence theory contains 2^n focal elements, while probability theory contains only n elements. Therefore, it is reasonable that the maximum value of FCB entropy exceeds the maximum value of Shannon entropy. The maxima of several commonly used entropies are summarized in Table <ref>, and their variation with the number of elements is shown in Figure <ref>.

[Range] Given a FoD Θ={e_1,e_2,⋯,e_n}, the range of a total uncertainty measure should be [0,log|Θ|].

When there exists an element in the FoD with 𝕄(e_i)=1, FCB entropy attains its minimum, 𝔼_FCB^min(𝕄)=0. By Definition <ref>, the maximum of FCB entropy is 𝔼_FCB^max(𝕄)=log(2^|Θ|-1)=log(2^n-1). Therefore, the range of FCB entropy is [0,log(2^n-1)], so FCB entropy does not satisfy Property <ref>.

[Monotonicity] Monotonicity means that, in evidence theory, when information is significantly reduced (uncertainty increases), the total uncertainty measure should not decrease.

Specifically, let two CBBAs 𝕄_1 and 𝕄_2 be defined on a FoD Θ={e_1,e_2,⋯,e_n}. If for any focal element P_i⊆Θ

GBel_1(P_i)≥GBel_2(P_i),

then the following relationship must hold:

𝔼_FCB(𝕄_1)≤𝔼_FCB(𝕄_2).

Negation is a good way to analyze problems in evidence theory: considering events from the negative side can resolve problems that cannot be concluded from the positive side. As the number of negation iterations increases, the amount of information in evidence theory decreases and the uncertainty increases. Here, the idea of negation is used to verify the monotonicity of FCB entropy.

Case 1. When the CBBA is defined on the complex plane. Compared with traditional entropy models, FCB entropy measures the uncertainty of CBBA and can handle information in complex form; its monotonicity is verified through negation. In CET, a CBBA exponential negation method was proposed <cit.>, which provides a way to verify the monotonicity of FCB entropy. Consider a FoD Θ={e_1,e_2} and select an initial value that is easy to calculate. After 10 negation iterations, the evolution of the CBBA in CET with the number of negations is shown in Table <ref>, and the corresponding FCB entropy values are shown in Table <ref>. As Table <ref> shows, FCB entropy increases with the number of negations, so FCB entropy applied to CBBA satisfies monotonicity.

Case 2. According to Axiom <ref>, as a generalization of FB entropy, FCB entropy degenerates to FB entropy when the CBBA degenerates to a BBA. Luo proposed a negation method for BBA, in which negation proceeds in the direction of ignorance. Similarly to Case 1, consider a FoD Θ={e_1,e_2} and select an initial value that is easy to calculate. After 10 negation iterations, the evolution of the BBA in DSET with the number of negations is shown in Table <ref>.
The calculated entropy values are shown in Table <ref>, and the trends of FCB entropy and the earlier entropies in evidence theory with the number of iterations are shown in Figure <ref>. According to Figure <ref>, FB entropy <cit.> and FCB entropy increase as the number of iterations increases, while Deng entropy <cit.>, Pal et al.'s entropy <cit.>, Zhou et al.'s entropy <cit.> and Cui et al.'s entropy <cit.> first increase and then decrease. That is, FB entropy and FCB entropy meet the requirement of monotonicity, while Deng entropy, Pal et al.'s entropy, Zhou et al.'s entropy and Cui et al.'s entropy do not. Combining the above two cases verifies that FCB entropy satisfies monotonicity.

[Additivity] Given two independent FoDs Γ and Υ with a CBBA defined on each, let Ψ=Γ×Υ be the joint FoD. Then the total uncertainty measure should satisfy

𝔼_FCB(𝕄_Ψ)=𝔼_FCB(𝕄_Γ)+𝔼_FCB(𝕄_Υ).

For BBA, there are two different definitions of the joint BBA depending on whether m(∅)=0. In this paper, assuming that the CBBA is normalized, the number of focal elements of the joint frame is

|Γ×Υ|=(2^|Γ|-1)(2^|Υ|-1).

According to this definition, the number of complex mass functions of Ψ is less than the size of its power set. For FCB entropy, the even allocation of multi-element focal elements over the power set must then be replaced by allocation over the number of subsets of this frame. The joint CBBA 𝕄^Ψ and joint FCBBA 𝕄_F^Ψ are defined by

𝕄^Ψ(ψ_ij)=𝕄(τ_i)×𝕄(γ_j),
𝕄^Ψ(ψ_ijψ_im)=𝕄(τ_i)×𝕄(γ_jγ_m),
𝕄^Ψ(ψ_ijψ_imψ_njψ_nm)=𝕄(τ_iτ_n)×𝕄(γ_jγ_m),

∀A_i∈2^|Ψ|: 𝕄_F^Ψ(A_i)=𝕄^Ψ(A_i)+∑_{A_i⊆D_i, B_i×C_i=D_i}𝕄^Ψ(D_i)/S(B_i,C_i),

where S(B_i,C_i)=(2^|B_i|-1)(2^|C_i|-1).

Given ψ∈2^|Ψ|, τ∈2^|Γ| and γ∈2^|Υ|, with Ψ=Γ×Υ, Definition <ref> yields

𝕄_F^Ψ(ψ)=𝕄(τ)×𝕄(γ)+∑_{A_i⊆D_i, B_i×C_i=D_i}𝕄(B_i)×𝕄(C_i)/S(B_i,C_i)=𝕄_F^Γ(τ)×𝕄_F^Υ(γ).

Taking the modulus of 𝕄_F^Ψ(ψ) gives

|𝕄_F^Ψ(ψ)|=|𝕄_F^Γ(τ)|×|𝕄_F^Υ(γ)|,

and normalizing the FCBBA moduli gives

ℂom_F(ψ)=|𝕄_F^Γ(τ)|×|𝕄_F^Υ(γ)|/∑_{τ_i∈2^Γ, γ_i∈2^Υ}|𝕄_F^Γ(τ_i)|×|𝕄_F^Υ(γ_i)|.

This proves that the joint CBBA and FCBBA are consistent on the joint frame. In addition, Shannon entropy is additive, and FCB entropy has the same form as Shannon entropy, so it is easy to prove that FCB entropy is additive. An example verifying the additivity of FCB entropy is given below.

Given two independent FoDs X={x_1,x_2} and Y={y_1,y_2} with a CBBA defined on each, let Z=X×Y be the joint FoD, with complex mass functions

𝕄^X: 𝕄^X(x_1)=0.2+0.1i, 𝕄^X(x_2)=0.5+0.1i, 𝕄^X(x_1,x_2)=0.3-0.2i;
𝕄^Y: 𝕄^Y(y_1)=0.3+0.2i, 𝕄^Y(y_2)=0.2+0.1i, 𝕄^Y(y_1,y_2)=0.5-0.3i.

Using the normalized ℂom_F to calculate the FCB entropy, we obtain

𝔼_FCB(𝕄_Z)=2.8317=𝔼_FCB(𝕄_X)+𝔼_FCB(𝕄_Y),

which indicates that FCB entropy satisfies additivity.

[Subadditivity] Given two FoDs Γ and Υ with a CBBA defined on each, let Ψ=Γ×Υ be the joint FoD. Then the total uncertainty measure should satisfy

𝔼_FCB(𝕄_Ψ)≤𝔼_FCB(𝕄_Γ)+𝔼_FCB(𝕄_Υ).

Whether the two FoDs Γ and Υ are independent affects the entropy of the combined FoD Ψ; there are two main cases.

Case 1.
If the CBBAs on Γ and Υ are independent of each other, it follows from Property <ref> that

𝔼_FCB(𝕄_Ψ)=𝔼_FCB(𝕄_Γ)+𝔼_FCB(𝕄_Υ).

Case 2. If the CBBAs on Γ and Υ are not independent, then when the two CBBAs are joined there is some overlapping information, resulting in reduced uncertainty, i.e.,

𝔼_FCB(𝕄_Ψ)<𝔼_FCB(𝕄_Γ)+𝔼_FCB(𝕄_Υ).

Combining the two cases, FCB entropy satisfies subadditivity.

[Time complexity] Let a CBBA 𝕄 be defined on a FoD Θ={e_1,e_2,⋯,e_n} containing n elements. The measurement of FCB entropy can be divided into four steps; the time complexity of each step and of the whole algorithm is summarized in Table <ref>. According to Table <ref>, the computational complexity of FCB entropy is similar to that of FB entropy <cit.> and JS entropy <cit.>, while some other methods have higher complexity. Therefore, the computational complexity of FCB entropy is within an acceptable range.

[Discord and non-specificity] A total uncertainty measure should be divisible into two parts measuring discord and non-specificity.

(FCB entropy's discord). Given a FoD Θ={e_1,e_2,⋯,e_n} and a CBBA 𝕄 defined on it, the discord part of FCB entropy is defined by

𝔼_FCB^𝒟(𝕄)=-∑_i=1^nℂomCBet(e_i) log(ℂomCBet(e_i)),

in which

ℂomCBet(e_i)=|CBet(e_i)|/∑_i=1^n|CBet(e_i)|,

where ∑_i=1^n|CBet(e_i)| normalizes |CBet(e_i)| and ensures that ℂomCBet(e_i) lies in [0,1].

(FCB entropy's non-specificity). Given a FoD Θ={e_1,e_2,⋯,e_n} and a CBBA 𝕄 defined on it, the non-specificity part of FCB entropy is defined by

𝔼_FCB^𝒩(𝕄)=𝔼_FCB(𝕄)-𝔼_FCB^𝒟(𝕄)=∑_i=1^nℂomCBet(e_i) log(ℂomCBet(e_i))-∑_A_i∈2^Θℂom_F(A_i) log(ℂom_F(A_i)).

Among the existing methods, some entropies can be decomposed into discord and non-specificity directly from their formulas, such as Deng entropy and JS entropy. However, similar to FB entropy, the decomposition of FCB entropy into discord and non-specificity is not obtained directly from the formula. FCB entropy originates from the generation process of CPBT: when a CBBA is converted to a probability through CPBT, only the uncertainty caused by discord remains in the frame. Hence the above definitions of discord and non-specificity in FCB entropy. In Definition <ref>, 𝔼_FCB^𝒟(𝕄) represents the uncertainty caused by singletons, excluding the uncertainty caused by set allocation in evidence theory, so it reasonably represents the discord in FCB entropy. The total uncertainty minus the uncertainty caused by discord is the uncertainty caused by non-specificity. Therefore, FCB entropy satisfies Property <ref>.

An example illustrating the relationship between the discord and non-specificity of FCB entropy is given below. Given a FoD Θ={e_1,e_2}, let a CBBA 𝕄 be defined on it with two variable parameters x and y:

𝕄(e_1)=(1-x)-yi, 𝕄(e_1,e_2)=x+yi,

satisfying √(x^2+y^2)∈[0,1] and √((1-x)^2+(-y)^2)∈[0,1]. Then, according to Definition <ref>, the FCBBA is

𝕄_F(e_1)=𝕄(e_1)+𝕄(e_1,e_2)/3=(3-2x)/3-(2y/3)i,
𝕄_F(e_2)=𝕄(e_1,e_2)/3=x/3+(y/3)i,
𝕄_F(e_1,e_2)=𝕄(e_1,e_2)/3=x/3+(y/3)i.

As the parameters x and y vary, the relationship between the overall FCB entropy and its discord and non-specificity parts is shown in Figure <ref>.
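For the parametrized example just given, the decomposition can be evaluated directly. The sketch below, reusing fcb_entropy from the earlier sketch, computes the discord part from the CPBT over singletons and obtains non-specificity as the remainder; the swept values of x and the fixed y are arbitrary illustrative choices.

```python
import math

def discord(M, frame):
    """Discord part: Shannon entropy of the normalized moduli of CBet."""
    cbet = {e: sum(v / len(A) for A, v in M.items() if e in A) for e in frame}
    mods = {e: abs(v) for e, v in cbet.items()}
    s = sum(mods.values())
    return -sum((x / s) * math.log2(x / s) for x in mods.values() if x > 0)

frame = ['e1', 'e2']
y = 0.1
for x in (0.2, 0.5, 0.8):
    M = {frozenset({'e1'}): (1 - x) - y * 1j,
         frozenset({'e1', 'e2'}): x + y * 1j}
    total = fcb_entropy(M)          # total uncertainty, from the earlier sketch
    d = discord(M, frame)
    print(f'x={x}: total={total:.3f} discord={d:.3f} '
          f'non-specificity={total - d:.3f}')
```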
§ EXPERIMENTS

The generation of the FCBBA includes the process of assigning multi-element focal elements to their subsets, reflecting the impact of intersections between different events on the uncertainty measurement. Therefore, FCB entropy can reflect the uncertainty in a CBBA caused by the intersection of different events, as the following examples illustrate.

The task is to find the target element. Suppose there are two different sources of evidence, i.e., two CBBAs defined on the same FoD X={x_1,x_2,x_3,x_4}:

𝕄_1: 𝕄_1({x_1,x_2})=0.4+0.2i, 𝕄_1({x_3,x_4})=0.6-0.2i;
𝕄_2: 𝕄_2({x_1,x_3})=0.4+0.2i, 𝕄_2({x_2,x_3})=0.6-0.2i.

From the two CBBAs, all four elements in the first evidence source may be the target, and the focal elements of the two given complex mass functions are disjoint. In the second evidence source, only three elements are possible targets, and the two given complex mass functions share the common element x_3. Intuitively, therefore, the degree of conflict in 𝕄_1 is higher than in 𝕄_2, i.e., the uncertainty of 𝕄_2 should be less than that of 𝕄_1.

To compare the effect of FCB entropy and the previously proposed entropies on multi-element intersections, the previous entropies are extended to CBBA using Definition <ref>. The calculation results of each entropy are shown in Table <ref>. Table <ref> shows that Pal et al.'s entropy, Deng entropy and Zhou et al.'s entropy do not account for the change in the amount of information caused by the intersection of focal elements, so they assign the same entropy to both evidence sources, resulting in information loss. Only FCB entropy yields an uncertainty for 𝕄_1 greater than for 𝕄_2, consistent with the logical judgment that 𝕄_1 carries greater uncertainty. Therefore, the newly proposed FCB entropy is more effective in reducing information loss.

Another example verifies its effectiveness in measuring uncertain information. Here there are three different CBBAs whose focal elements involve different sets of elements:

𝕄_1: 𝕄_1({x_1,x_2})=0.2+0.1i, 𝕄_1({x_3,x_4})=0.6+0.2i, 𝕄_1({x_5,x_6})=0.2-0.3i;
𝕄_2: 𝕄_2({x_1,x_2})=0.2+0.1i, 𝕄_2({x_2,x_3})=0.6+0.2i, 𝕄_2({x_3,x_6})=0.2-0.3i;
𝕄_3: 𝕄_3({x_1,x_2})=0.2+0.1i, 𝕄_3({x_2,x_3})=0.6+0.2i, 𝕄_3({x_5,x_6})=0.2-0.3i.

Intuitively, the uncertainty of evidence source 1 should be the largest and that of evidence source 2 the smallest. The results of several entropies for the three evidence sources are shown in Figure <ref> and Table <ref>. They show that Pal et al.'s entropy, Deng entropy and Zhou et al.'s entropy cannot reflect the change in the amount of information caused by intersecting focal elements. Cui et al.'s entropy and the proposed FCB entropy can reflect this feature, and the trend of FCB entropy is more pronounced than that of Cui et al.'s entropy, so FCB entropy is more effective and describes uncertain information well.

Given a FoD X={x_1,x_2,⋯,x_10}, a CBBA 𝕄 defined on the FoD is given as:

𝕄(x_1)=α+0.3i, 𝕄(X_i)=1-α-0.3i.

The complex mass assignment in 𝕄 varies with the number of elements in X_i, as shown in Table <ref>.
In addition, the complex mass assignment in 𝕄 varies with α. Next, α is set to 0 and 0.1. For these two values of α, the variation of FCB entropy, Deng entropy <cit.> and Zhou et al.'s entropy <cit.> with i is shown in Figures <ref> and <ref>. The two figures show that when i=1, all the mass is allocated to the single-element focal element x_1; logically, x_1 is then certain and the uncertainty should be 0. However, among the three entropies, only FCB entropy equals 0, consistent with objective cognition. As i increases, FCB entropy behaves essentially like Deng entropy, i.e., it measures uncertainty similarly. As α increases, the mass assigned to x_1 increases and the uncertainty decreases, which all three entropies reflect well. In summary, FCB entropy has advantages in measuring uncertainty.

§ CONCLUSIONS

As a generalization of D-S evidence theory, complex evidence theory has played an important role in many fields, but the measurement of uncertainty remains an open issue. Many studies have proposed different measurement methods, such as belief entropy and divergence. As a new subject, fractal theory has attracted many scholars' interest. To link CBBA with probability, the CPBT method was proposed to convert CBBA into probability; however, CPBT only gives the result of the conversion without a specific process, so the relationship between CBBA and probability cannot be comprehensively understood. Inspired by fractal theory, this paper proposes a fractal generation process for CPBT. Based on this generation process, a new basic belief assignment called FCBBA is defined, and on this basis FCB entropy is proposed to measure the uncertainty of CBBA; its discord and non-specificity parts are also defined. The properties of FCB entropy are analyzed, and several examples verify its effectiveness. In the future, the application of FCB entropy in pattern recognition, evidence fusion and other fields will be analyzed and realized.

§ ACKNOWLEDGMENTS

This research is supported by the National Natural Science Foundation of China (No. 62003280), Chongqing Talents: Exceptional Young Talents Project (No. cstc2022ycjh-bgzxm0070), Natural Science Foundation of Chongqing, China (No. CSTB2022NSCQ-MSX0531), and Chongqing Overseas Scholars Innovation Program (No. cx2022024). | http://arxiv.org/abs/2312.16080v1 | {
"authors": [
"Keming Wu",
"Fuyuan Xiao"
],
"categories": [
"cs.IT",
"math.IT"
],
"primary_category": "cs.IT",
"published": "20231226145857",
"title": "A Fractal-based Complex Belief Entropy for Uncertainty Measure in Complex Evidence Theory"
} |
Ultrafast inertia-free switching of double magnetic tunnel junctions

V. Korenivski

Institute of Magnetism of the NAS of Ukraine and MES of Ukraine, 03142 Kyiv, Ukraine
Nanostructure Physics, Royal Institute of Technology, 10691 Stockholm, Sweden

We investigate the switching of a magnetic nanoparticle comprising the middle free layer of a memory cell based on a double magnetic tunnel junction under the combined effect of spin-polarized current and a weak on-chip magnetic field. We obtain the timing and amplitude parameters for the current and field pulses needed to achieve 100 ps range inertia-free switching under least-power dissipation. The considered method does not rely on the stochastics of thermal agitation of the magnetic nanoparticle that typically accompanies spin-torque switching. The regime of ultimate switching speed and efficiency found in this work is promising for applications in high-performance nonvolatile memory.

Slonczewski and Berger <cit.> proposed a mechanism of excitation and switching in magnetic nanostructures based on spin-transfer torques (STT), which was demonstrated experimentally <cit.> and found wide use in magnetic random access memory (MRAM). STT-based MRAM has become a successful technology, offering fast, non-volatile, radiation- and thermally-hard (operating at up to 400°C) data storage in various industrial applications such as mobile, automotive, military and space <cit.>. More recently, MRAM based on double magnetic tunnel junctions (DMTJ) was developed to enhance STT by spin injection from both the top and bottom magnetic junctions <cit.>. The two fixed reference layers in a DMTJ are oriented antiparallel, such that the combined STT is the sum of the individual-MTJ spin torques. The two junctions are made to have different resistance for readout efficiency (higher magnetoresistance, MR). An alternative design, most promising today, the double spin-torque MTJ (DSMTJ), has one of the oxide junctions replaced with a nonmagnetic metallic spacer, which does not contribute to the total resistance and hence does not dilute the MR. The analysis presented in this paper is directly relevant to both DMTJ- and DSMTJ-based MRAM.

Worledge <cit.> recently developed a single-domain model for switching of a double magnetic tunnel junction (DMTJ) magnetized perpendicular to the plane, which is a very promising implementation of MRAM since it significantly increases the strength of the available STT. The obtained analytical result for the threshold switching current showed a reduction by up to 10-fold compared with single-MTJ based cells.

In this work, we theoretically investigate the important technological aspect of how to achieve ultimate memory writing speed while expending the least amount of energy; more specifically, how to most efficiently combine STT and word/bit-line Oersted fields available on-chip.
The role of a weak Oersted field added to strong STT is to create a minor magnetization non-collinearity in the DMTJ stack, which significantly improves the STT efficiency during switching[We note that a minor magnetization non-collinearity between the fixed and free layers of the DMTJ (DSMTJ) can in practice be achieved by setting the exchange-pinning direction in the fixed layers slightly off the x axis. Such an implementation would not require the assisting Oersted-field pulse, and the writing of the memory cell would be done solely by STT.]. We will show that an optimal combination of STT and Oersted-field pulses can be determined analytically for given device parameters and leads to inertia-free switching in the 200 ps range. We consider a DMTJ-based MRAM cell, where the current through the in-plane magnetized junction, spin-polarized by the magnetically collinear and antiparallel fixed magnetic bottom and top layers, exerts a spin torque on the middle free layer, which can be arbitrarily angled in the plane by a field from the word or bit line of the memory array, as illustrated in Fig. <ref>.

We use the Landau-Lifshitz-Slonczewski equation <cit.>, which takes into account the spin-transfer torque (STT) of the spin-polarized current through the stack. In order to simplify the theoretical consideration while focusing on the main idea of the paper, we neglect the dissipation in the system, since it leads only to an insignificant quantitative modification of the final results for the considered process. We then have

dm/dt = -γ[m×H_eff] - γ4πM_sκ[m×[m×μ]],

where m is the unit magnetization vector of the free layer, which is the functional element of the system, H_eff is the effective magnetic field acting on the magnetic moment of the free layer, γ=2μ_B/ħ is the gyromagnetic ratio, κ=ħVP/(2e·R_⊥·4πM_s^2·S·d) is a normalized coefficient that determines the intensity of the STT, V is the potential difference between the bit and word lines, R_⊥ is the total electrical resistance of the DMTJ, P is the polarization coefficient of the current through the free layer, M_s is the saturation magnetization of the free layer, and μ is the unit magnetization vector of the bottom fixed layer. In turn, S and d are the area and thickness of the free layer, respectively.

As in <cit.>, we assume that the free layer has the shape of a thin ellipse with a small in-plane eccentricity, which determines its shape anisotropy. The strong out-of-plane demagnetization and weak in-plane anisotropy confine the magnetization to lie strongly in-plane and along the long semi-axis of the ellipse at equilibrium.

Formally, this system is similar to the one considered in <cit.>, the difference being the perpendicular orientation of the magnetic moments in the previous work versus the in-plane orientation in our case. However, the angular dependence of the STT coefficient κ is similar in both systems. In the case of a small spin-polarization coefficient, P≪1, the angular dependence of κ appears only in terms proportional to P^3. Hence, with good accuracy, κ can be taken as angle-independent[In practice, the first nonlinear term (P^3) is small for all P≲0.5, which is the common MTJ polarization range in STT-MRAM <cit.>].

It should be noted that a single fixed magnetic layer (bottom or top) is in principle sufficient to control the magnetization direction of the free layer.
STT due to spin-polarized electrons transmitted from the free to the fixed layer favors the antiparallel state of the MTJ, while the reversed current favors the parallel state. Thus, bi-directional switching is possible, although sub-optimal vis-à-vis the DMTJ, as we detail below. The three-layer DMTJ structure illustrated in Fig. <ref> has a significant operational advantage since the magnetization switching process is symmetric due to the mutual compensation of the dipolar stray fields from the bottom and top fixed magnetic layers. In addition, as in the system considered in <cit.>, the two STT contributions add up and enhance the effective torque acting on the magnetic moment of the free layer. In what follows, we therefore use the coefficient P to take into account the STT contributions from both junctions of the DMTJ, and neglect the stray fields from the outer fixed layers. For this system to operate as a memory cell with a resistive readout, it is sufficient that the (magneto-)resistances of the two tunnel junctions are different. The flat shape of the free layer results in strong easy-plane magnetic anisotropy, so we can take |m_z| ≪ 1. Accurate to linear terms in m_z, the unit magnetization vector of the free layer is then

m = M/M_s = (cos(ϕ), sin(ϕ), m_z),   (2)

with that of the bottom fixed layer taken as

μ = (-1, 0, 0).   (3)

The effective field H^i_eff, which takes into account demagnetization and the external magnetic field, is given by

H^x_eff = -4πM_s N_x cos(ϕ) + H_x,
H^y_eff = -4πM_s N_y sin(ϕ) + H_y,
H^z_eff = -4πM_s N_z m_z.   (4)

After substituting (2)-(4), we obtain the Landau-Lifshitz equation in the new variables, which, after certain simplifications, takes the form:

dϕ/dτ = -m_z + κ sin(ϕ),
dm_z/dτ = (N_y - N_x) sin(ϕ) cos(ϕ) - h_y cos(ϕ) + h_x sin(ϕ),   (5)

where τ = t·ω_0, ω_0 = 8πM_s μ_B/ħ, μ_B is the Bohr magneton, h_y = H_y/4πM_s, and h_x = H_x/4πM_s. In (5) it was taken into account that |m_z|, N_y, N_x, |h_y|, |h_x|, κ ≪ 1 and N_z = 1 - N_y - N_x ≈ 1, and terms no higher than linear in the small parameters were retained. This pair of equations describing the behavior of the magnetization of the free layer is equivalent to a single second-order equation for the angular variable:

ϕ̈ + (N_y - N_x) sin(ϕ) cos(ϕ) - κ̇ sin(ϕ) - κ cos(ϕ)ϕ̇ - h_y cos(ϕ) + h_x sin(ϕ) = 0.   (6)

Hereinafter, a dot above a physical quantity denotes its time derivative: ϕ̇ = dϕ/dτ, κ̇ = dκ/dτ, etc. Equation (6) is used below as the basis for developing a theory of controlling the magnetic states of a DMTJ memory cell, with a focus on ultra-high-speed, relaxation-free switching. Let us first consider the case when there is no external field, h_y = h_x = 0, but the magnetic moment of the free layer is subjected to a spin-polarized current. We assume that the current pulse has a rectangular time dependence, κ(τ) = κ_0 Θ(τ + Tω_0/2)·Θ(Tω_0/2 - τ), where Θ(τ) is the Heaviside function and Tω_0/2 is a dimensionless coefficient that includes the pulse duration T. The rectangular shape of the pulse means that the rise time of the current front is much shorter than the characteristic response time of the system to the field and current excitation. We will determine the exact value of the characteristic time later. It is obvious that such a time dependence results in the spin-polarization coefficient changing as κ̇ = κ_0(δ(τ + Tω_0/2) - δ(τ - Tω_0/2)), where δ is the Dirac delta function.
Thus, the solution of equation (6) can be decomposed into three sequential dependencies: ϕ_I in the interval (-∞, -Tω_0/2), ϕ_II in [-Tω_0/2, Tω_0/2], and ϕ_III in (Tω_0/2, +∞). The boundary conditions are determined by integrating equation (6) in the vicinity of the points τ_1 = -Tω_0/2 and τ_2 = Tω_0/2. Therefore, taking into account h = 0, we have:

ϕ̈_I + (N_y - N_x) sin(ϕ_I) cos(ϕ_I) = 0, with τ ∈ (-∞, -Tω_0/2);
ϕ̈_II + (N_y - N_x) sin(ϕ_II) cos(ϕ_II) - κ_0 cos(ϕ_II)ϕ̇_II = 0, with τ ∈ [-Tω_0/2, Tω_0/2];
ϕ̈_III + (N_y - N_x) sin(ϕ_III) cos(ϕ_III) = 0, with τ ∈ (Tω_0/2, +∞).   (7)

The boundary conditions are

ϕ_I = ϕ_II; ϕ̇_II - ϕ̇_I = κ_0 sin(ϕ_I), with τ = τ_1 = -Tω_0/2;
ϕ_II = ϕ_III; ϕ̇_III - ϕ̇_II = -κ_0 sin(ϕ_II), with τ = τ_2 = Tω_0/2.   (8)

If we assume that before the current was switched on the magnetic moment of the free layer was in a state of equilibrium, then the following boundary condition is fulfilled at τ = τ_1 = -Tω_0/2: ϕ_I = ϕ_II = 0, ϕ̇_II = 0. Thus, the problem describing the perturbation of the magnetization of the free layer under the excitation of the spin-polarized current pulse is as follows:

ϕ̈_II + (N_y - N_x) sin(ϕ_II) cos(ϕ_II) - κ_0 cos(ϕ_II)ϕ̇_II = 0,
ϕ_II = 0; ϕ̇_II = 0 for τ = -Tω_0/2.   (9)

Solving this system shows that ϕ_II(τ) ≡ 0. Similarly, ϕ_III(τ) ≡ 0. This means that in a collinear system, when the magnetic moments of the pinned and free layers are parallel, a spin-polarized current does not affect the magnetization unless there are additional factors that violate the collinearity condition. Thermal fluctuations of the magnetization vector can be one of these factors, but in this case the control process is randomized and results in rather significant parameter spreads for writing and reading out information. In order to avoid random processes, we consider a memory cell design where a weak magnetic field pulse h_y is added to the write sequence. This breaks the collinearity of the magnetization of the free and pinned layers. The source of such a field can typically be the flat current bus passing above/below a given memory cell in the Ox/Oy direction (word/bit line of the memory array). Based on Maxwell's equation (∇×H = 4πj/c), it is easy to show that the current flowing in, say, the word line induces a magnetic field H_y, which at a distance much smaller than the line width is equal to (in CGS units) H_y = 2πI_x/(cL), or h_y = I_x/(2cM_s L), where c is the speed of light, L is the word-line width, and I_x is the current flowing through the line in the Ox direction. The word-line magnetic field pulse is taken to be time-aligned with the pulse of the spin-polarized current flowing through the cell, as shown schematically in Fig. <ref>. Similarly to the previous case, we assume that at τ < -Tω_0/2 the system was at its equilibrium, ϕ_I = 0. In turn, in the time interval τ ∈ [-Tω_0/2, Tω_0/2], the magnetic moment of the free layer is affected by the field-current excitation, such that the angle ϕ_II can be found from

ϕ̈_II + (N_y - N_x) sin(ϕ_II) cos(ϕ_II) - κ_0 cos(ϕ_II)ϕ̇_II = h_y cos(ϕ_II).   (10)

The limiting values at the moment τ_1 of switching on the excitation are ϕ_II = 0, ϕ̇_II = 0, but the presence of the right-hand side in (10) excludes the trivial solution, so ϕ_II ≠ 0. Thus, ϕ_II will be changing in time, and after switching off the current and field at τ_2, ϕ will continue to change as described by

ϕ̈_III + (N_y - N_x) sin(ϕ_III) cos(ϕ_III) = 0.   (11)

The variables ϕ_II and ϕ_III at time τ_2 = Tω_0/2 are linked by the following boundary conditions:

ϕ_II = ϕ_III|_τ=τ_2; ϕ̇_III - ϕ̇_II = -κ_0 sin(ϕ_III)|_τ=τ_2.   (12)
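Before turning to the optimal pulse shape, it is useful to check the scale of the word-line current implied by the relation h_y = I_x/(2cM_s L) above. The short Python sketch below inverts that formula in CGS units; the line width L and the target reduced amplitude h_y0 are illustrative assumptions, not device parameters from this work, and M_s is the value used in the pulse-duration estimate later in the paper.

import numpy as np

# Estimate the word-line current needed for a reduced field h_y0,
# inverting h_y = I_x / (2 c M_s L) in CGS units.
c = 3.0e10        # speed of light, cm/s
Ms = 1.7e3        # saturation magnetization, G
L = 1.0e-5        # word-line width, cm (100 nm, assumed)
hy0 = 0.01        # reduced field amplitude, assumed of order N_y - N_x

Ix_statA = hy0 * 2.0 * c * Ms * L       # current in statamperes
Ix_mA = Ix_statA / 3.0e9 * 1.0e3        # 1 A ~ 3e9 statA
Hy_Oe = hy0 * 4.0 * np.pi * Ms          # corresponding absolute field, Oe
print(f"H_y ~ {Hy_Oe:.0f} Oe requires I_x ~ {Ix_mA:.2f} mA")

For these assumed values the required current is at the few-mA level, and since both H_y and I_x scale linearly with h_y0, a free layer with weaker shape anisotropy (smaller N_y - N_x) would need proportionally smaller field and current.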
We will show that with a proper choice of the field excitation configuration h_y(τ) and pulse duration T, it is possible to achieve a regime of high-speed and essentially inertia-free switching of the free layer's magnetization from state ϕ_I = 0 to state ϕ_III ≈ π. To do this, we assume that the configuration of the current pulse I_x and the corresponding time dependence of the magnetic field h_y (Fig. 2) are determined by a function which can be approximated by

h_y = h_y0/cosh(ντ),   (13)

where ν is the coefficient determining the degree of temporal localization of the magnetic field pulse and h_y0 is the magnetic field amplitude. The only requirement imposed on this parameter (ν) is that νTω_0/2 ≫ 1. That is, at the boundary of the time interval [-Tω_0/2, Tω_0/2], the induced magnetic field is negligible. Therefore, the solution of equation (10), which describes the dynamics of the magnetization of the free layer, will be sought in the form

ϕ_II(τ) = 2 arctan(exp(ντ)).   (14)

After substituting (14) into (10), the following identity is obtained:

(-ν^2 + κ_0ν - (N_y - N_x) + h_y0) sinh(ντ)/cosh^2(ντ) = 0.   (15)

Relationship (15) defines the characteristic equation

ν^2 - κ_0ν - h_y0 + (N_y - N_x) = 0,   (16)

from which the coefficients ν are found that allow the non-trivial solution (14) of equation (10). It follows from (16) that ν can have two values:

ν_± = κ_0/2 ± √((κ_0/2)^2 + h_y0 - (N_y - N_x)).   (17)

This result shows that switching can proceed via two regimes: fast, when the time parameter ν = ν_+, and slow, when ν = ν_-. In the special case where (κ_0/2)^2 + h_y0 - (N_y - N_x) = 0, both processes become identical. Analyzing (17) makes it obvious that, for the magnetization reversal process to occur, the roots ν_± must have real values. This means that the contributions from the field and the spin-polarized current must exceed a certain threshold value determined by the shape anisotropy of the free layer:

(κ_0/2)^2 + h_y0 ≥ N_y - N_x.   (18)

At the same time, the magnitude of the magnetic field alone should not be sufficient to switch the free layer, since otherwise the induced magnetic field would lead to switching of all cells on the same word line. Therefore, the following conditions must be fulfilled to guarantee that only one selected cell is switched:

h_y0 < N_y - N_x < h_y0 + (κ_0/2)^2.   (19)

The main role of the magnetic field h_y in this process is to bring the free layer's magnetization out of the collinear state within the memory stack. As mentioned earlier, the process can proceed via the fast or slow mode. The highest speed is achieved when h_y0 → N_y - N_x and ν_+ → κ_0. This value therefore corresponds to the upper limit of the magnetization switching rate, and its magnitude is determined only by the amplitude of the spin-polarized current. In turn, the value τ_0 = 1/ν ≈ 1/κ_0 can be considered the characteristic duration of the system's magnetization reversal process. Therefore, a "rectangular pulse" of the spin-polarized current means in practice that its rise time should be much shorter than τ_0. In order for expression (14) to be considered a solution to (10), (11), describing inertia-free switching of the magnetization of the free layer, it must satisfy the boundary conditions with high accuracy:

ϕ_II = 0, ϕ̇_II = 0, when τ = -Tω_0/2;
ϕ_II = π, ϕ̇_II = 0, when τ = Tω_0/2.   (20)

Here it is taken that ϕ_I = 0, ϕ_III = π. Substituting (14) into the expressions for the boundary conditions (20) and taking into account the condition discussed above (νTω_0/2 ≫ 1), we obtain

ϕ_II(τ_1) = 2 arctan(exp(-νTω_0/2)) → 0,
ϕ_II(τ_2) = 2 arctan(exp(νTω_0/2)) → π,
ϕ̇_II(τ_1) = ϕ̇_II(τ_2) = ν/cosh(νTω_0/2) → 0.   (21)
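As a numerical cross-check of this regime, the Python sketch below integrates the reduced system (5) for a rectangular current pulse combined with the sech-shaped field pulse (13), seeded with the asymptotic values of solution (14) at τ_1. The anisotropy contrast N_y - N_x and the amplitude h_y0 are illustrative values chosen to satisfy the selectivity condition (19); they are assumptions, not parameters quoted in the paper.

import numpy as np
from scipy.integrate import solve_ivp

kappa0 = 0.2               # STT strength, matching the estimate used below
dN = 0.010                 # N_y - N_x (assumed)
hy0 = 0.0099               # h_y0 < dN < h_y0 + (kappa0/2)**2, Eq. (19)
nu = kappa0/2 + np.sqrt((kappa0/2)**2 + hy0 - dN)   # fast root nu_+, Eq. (17)
tau2 = 10.0/nu             # pulse half-duration: nu*T*omega0/2 = 10 >> 1

def rhs(tau, y):
    phi, mz = y
    on = abs(tau) <= tau2
    kap = kappa0 if on else 0.0                 # rectangular current pulse
    hy = hy0/np.cosh(nu*tau) if on else 0.0     # sech field pulse, Eq. (13)
    return [-mz + kap*np.sin(phi),              # Eq. (5)
            dN*np.sin(phi)*np.cos(phi) - hy*np.cos(phi)]

# Seed with the asymptotic values of solution (14) at tau_1 = -tau2.
phi1 = 2.0*np.arctan(np.exp(-nu*tau2))
mz1 = (kappa0 - nu)*np.sin(phi1)   # from dphi/dtau = -m_z + kappa*sin(phi)
t_eval = np.linspace(-tau2, 3*tau2, 2001)
sol = solve_ivp(rhs, [-tau2, 3*tau2], [phi1, mz1], t_eval=t_eval,
                max_step=0.05, rtol=1e-10, atol=1e-12)

phi = sol.y[0]
ringing = np.max(np.abs(phi[t_eval > tau2] - np.pi))
print(f"nu = {nu:.4f} (kappa0 = {kappa0}); residual |phi - pi| after the pulse: {ringing:.2e} rad")

# Physical pulse length for M_s = 1.7e3 G, with gamma ~ 1.76e7 1/(s G):
omega0 = 1.76e7 * 4.0*np.pi*1.7e3
print(f"T = 2*tau2/omega0 ~ {2*tau2/omega0*1e12:.0f} ps")

The printed residual quantifies how nearly inertia-free the switch is: the free layer arrives at ϕ ≈ π with only a tiny remnant precession, and the physical pulse length reproduces the order of the ≈250 ps estimate derived below.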
Thus, function (14), when the above conditions are met, describes with great accuracy the process of high-speed and almost inertia-free switching of the magnetization of the free layer from state ϕ_I = 0 to state ϕ_III = π. In this regime, the post-switching magnetization oscillations are negligible. It should be noted that the write process can be done in parallel on cells along the same word line in order to increase the overall data storage rate. This would be in addition to the speed-up of sequential writing of the same memory bit permitted by the relaxation-free (negligible post-switching oscillations) regime proposed herein. We now estimate the duration of the spin-polarized current pulse, taking into account the parametric limitations discussed above. With νTω_0/2 = 10 ≫ 1 and ν ≈ κ_0 = 0.2 ≪ 1, we obtain T = 2·10/(0.2·γ·4πM_s) ≈ 250 ps for M_s = 1.7·10^3 G. The switching process considered in this work thus significantly speeds up the memory device in terms of single and multiple sequential write operations. In conclusion, we present a method for ultra-fast inertia-free switching of a magnetic memory cell based on a double magnetic tunnel junction. The optimal time sequences and amplitudes of the field and spin-current pulses are determined for operation in the 100 ps regime, which should be attractive for high-performance MRAM. Support from the National Academy of Sciences of Ukraine, the Swedish Research Council (VR 2018-03526), the Olle Engkvist Foundation (project 2020-207-0460), the Wenner-Gren Foundation (grant GFU2022-0011), and the Swedish Strategic Research Council (SSF UKR22-0050) is gratefully acknowledged. | http://arxiv.org/abs/2312.16540v1 | {
"authors": [
"Yu. Dzhezherya",
"P. Polynchuk",
"A. Kravets",
"V. Korenivski"
],
"categories": [
"cond-mat.mes-hall",
"cond-mat.other"
],
"primary_category": "cond-mat.mes-hall",
"published": "20231227114604",
"title": "Ultrafast inertia-free switching of double magnetic tunnel junctions"
} |
| http://arxiv.org/abs/2312.16298v1 | {
"authors": [
"Alfredo Guevara",
"Yangrui Hu"
],
"categories": [
"hep-th",
"quant-ph"
],
"primary_category": "hep-th",
"published": "20231226190702",
"title": "Celestial Quantum Error Correction I: Qubits from Noncommutative Klein Space"
} |
AdapterDistillation: Non-Destructive Task Composition with Knowledge Distillation

Junjie Wang, Yicheng Chen, Wangshu Zhang, Sen Hu, Teng Xu, Jing Zheng [*Equal Contributions]

Leveraging knowledge from multiple tasks by introducing a small number of task-specific parameters into each transformer layer, also known as adapters, has received much attention recently. However, adding an extra fusion layer to implement knowledge composition not only increases the inference time but also is non-scalable for some applications. To avoid these issues, we propose a two-stage knowledge distillation algorithm called AdapterDistillation. In the first stage, we extract task-specific knowledge by using local data to train a student adapter. In the second stage, we distill the knowledge from the existing teacher adapters into the student adapter to help its inference. Extensive experiments on frequently asked question retrieval in task-oriented dialog systems validate the efficiency of AdapterDistillation. We show that AdapterDistillation outperforms existing algorithms in terms of accuracy, resource consumption and inference time. § INTRODUCTION Recently, task-oriented dialogue systems have found extensive applications in diverse business domains <cit.>. Owing to the idiosyncratic features of these domains, custom dialogue systems are often required. Nonetheless, the fundamental functions and architectures underlying these systems typically exhibit noteworthy similarities. Hence, the adoption of a platform-based strategy for accommodating task-oriented dialogue systems across multiple domains has emerged as a promising and effective approach. One popular method is Multi-Task Learning (MTL), which aims to train multiple tasks simultaneously based on a shared representation of all tasks, as shown in Figure <ref>, resulting in relatively good performance on each task <cit.>. However, new tenants often register on the platform in a streaming manner. Therefore, predictions for the existing tenants in MTL would be compromised when new tenants are added to the platform, since retraining often occurs at that time. To ensure that tenants do not interfere with each other, an intuitive approach is to train a task-specific model for each tenant. However, this independent training approach requires a significant amount of resources to store the complete model parameters. Resource consumption clearly becomes the bottleneck as many tenants register on the platform. Additionally, fine-tuning all parameters on a tenant with very little data can often lead to severe overfitting <cit.>. Thus, providing tenants with the ability to solve designated tasks with limited resources is necessary. Owing to the distinctive properties of platform-based systems, tasks implemented on the platform should satisfy the following criteria: 1) The platform witnesses a continuous influx of new tenants, so the model must ensure that the performance of the existing tenants is not degraded when new tenants are added. 2) The resources of the platform are limited, so each tenant's task performance must be ensured with minimal storage and computational resources.
Given this practical limitation, for an incoming tenant, how to utilize the current tenant's data and the existing tenant models (also called teacher models) becomes an interesting topic. In order to maintain the low resource consumption and scalability of the model while utilizing the existing tenant knowledge, we propose an algorithm called AdapterDistillation. In AdapterDistillation, we employ adapters to capture task-specific features by adding a few extra parameters in each transformer layer, and then distill knowledge from the existing teacher adapters into the current student adapter. To be exact, when a new tenant arrives, we first train an adapter module based on its own local data. Then we load the adapter modules of all the current teachers to assist the training of this new student tenant through knowledge distillation. The proposed approach has several advantages: 1) Fusion is only added during the second stage of training to guide the learning of the current student adapter and is not required during inference, ensuring structural consistency between the student adapter and the existing teacher adapters. 2) Since the adapter structure is consistent during prediction and no additional parameters are required, the scalability and low-resource nature of the model itself are retained. To summarize, our contributions are: * We formulate the construction of a platform-based multi-task system as a transfer learning problem and leverage the low-resource adapter model to handle it. * We propose an AdapterDistillation algorithm which guarantees low resource consumption and scalability while utilizing the existing tenant knowledge to improve performance. * We verify noteworthy enhancements of the proposed AdapterDistillation algorithm in terms of accuracy, resource consumption and inference time, using a Frequently Asked Question (FAQ) retrieval service in task-oriented dialog systems. § RELEVANT WORK Recently, adapters have been proposed to capture task-specific features while maintaining results similar to fine-tuning all parameters, and they have been widely applied to downstream tasks such as machine translation and cross-lingual transfer <cit.>. Specifically, adapters insert two bottleneck modules into each transformer layer of the pre-trained model <cit.>. During training, all parameters of the pre-trained model are frozen, and only the parameters of the newly added modules are trained. Some researchers <cit.> have further improved the insertion positions of adapters through structural search, reducing the number of adapter insertions and thus minimizing the increases in parameter count and inference time. A new type of adapter called LoRA <cit.> has been proposed, which first performs a low-rank decomposition of the model parameters and then inserts adapters into the key, query, and value matrices of each attention layer. This approach enhances performance and enables parallel execution of the adapter module, thus reducing inference time. Since the inter-dependencies and key success factors of various adapter methods remain unclear, He et al. <cit.> dissected the design of several classic adapter algorithms and established a unified framework to explore the connections between different adapter methods.
Note that all of the work discussed in this paragraph is complementary to the proposed method, AdapterDistillation, since our algorithm is not limited to a certain type of adapter. Thus, we can combine our developed approach with all of the work discussed in this paragraph to obtain extra gains. In addition to optimizing the structure and position of adapters for each individual task, adding extra components on top of multiple adapters to maximize the transfer of knowledge across multiple tasks is another efficient way to enhance the performance on each task. For example, by adding an extra fusion layer, the AdapterFusion method effectively shares knowledge across multiple tasks while balancing the various target tasks <cit.>. To be specific, this method uses a two-stage training approach: first, train the corresponding adapter for each task; then load all adapters simultaneously, freeze them, and train an additional adapter fusion layer to aggregate the outputs of all adapters, allowing the model to implicitly and automatically learn to utilize knowledge from different tasks. But this approach faces some challenges in practical applications: since the parameter size of the fusion layer is linearly related to the number of loaded adapters, when the number of adapters is too large, a lot of resources will be used for fusion, defeating the purpose of using adapters. Additionally, adding a fusion layer after the adapters leads to longer inference time, resulting in a worse user experience. To efficiently utilize existing task knowledge and meet the platform's requirement for streaming task integration, we propose a plug-and-play AdapterDistillation algorithm. By fusing and distilling the knowledge of existing tasks into the current task during training, we can keep the model structure and inference speed unchanged while achieving results comparable to AdapterFusion. § PROBLEM DEFINITION Training adapters for N tasks in parallel is often not practical, since tenants register on the platform in a streaming manner. Based on this fact, the time for the j-th task registered on the platform is assumed to be earlier than that for the i-th task, that is, t_j < t_i when j < i, for an ordered collection of N tasks denoted as T = {t_1, t_2, ..., t_N}. Throughout the paper, we have the following settings, which are typically true in practice: * The task considered in this paper is non-destructive, which means that when the N-th task is registered on the platform, the performance of the previous (N-1) tasks {t_1, t_2, ..., t_{N-1}} should not be impacted. * Since the platform often has limited resources, it is reasonable to assume every task needs to be solved with limited computing and memory resources. * Due to privacy and security concerns, the corpus of labeled text for the N-th task is only available locally. Based on the above setting, in this paper we aim to maximize the transfer of knowledge from the existing tasks to the current new task without impacting the existing tasks, which matches the practical scenario where each task is registered on the platform in a streaming manner. § ADAPTERDISTILLATION AdapterFusion allows sharing of information between different tasks through an extra fusion layer, at the cost of longer inference time and a larger fusion layer <cit.>. However, as a new task is registered on the platform, the existing tasks will be impacted.
In order to mitigate this, we propose AdapterDistillation to allow sharing of information from the existing tasks to the new one, avoiding any impact on the existing tasks and without increasing inference time. §.§ Adapter Learning and Distillation Algorithm The proposed AdapterDistillation algorithm is a two-stage learning algorithm. In the first stage, we train an adapter model ϕ_N^first for the N-th new task, when it is registered on the platform, based on its own local data. In the second stage, we employ knowledge distillation to transfer knowledge from the existing tasks to the new adapter, which means the weights of this new adapter are updated again in the second stage. To be exact, assume that (N-1) adapters have already been registered on the platform, with weights denoted {ϕ_n}_{n=1}^{N-1}, and denote by ϕ_N^first the weight of the N-th adapter trained in the first stage. With a fixed pre-trained BERT-based model Θ and the existing adapters {ϕ_n}_{n=1}^{N-1} and ϕ_N^first, the fusion-related parameters Ω and the N-th adapter weight ϕ are introduced to learn how to distill knowledge from the existing adapters {ϕ_n}_{n=1}^{N-1} and ϕ_N^first to better solve the N-th task. The training process can be represented as
Interestingly enough, in the N teacher adapters, we not only use the previous (N-1) fully trained adapters ϕ_n as teacher adapters to enable sharing of information between different tasks but also add the N-th partially trained adapter ϕ_N^first obtained in the first stage as a teacher adapter to insert some task specific knowledge. In other words, the output of N teacher adapters can be denoted as 𝐳_l,t,n=g(𝐡_l,t,ϕ_n) for n=1,2,…,N-1 and 𝐳_l,t,N=g(𝐡_l,t,ϕ_N^first). Similar to AdapterFusion <cit.>, our AdapterDistillation dynamically combines different adapters by introducing Query Q_l, Key K_l, and Value V_l at each transformer layer l with its complete set being Ω={Q_l,K_l,V_l}_l=1^L. We employ 𝐳_l,t,n for n=1,2,...,N as the input to the value and key transformation to obtain 𝐳_l, t, n^v =𝐳_l, t, n^⊤𝐕_l and 𝐳_l, t, n^k =𝐳_l, t, n^⊤𝐊_l, respectively. The output of the feed-forward sublayer 𝐡_l,t is used as input to the query transformation to obtain 𝐡_l, t^Q=𝐡_l, t^⊤𝐐_l. Then the output of the adapter fusion 𝐨_l, t can be obtained as 𝐨_l, t = p_l,t^T𝐙_l, t, n^vwith p_l,t=softmax(𝐡_l, t^Q ⊗𝐳_l, t, n^k) and 𝐙_l, t, n^v = [𝐳_l, t, 1^v,𝐳_l, t, 2^v,...,𝐳_l, t, N^v]. Note that we employ p_l,t to learn to weight the adapters with regard to the context. Finally, the distillation loss described in (<ref>) is defined as Distill = 𝐨_l, t-𝐳_l, t, N.It is worth mentioning that in the second stage we jointly optimize the adapter fusion Ω and ϕ so as to obtain the optimal ϕ_N which contains the most useful mixed knowledge from available adapters. Then during the inference stage, we only employ ϕ_N to implement the prediction for the N-th task without considering the adapter fusion part so as to reduce inference time. On the other hand, only employing ϕ_N can also lead to comparable performance as AdapterFusion, which will be shown next.§ EXPERIMENTSTo validate the effectiveness of AdapterDistillation in terms of accuracy, resource consumption and inference time, its performance is evaluated through a practical scenario where Frequently Asked Question (FAQ) retrieval is considered in task-oriented dialog systems. §.§ Experimental SettingTo benchmark AdapterDistillation, we compare with the following four model architectures, namely, BERT + adapters (abbr. as Adapter), fullly fine-tuning BERT model (abbr. as Full), head-only fine-tuning BERT model (abbr. as HEAD) and AdapterFusion. A detailed experimental setting can be found in Appendix <ref>.§.§ Datasets and MetricsWe select 9 existing tenant models from the platform as teacher adapters, covering fields such as medical care, transportation, insurance, shopping, photography, lease and et al, and employ the performance of the 10-th tenant (student adapter) to evaluate the considered models. In order to reduce the variance, we independently choose 8 unregistered tenants from different business domains as the 10-th student tenant. The 8 independent student tenants are from international payments, merchant payments, broadband installation, cross-border payments, merchant signing, human resources, administrative management, and IT support. The data size for each student tenant ranges from 1000 to 5000 which has beendivided into the training, validation, and test dataset with the ratio being 8:1:1. It is worth mentioning that we not only use accuracy and AUC to evaluate the performance, but also use the resource consumption and inference time as additional metrics to indicate the functionality of the models of interest for online practical applications. 
§.§ Performance As shown in Table <ref>, full fine-tuning leads to much better performance than training only the HEAD (a 12.71% accuracy increase), at the cost of many more trainable parameters. Additionally, adapters achieve slightly worse accuracy than full fine-tuning but with only 1.45% extra parameters, which is promising. Table <ref> also shows that AdapterFusion can reduce this performance gap by adding an extra fusion layer to implement knowledge composition, but at the cost of 21.36% extra parameters. Interestingly, the proposed AdapterDistillation method achieves even better overall accuracy than AdapterFusion with far fewer added parameters (1.45% vs. 21.36%). This makes sense since the representations from the teacher adapters have been distilled into the student adapter, so the fusion layer is no longer important for AdapterDistillation during the inference stage. In addition to accuracy and AUC, resource consumption is an important indicator of deployment costs. In terms of the storage space required during the inference stage, the pre-trained Bert-base-Chinese model takes up approximately 391 MB. The fusion module and the adapter module occupy 82 MB and 3.5 MB, respectively. The last classification layer requires 2.3 MB. As a result, beyond the base model, AdapterFusion requires approximately 119.3 MB extra (the fusion module, ten adapters, and the classification layer), whereas AdapterDistillation requires only about 5.8 MB extra (one adapter plus the classification layer) when the 10-th tenant registers on the platform. In Table <ref>, we consider the number of tenants that can be supported by the different methods. It shows that with 100 GB of storage space, AdapterDistillation can support 67 times more tenants than full fine-tuning and 20 times more tenants than AdapterFusion. The results in Table <ref> indicate that AdapterDistillation has significant advantages in resource consumption compared to the others, which become more pronounced as the storage space grows. For online applications, inference time is closely related to the actual user experience. Next, we compare the inference time of the three algorithms across different batch sizes. The results in Table <ref> show that AdapterDistillation has the same inference time as the Adapter method and is significantly better than AdapterFusion. This is reasonable because the extra fusion layer in AdapterFusion requires additional computation at inference. It is worth noting that the inference time of the Adapter/AdapterDistillation method is slightly larger (by about 2.5%) than full fine-tuning, which is caused by the serial insertion of the adapter module. Note that AdapterDistillation is independent of the structure of the adapter itself and can be hot-swapped into any adapter-like method, such as LoRA <cit.>, to maintain the same inference time as full fine-tuning through parallel insertion. In order to verify that the improvement in model performance is due to the sharing of information from different tasks, we remove the current tenant from the teacher adapters and train using only the existing tenants as teachers. Figure <ref> indicates that, compared to including the current tenant among the teachers, the average accuracy decreases only slightly, by about 0.11%, but still outperforms plain adapters by about 1.18%.
This indicates that AdapterDistillation can effectively use multiple sources of extracted information. § CONCLUSIONS AND FUTURE WORK We proposed a novel, plug-and-play multi-adapter knowledge distillation algorithm called AdapterDistillation to implement the sharing of information between different tasks. Specifically, our proposed algorithm consisted of two stages of training. In the first stage of training, task-specific knowledge was extracted by training a student adapter using local data. Then, in the second stage, knowledge from the existing teacher adapters was distilled into this student adapter by optimizing the distillation loss. Note that AdapterDistillation employed only the trained student adapter for inference, which resulted in fast inference and low resource consumption. Our proposed AdapterDistillation algorithm outperformed existing algorithms in terms of accuracy, resource consumption and inference time, meeting the practical scenario where numerous tenants access the platform in a streaming manner. In the future, plugging more advanced adapter structures into AdapterDistillation is an interesting direction to explore. § APPENDICES §.§ Detailed Experimental Setting In all experiments, we use Bert-base-Chinese[https://huggingface.co/bert-base-chinese] as the pre-trained base model and set the classification threshold to 0.5. All models are trained for 10 epochs with the same learning rate strategy as <cit.>. The distillation regularization parameter η in (<ref>) is selected from [e^-2, e^-1, e^0, e^1, e^2]. For AdapterDistillation, we use the same parameter initialization strategy for all key, value, and query matrices and the same hyper-parameters as AdapterFusion to ensure a fair comparison. §.§ Cold Start and Deployment Since some new tenants only have a knowledge base without any annotated data at the beginning, a universal pipeline for automatically building tenant datasets is proposed. The knowledge base contains many knowledge points, each of which corresponds to a standard question and multiple similar questions. Note that the collection of questions belonging to the same knowledge point shares the same answer. We automatically construct datasets through the following steps: 1) Download the knowledge base using the ID of the newly added tenant. 2) Construct positive samples based on the labeled questions and similar questions in the knowledge base. 3) Construct negative samples using the BM25 algorithm <cit.>. During the deployment of the service, we load adapter modules for all tenants on the pre-trained model. All requests from tenants on the platform are directed to this model. During inference, the adapter module belonging to the corresponding tenant is activated based on its name, while those from other tenants are blocked. | http://arxiv.org/abs/2312.16261v1 | {
"authors": [
"Junjie Wang",
"Yicheng Chen",
"Wangshu Zhang",
"Sen Hu",
"Teng Xu",
"Jing Zheng"
],
"categories": [
"cs.LG"
],
"primary_category": "cs.LG",
"published": "20231226070100",
"title": "AdapterDistillation: Non-Destructive Task Composition with Knowledge Distillation"
} |
| http://arxiv.org/abs/2312.16079v1 | {
"authors": [
"Avinash Agarwal"
],
"categories": [
"cs.NI",
"cs.SY",
"eess.SY"
],
"primary_category": "cs.NI",
"published": "20231226145824",
"title": "Coexistence assessment and interference mitigation for 5G and Fixed Satellite Stations in C-band in India"
} |
| http://arxiv.org/abs/2312.16487v1 | {
"authors": [
"Gilles Dowek",
"Murdoch J. Gabbay"
],
"categories": [
"cs.LO"
],
"primary_category": "cs.LO",
"published": "20231227092831",
"title": "Nominal semantics for predicate logic: algebras, substitution, quantifiers, and limits"
} |
Craig Hogan [email protected] 0000-0002-1433-8841]Craig HoganUniversity of Chicago, 5640 S. Ellis Ave., Chicago, IL 60637 0000-0002-0785-4346]Ohkyung KwonUniversity of Chicago, 5640 S. Ellis Ave., Chicago, IL 60637 0000-0003-3315-4332]Stephan S. Meyer University of Chicago, 5640 S. Ellis Ave., Chicago, IL 60637 0000-0002-1312-6513]Nathaniel Selub University of Chicago, 5640 S. Ellis Ave., Chicago, IL 60637 0000-0003-4200-7879]Frederick Wehlen University of Chicago, 5640 S. Ellis Ave., Chicago, IL 60637Causal relationships in conformal geometry are used to analyze angular boundaries of cosmic microwave background (CMB) correlations. It is shown that curvature correlations limited to timelike intervalson world lines that have connected causal diamonds during inflation generate an angular correlation function C(Θ) of gravitationally-induced CMB anisotropy that vanishes in a range of angular separation from Θ= π/2 - arcsin(1/4) to as far as Θ=3π/4. This model-independent symmetry is shown to agree remarkably well witheven-parity and dipole-corrected CMB correlations measured in all-sky maps from the WMAP and Planck satellites. Realizations of the standard quantum field theory cosmologicalmodel are shown to produce comparably small correlation with probabilities ranging from ≃ 10^-4.3 to ≃ 10^-1.5, depending on the map and range of angular separation. These measurements are interpreted as evidence for a causal symmetry based on a basic physical principle not included in the effective field theory approximation to cosmological quantum gravity: quantum fluctuations only generate physicalcorrelations of spacetime curvature within regions bounded by causal diamonds. Theoretical implications and further cosmological tests of this interpretation are briefly discussed. § INTRODUCTION In classical cosmology, gravitational potential perturbations on the largest scales at late times approximately preserve the primordial pattern laid down on those scales at the earliest times <cit.>. In the context of cosmic inflation, the temperature anisotropy of the cosmic microwave background (CMB) on large angular scales is the most direct measurement of primordial quantum gravitational fluctuations because its pattern since the earliest times has been shaped only by gravity <cit.>.In spite of this unique significance, CMB correlations at large angles are generally disregarded in precision cosmological tests, primarily because the standard picture of inflation predicts a large variance in possible outcomes. This “cosmic variance” originates from the quantum model used to compute the statistical properties of primordial fluctuations, based on quantum field theory (QFT) <cit.>: on large scales, the anisotropy is mostly shaped by a small number of long-wavelength modes, with a range of independent amplitudes and phases.At the same time, the CMB correlation function C(Θ) over a range of large angular separation Θ is measured to have absolute values that are anomalously small compared to expected realizations of the QFT model <cit.>. 
This anomaly is customarily interpreted as a statistical fluke, but it could be evidence for a new physical effect, perhaps even a unique indication that QFT does not correctly describe gravitational physics at large length scales. Indeed, it is known that QFT has foundational theoretical inconsistencies with active gravity on large scales, since, among other issues, causal structure itself is not included as part of the quantum system <cit.>. Since cosmological curvature perturbations inherit their spatial pattern from quantum fluctuations of gravity on the scale of causal horizons, they are uniquely suited to search for departures from QFT on these scales <cit.>. In previous work <cit.>, we used classical models of noise composed of perturbative gravitational shock displacements during inflation to estimate correlations from fluctuations coherent on null surfaces, and compared them with the observed large-scale anisotropy of the CMB. In this paper, we do not model the magnitude of nonzero correlations or the quantum dynamics of fluctuations, but instead use inflationary causal structure to identify a range of angular separation over which correlations are predicted to vanish. This purely geometrical, parameter-free approach is based solely on boundaries of classical causal relationships and a specific hypothesis about the causal boundaries of physical quantum phenomena. Our hypothesis is that primordial fluctuations are "causally coherent," in the sense that they only produce nonzero relict correlations within the geometrical boundaries of two-way causal relationships between world lines defined by causal diamonds during inflation. We first describe the basic causal constraints on measurement that motivate this hypothesis. Then, we analyze how boundaries in primordial causal structure map onto angular boundaries of large scale correlation in the CMB, produced by the two main gravitational sources of anisotropy on large angular scales, the Sachs-Wolfe (SW) effect and Integrated Sachs-Wolfe (ISW) effect <cit.>. We show that angular boundaries of causal diamond intersections delimit a new symmetry called a "causal shadow": the angular correlation function C(Θ) of perturbations exactly vanishes over a wide range of angular separation Θ around π/2,

C(π/2 - arcsin(1/4) < Θ < Θ_0) = 0,

where the outer boundary extends to at least Θ_0 ≥ 2π/3 and possibly as far as Θ_0 = 3π/4. This model-independent symmetry is consistent with earlier studies. For example, <cit.> found that C(Θ) in the best Planck maps at Θ = π/2 indeed lies in a range remarkably close to zero:

-0.22 μK^2 < C(Θ = π/2) < +2.16 μK^2,

which indicates that the true correlation is hundreds of times smaller (at least) than the value in typical standard realizations, and closer to zero than all but 0.52% of such realizations.[In that study, the likelihood of CMB maps in the standard picture was found to decrease even more, to 8×10^-5, if the constraint of a dipole-free zero near Θ = 30° was included. A more precise formulation of this additional causal constraint is addressed briefly in the Appendix below, but is not the main topic of this paper.] Up to now, the exact null symmetry predicted over the broad range of Θ in Eq. (<ref>) has been hidden: because all measured maps have their dipole (ℓ=1) mode subtracted, the measured correlation differs from its primordial value everywhere except at Θ = π/2. In this paper, we extend measurements of the causal shadow to the predicted range of Θ.
We implement statistical tests that account for the dipole in a model-independent way, using even-parity and dipole-corrected correlations. Galaxy-subtracted CMB maps are compared with the causal shadow prediction, and with standard inflationary realizations. Our measurements show remarkable agreement with the simple null symmetry described by Eq. (<ref>): departures from zero correlation, which we attribute mostly to errors in Galaxy subtraction, are again orders of magnitude smaller than typical correlations in standard realizations. A rank comparison shows that such small correlations very rarely occur by chance in the standard picture. We interpret this striking result as a signature of a basic physical principle that leads to a causal constraint on quantum gravitational fluctuations: quantum fluctuations only generate physical correlations of spacetime curvature within causal diamonds. Such a universal causal constraint is not consistent with the standard effective quantum field theory approximation for formation of inflationary perturbations, but could arise from a deeper structure of quantum gravity. The constraint on large-angle correlations would not significantly affect the broadly successful late-time concordance of the standard slow-roll inflationary scenario with smaller-scale CMB anisotropy and large-scale cosmic structure, since it is consistent with the standard prediction of a slightly tilted, direction-averaged 3D power spectrum of curvature perturbations. We briefly discuss further observational tests of this interpretation, and its implications for quantum cosmology and fundamental gravitational theory. § CAUSAL RELATIONSHIPS §.§ Conformal causal structure The standard Friedmann-Lemaître-Robertson-Walker cosmological metric can be written as

ds^2 = a^2(t) [c^2 dη^2 - dΣ^2],

where t denotes proper cosmic time for any comoving observer, dη ≡ dt/a(t) denotes a conformal time interval, a(t) denotes the cosmic scale factor, and the spatial 3-metric in comoving coordinates is

dΣ^2 = dr^2 + r^2 dΩ^2,

where the angular separation dΩ in standard polar notation satisfies dΩ^2 = dθ^2 + sin^2θ dϕ^2. Light cones and causal diamonds are defined by null relationships in comoving conformal coordinates,

dΣ = ± c dη.

Thus, in conformal coordinates, cosmological causal relationships are the same as those in flat spacetime. Some key relationships are shown in Figure (<ref>). §.§ Causal relationships during inflation Cosmological inflation <cit.> was introduced to solve a conceptual problem with initial conditions, sometimes called the "horizon problem": if the cosmic expansion slows with time (ä<0), causal connections are only possible over smaller comoving regions in the past, so there is no causal mechanism for generating any kind of correlations in the initial conditions. Inflation solves the main problem by introducing early cosmic acceleration, so that the comoving causal horizon moves closer with time rather than farther away. If the scale factor a(t) undergoes many orders of magnitude of expansion during early acceleration (ä>0), even very distant comoving world lines were once in causal contact. This causal relationship is shown in Figs. (<ref>) and (<ref>): given enough inflation, at a sufficiently early time, any comoving world line at finite separation lies within the past light cone of another at the end of inflation, its inflationary horizon H.
A causal connection in principle provides an opportunity to account for the large-scale near-uniformity of the universe, as well as large-scale, small-amplitude perturbations from quantum fluctuations. §.§ Causal relationships and coherence A measurable phenomenon requires a two-way causal relationship between elements of a physical system. For both classical and quantum systems that are extended in space, the relationship is constrained by the causal structure of spacetime. Quantum mechanics fundamentally describes the world in terms of relationships between elements of systems that ultimately lead to observed correlations. These relationships must also obey causal constraints. Construction of a physical model of a system that is extended in spacetime must include a model for the spacetime coherence of quantum states. The nonlocal character of systems distributed in space and time can have counterintuitive physical consequences that can appear magical or even "spooky." Formally, two-way physical relationships with a timelike interval on a world line are bounded by an invariant object called a "causal diamond," a region enclosed by the future light cone of the initial event and the past light cone of the final event on the interval, which intersect on a spherical boundary. A causal diamond bounds all the possible round-trip physical "relational conversations" between the interval and other locations. For a quantum system, the outgoing light cone bounds the preparation of states in the future of the initial event, and the incoming light cone bounds the reduction of quantum states that lead to classical correlations in the past of the final event. Causal diamonds therefore feature prominently in the analysis of nonlocal quantum measurements, especially for delayed-choice experiments that fully control state preparation <cit.>. The entropy of causal diamonds also provides a thermodynamic (or holographic) foundation for derivations of classical gravity <cit.>. General theoretical arguments indicate a central role for causal diamonds in quantum theories of geometry <cit.>. As a concrete example of a nonlocal coherent geometrical quantum state bounded by a classical causal diamond, consider the effect of gravity in an Einstein-Podolsky-Rosen (EPR) thought-experiment. A positronium atom decays into a pair of photons, whose gravity disturbs a set of clocks on the surface of a sphere <cit.>. The gravitational effect on the clocks can be compared at a single location by an observer in the center, at the original position of the positronium. The measurement event happens at the upper vertex of the causal diamond enclosed by the outgoing gravitational shock and the subsequent incoming null data from the clocks. The gravitational effect of the photons is a large-angle distortion of the causal diamond aligned with the decay axis, with radial displacement and angular dependence independent of radius. Since the decay axis is a quantum degree of freedom, the angular distortion axis of the causal diamond is entangled with the particle decay axis, and remains in a coherent superposition until the measurement occurs. The decay places spacetime into a quantum superposition of macroscopically different shapes that extends over causal diamonds of any size. The foregoing remarks motivate the following hypothesis: causal diamonds define spacetime boundaries of physical causal relationships, and hence of physical correlations that arise from quantum fluctuations.
From this hypothesis, we derive the causal bounds on angular correlation below. Boundaries of causal relationships in conformal spacetime are shown in Figs. (<ref>) and (<ref>). §.§ Causal formation of perturbations We assume as usual that cosmic perturbations arise from quantum fluctuations during inflation. Inflationary horizons created by the accelerating expansion convert quantum fluctuations into classical perturbations, a quantum state reduction that eliminates indeterminacy of space-time curvature associated with a quantum superposition of different possible outcomes. Vacuum fluctuations "freeze" into a classical configuration when their wavelength approximately matches the scale of the inflationary horizon H. Freezing occurs on smaller comoving slices of H as inflation proceeds, and finally ends on a microscopic scale when cosmic acceleration stops at the "era of reheating." We argue here that the usual approximations used to describe the conversion of quantum fluctuations into classical curvature perturbations do not correctly account for physical causal relationships. In the standard scenario, an initial vacuum state is prepared acausally in relation to the frame defined by an unperturbed classical background universe. This state already assigns a coherent amplitude and phase to each field mode, whose random values specify a realization of an ensemble. Then, the gradual freezing of each mode is controlled by a wave equation: coherent oscillations of each comoving quantum field mode cool by cosmic expansion into a frozen classical state, determined by the initial phase. The global configuration thus frozen is interpreted as a classical metric perturbation, so the global reduction of the state is observer-independent. We can say that the standard QFT inflation scenario describes the conversion of quantum vacuum fluctuations into classical curvature perturbations as expansion-driven cooling of randomly initialized coherent standing plane waves. This approximation omits crucial physical features of a quantum system bounded by actual inflationary horizons. The actual inflationary horizon of each world line is an incoming spherical null surface that terminates on the world line at the end of inflation. A horizon defines a one-way boundary of causal relationships with its world line, like the horizon of a black hole: information only passes through it in one direction, an asymmetry not consistent with the assumed coherent superposition of opposite propagation directions in a standing wave. Furthermore, an actual physical horizon is a sharp causal boundary on a spherical null surface, which is neither planar nor wavelike. On such a surface, preparation and reduction of states of modes on all scales entangle them in an observer-dependent way that is not accounted for in the standard picture. As discussed above, physical quantum states entangle only within causal diamonds. Our hypothesis is that quantum fluctuation states create correlations of classical perturbations only as far as their entanglement within physical causal boundaries, that is, actual horizons. Coherent states of fluctuations in causal diamonds nested within horizons around every world line lead to relict correlations between world lines with the sharp angular causal bounds derived below. The analysis below uses constraints imposed by causality and the hypothesized causal coherence of quantum fluctuations to derive boundaries of where entanglement, and thus correlation, is possible in principle.
Unlike the QFT set-up, these geometrical constraints capture causal constraints on coherent state preparation and reduction, including sharp comoving spherical boundaries of incoming and outgoing information defined by intersections of different horizons. The horizon scales in the two systems are the same, but the standard model does not apply sharp causal boundaries of spatial, temporal, and directional coherence on quantum states. These contrasting causal constraints on the conversion of inflationary fluctuations into classical perturbations ultimately depend on different models for connecting nonlocal quantum phenomena with extended structures in spacetime. These models in turn depend on how locality and causality emerge from a quantum system, which remains an unsolved problem in quantum gravity. Measurements of causal symmetries in CMB maps are important because they can constrain this deeper level of fundamental theory. § CAUSAL CORRELATIONS IN THE CMB §.§ Angular spectrum and correlation function In standard notation, the angular pattern of a quantity Δ on a sphere can be decomposed into spherical harmonics Y_ℓm(θ,ϕ):

Δ(θ,ϕ) = ∑_ℓ ∑_m Y_ℓm(θ,ϕ) a_ℓm.

The harmonic coefficients a_ℓm then determine the angular power spectrum:

C_ℓ = [1/(2ℓ+1)] ∑_{m=-ℓ}^{+ℓ} |a_ℓm|^2.

The angular correlation function is given by its Legendre transform,

C(Θ) = (1/4π) ∑_ℓ (2ℓ+1) C_ℓ P_ℓ(cosΘ).

It corresponds to an all-sky average C(Θ) = ⟨Δ_1 Δ_2⟩_Θ for pairs of points 1, 2 separated by angle Θ, or, equivalently, averages of every point multiplied by an average on an azimuthal circle at a polar angle Θ away from the point. §.§ CMB anisotropy on large scales On large angular scales, temperature anisotropy in the CMB is largely determined by primordial curvature perturbations Δ of the cosmological metric on a thin sphere at the location of the last scattering surface, known as the Sachs-Wolfe effect (SW) <cit.>:

δT(θ,ϕ) ∝ Δ(θ,ϕ).

Gravity also introduces anisotropy on large angular scales much later, during the dark energy dominated era, via the late-time integrated Sachs-Wolfe effect (ISW) <cit.>. This effect is generated by primordial perturbations in 3D, as the CMB light propagates through space on our past light cone. In the linear regime, it is determined by the same invariant local scalar potential Δ that preserves its original primordial spatial distribution from the end of inflation. However, anisotropy from this effect comes from perturbations at comoving distances much smaller than the last scattering surface, and from correlations of matter at different radii. To include this effect, it is necessary to analyze causal constraints in three comoving spatial dimensions. We will neglect other physical effects, such as radiation transport and Doppler motion at recombination, which do not modify the angular spectrum significantly at spherical harmonics with ℓ ≲ 30 <cit.>. §.§ Geometrical bounds on angular correlation §.§.§ Geometrical causal shadows We adopt the general premise that primordial perturbations are formed by a quantum process that only produces correlations of perturbations at events that share a causal history, as defined by causal diamonds during inflation. In general, there are then finite regions of angles in which no angular correlations can occur, due to constraints on entanglement that arise from geometrical relationships between causal diamonds. We refer to a region of angles where no entanglement can occur as a "causal shadow."
(<ref>), cosmological causal diamonds for intervals on comoving world lines have equal conformal time duration before and after the time of their bounding sphere. Causal diamonds for conformal time intervals symmetric around the end of inflation define a special set of boundaries, because they bound the inflationary period when quantum vacuum fluctuations freeze into the classical metric. As shown in Fig. (<ref>), they determine the maximal comoving extent of incoming and outgoing information from a world line during inflation up to any given conformal time η_0. Also shown in Fig. (<ref>), a set of causal diamonds is nested within the inflationary horizon H, the past light cone at the end of inflation that bounds information incoming to any world line. Each causal diamond has a boundary of outgoing information where the inflationary horizon intersects a spherical surface of constant conformal time, or “horizon footprint.” Taken together, circular intersections of these comoving spherical surfaces define the boundaries of outgoing and incoming null information, and angular boundaries of coherent causal relationships during inflation. Whether or not these boundaries map onto regions where the correlation function vanishes, and the angular structure of the causal shadows, depends on how frozen classical perturbations emerge from quantum fluctuations of geometry.

§.§.§ Classification of causal boundaries Here we consider causal angular boundaries according to three geometrical criteria. We call these three shadows the “general 3D shadow,” the “general thin sphere shadow,” and the “maximal thin sphere boundary,” respectively. The derivation of all three boundaries is predicated on the assumption that correlated primordial perturbations are connected by a causally coherent quantum process. The third, the maximal thin sphere boundary, requires an additional assumption, explained below, about independence of orthogonal directions. The most general constraint (the “general 3D shadow”) applies to an angular separation observed by A, between a distant comoving location B and any other location C along A's past light cone closer than B. This constraint includes cross correlations in three dimensions. It generalizes the thin-sphere causal shadows considered in earlier work <cit.>, which allows us to include causal constraints on both SW and ISW anisotropy, and thereby account for a nearly exact symmetry for gravitational anisotropy in the CMB. As described below, this generality allows for a completely parameter-free, model-independent test, based on the even-parity component of correlation. The “general thin sphere shadow” similarly applies for angular correlations constrained by intersection of causal diamonds, but only refers to angular relationships between B and other points at the same comoving separation from A. This constraint is conceptually the simplest kind of causal relationship to visualize: any correlation with B vanishes beyond the angular separation where its horizon intersects A's horizon. A thin sphere is a good approximation at large angular separation for the CMB anisotropy near the last scattering surface, so it is a good approximation for SW anisotropy, but not for correlations generated by the ISW effect at late times. The “maximal thin sphere boundary” applies to causal angular relationships for correlations bounded by null planes (or causal diamonds of infinitely distant points) in orthogonal directions.
This boundary would apply to possible exotic antihemispherical correlations of emergent primordial potential on spherical horizon footprints.

§.§.§ General 3D causal shadow Consider measurements performed from world line A at some time η_0 after inflation ends (Fig. <ref>). Measurements are affected by structure on its past light cone. Its horizon is a sphere S_A of comoving radius R_A equal to cη_0. This sphere bounds a causal diamond with equal conformal duration η_0 before and after the end of inflation. As discussed above, suppose that boundaries of causal correlations are defined by where causal histories intersect during inflation. The causal history of A is bounded by its inflationary horizon H_A, and the causal history of B is bounded by its inflationary horizon, H_B. For A to measure correlations with a point B that lies on S_A, a third point anywhere in the interior of S_A must share a causal history with both A and B, in the following sense: its world line during inflation must lie within causal diamonds of both A and B that intersect each other. That is, an interval of its world line must be shared by A and B causal diamonds. For this to happen, a causal diamond of A must start prior to where the A world line intersects with H_B. As shown in Fig. (<ref>), this criterion defines a minimal diamond with a radius R_A/2. The circular intersection of its spherical footprint S_AB/2 with S_B is the farthest angle from B of any locations whose perturbations are correlated with the direction of B on S_A. Since this bound applies for correlations with B in any direction, it directly translates (via Eq. <ref>) into a causal shadow in C(Θ) outside this angular separation. Thus, as shown in Fig. (<ref>), C(|Θ - π/2| < Θ_0, 3D) = 0, where Θ_0, 3D = arcsin(1/4) ≃ 14.48^∘. This construction is scale invariant, since it depends only on the conformal causal structure shown in Fig. (<ref>). The general 3D causal shadow applies to angular cross correlation between two horizon footprints of different radii: structure on any horizon footprint only correlates with other foreground footprints larger than half its radius. The shadow applies to angular cross correlation of gravitational anisotropy determined by null propagation through foreground matter shells, including late-time ISW.

§.§.§ General thin-sphere causal shadow The same considerations can be applied to the autocorrelation of S_A with itself, determined by the causal history of two points that lie at the same comoving distance from A. The angular causal boundary then occurs at the intersection of S_A and S_B, Θ_0, 2D = π/6, so the shadow cone has an opening angle Θ = π/3. This causal boundary has a straightforward intuitive meaning. At the time the AB relationship is freezing out—when B passes through H_A—other comoving locations on S_A with angular separation greater than Θ = π/3 lie outside of H_B, and therefore have fluctuations that are uncorrelated with B. Simply put, locations on S_A with angular separation from B greater than Θ = π/3 lie outside the causal diamond of S_B, so there is not time during inflation to form a two-way causal relationship with B. Such an exact primordial symmetry of perturbations on a thin sphere results in an approximate CMB symmetry, a vanishing of Sachs-Wolfe anisotropy without ISW added.
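The boundary angles quoted above follow from elementary conformal geometry and are trivial to evaluate. A quick numerical check (ours, not part of the paper's analysis code) of the shadow ranges:

```python
import numpy as np

theta_3d = np.arcsin(0.25)   # general 3D shadow half-width, Eq. (<ref>)
theta_2d = np.pi / 6.0       # general thin-sphere shadow half-width

print(np.degrees(theta_3d))  # ~14.48 degrees, as quoted in the text
# C(Theta) must vanish for |Theta - pi/2| < theta_0 in each case:
shadow_3d = (np.pi / 2 - theta_3d, np.pi / 2 + theta_3d)
shadow_2d = (np.pi / 2 - theta_2d, np.pi / 2 + theta_2d)
```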
§.§.§ Maximal thin-sphere causal boundary This boundary is not a shadow in the same sense as those just described, but represents a causal bound on possible exotic correlations in relation to an antipodal direction from coherent displacements of an emergent horizon footprint from its center, as shown in Fig. (<ref>) and explained further below. As shown in Fig. (<ref>), the maximal boundary corresponds to the angle from an axis at which points on the axis are equidistant to the observed sphere and the observer, Θ_0, max = π/4. A null plane that intersects a horizon footprint at this separation from an axis arrives at the observer at the same time as an orthogonal null plane. Within this angle, points on any footprint of H_A can receive information from events on the axial null trajectory on H_A prior to the end of inflation, so they can have a causal correlation with an axial displacement. The maximal boundary assumes independence of classical orthogonal directions relative to classical infinity, which is a more restrictive assumption than the causal-diamond overlap criterion adopted for causal shadows derived above.

§.§.§ Causal shadow at π/2<Θ<3π/4 The relationships shown in Fig. (<ref>) show boundaries of locations that share direct, classical two-way causal relationships with two world lines A and B during inflation. Causal correlations viewed by A are confined to angular separations on a horizon footprint within a “shadow cone” that has an opening angle π/2-Θ_0 with respect to the B direction: that is, there is zero correlation at all angular separations Θ>π/2-Θ_0. In particular, there is no causal connection of points on the horizon footprint surfaces at angles π/2<Θ<π. Because A itself lies on H_B, we should also allow for the possibility of “exotic” causal correlations at Θ>π/2 from directional displacement of A relative to its horizon, along the direction of the AB axis. These correlations also have an angular causal boundary, now defined with respect to the antipodal point B'. The angular boundary for causal connection with an axial displacement is given by the maximal causal boundary (Eq. <ref>), as shown in Fig. (<ref>). At Θ > 3π/4, information from events on the AB' axis of A's inflationary horizon can reach points on S_A before the end of inflation, so in principle it can produce a causal correlation of perturbations on S_A in the AB' antipodal cap. In previous work <cit.> we introduced models of this coherent antihemispheric correlation, based on time shifts by a classical gravitational null planar shock, where the analogous effect is a shift of the observer's clock due to a shock displacement in a particular direction. Since the aim in the current study is to test the null symmetry without model-dependent assumptions, we adopt a conservative approach that avoids models of exotic antihemispheric correlations: we simply posit a causal shadow in relation to Θ=π with constraints like those in relation to Θ=0. This constraint allows us to test for causal shadows with zero correlation in a range Θ < π/2+Θ_0, max or Θ < π/2+Θ_0, 2D, but does not exclude possible exotic nonzero correlation closer to the antipode.

§.§.§ Asymmetric causal shadow in total C(Θ) CMB anisotropy from SW and ISW effects arise at very different comoving distances, only some of which can create correlations at Θ>π/2. For this reason, the causal shadow of total correlation including both effects is not symmetric around Θ=π/2.
Late-time ISW is produced mainly by foreground structure at redshift z≲ 1, when the effect of dark energy modifies the background expansion significantly from that of a matter-dominated universe. Matter at these low redshifts is much closer than the radius of the smallest causal diamond entangled with the last scattering surface. Since the ISW effect from matter entangled with exotic displacement of that surface is negligible, ISW correlations essentially vanish for Θ>π/2-arcsin(1/4). As explained above, let us allow for the possibility of correlation at Θ>π/2 from exotic antipodal thin-sphere SW anisotropy on the last scattering surface. The causal shadow in this case should conservatively extend to Θ=π/2+Θ_0, 2D=2π/3, so the total shadow extends at least over the asymmetric range, C([π/2-arcsin(1/4)]<Θ<2π/3)=0. In principle the thin-sphere causal shadow could extend to the maximal boundary at Θ=π/2+Θ_0, max=3π/4, in which case the causal shadow of total correlation extends over a wider range, C([π/2-arcsin(1/4)]<Θ<3π/4)=0. We implement tests of both of these possibilities.

§ CAUSAL SYMMETRY OF CMB MAPS

§.§ Dipole subtraction and parity separation It is not possible to measure the true primordial pattern because the dipole components a_1m have been removed from the maps to compensate for the local motion relative to the local cosmic rest frame. This motion is induced by perturbations, including our nonlinear orbits within the galaxy and the Local Group <cit.>. Nevertheless, a small fraction of the subtracted dipole is part of the intrinsic large-angle primordial pattern on spherical causal diamond surfaces and contributes to correlation in the angular range of causal shadows. Thus, the whole shadow symmetry can only become apparent when the intrinsic portion of the dipole is included. Consequently, if the shadow symmetry is a true symmetry of the observed CMB temperature, then there must exist a dipole that can be added to the observed CMB temperature map that realizes the shadow symmetry. The total correlation (Eq. <ref>) is a sum of even and odd Legendre polynomials, which are respectively symmetric and antisymmetric about Θ=π/2. To produce zero correlation over a range symmetric around Θ=π/2, no combination of even functions can cancel any combination of odd ones, so if an angular correlation function vanishes over a range [π/2 - ϕ, π/2 + ϕ] symmetric about Θ = π/2, the even contributions and the odd contributions to the angular correlation function must vanish independently over that range. This property allows a direct, model- and dipole-independent test of causal symmetry, that uses only even-parity correlation. Over the angular range of a general 3D causal shadow (Eqs. <ref> and <ref>), where total correlation vanishes in both hemispheres, the sum of even terms must vanish on its own, independently of any dipole or model parameters. Furthermore, if the primordial angular correlation vanishes over an arbitrary range [α, β], then the sum of the even and odd Legendre polynomials for ℓ > 1 must be able to be cancelled over that range by an unknown dipole, a function of the form 𝒟(Θ) = (3/4π) C_1 cos(Θ), where C_1 ≥ 0. Over the angular range of the total asymmetric shadow (Eq. <ref> or <ref>), the sum of the even and odd Legendre polynomials for ℓ > 1 must vanish after addition of a dipole of unknown amplitude. If the causal shadow symmetry is real, a perfect map of primordial anisotropy should satisfy both of these criteria.
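Both criteria can be evaluated directly from a set of harmonic coefficients. A minimal sketch (our illustration, not the authors' pipeline; the function names are ours) of the Legendre sum of Eq. (<ref>), its even-parity restriction, and the mock dipole term:

```python
import numpy as np
from scipy.special import eval_legendre

def correlation(cl, theta, ells):
    """Partial sum of Eq. (<ref>): (1/4pi) sum_(l in ells) (2l+1) C_l P_l(cos Theta).

    cl is indexed so that cl[l] is the power at multipole l; theta is in radians.
    """
    x = np.cos(np.atleast_1d(theta))
    return sum((2 * l + 1) * cl[l] * eval_legendre(l, x) for l in ells) / (4 * np.pi)

def c_even(cl, theta, lmax=30):
    """Even-parity correlation: only even multipoles 2, 4, ..., lmax."""
    return correlation(cl, theta, range(2, lmax + 1, 2))

def dipole_term(theta, c1):
    """Mock intrinsic dipole D(Theta) = (3/4pi) C_1 cos(Theta), with C_1 >= 0."""
    return 3.0 * c1 * np.cos(theta) / (4.0 * np.pi)
```

Restricting the sum to even ℓ implements the parity argument above; the dipole term is the one free function allowed when testing the total asymmetric shadow.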
Our aim is to measure how closely the observed CMB data approximates these causal shadow predictions, and how its agreement compares with standard model realizations.

§.§ Data As explained in <cit.>, we use all-sky CMB maps made with subtracted models of Galactic emission, in order to minimize correlation artifacts introduced by masks. Our analysis is based on foreground-corrected maps of the CMB temperature based on the fifth and third public release databases of the WMAP and PLANCK collaborations, respectively. In the case of PLANCK, we use several different maps based on different techniques for modeling the Galaxy. Recognizing that the noise properties of the foreground-corrected maps are not well characterized and that the 2-point function is correlated between angles, we use the variation between foreground subtraction methods and experiments as a proxy for correlation function uncertainty. We only compare integrated residuals of measured values and standard model realizations of 2-point correlation functions. For this paper, we used the python wrapper for the Hierarchical Equal Area isoLatitude Pixelization (HEALPix) scheme <cit.> on maps at a resolution defined by N_side = 256. We preprocessed the maps by converting them to this resolution and removing their respective dipole spherical harmonic moments. We conduct all measurements and operations on each map independently.

§.§ Correlations of maps and realizations To generate standard model realizations, we used the Code for Anisotropies in the Microwave Background (CAMB) <cit.> to calculate C_ℓ^SM with the following six cosmological parameters from the Planck collaboration <cit.>: dark matter density Ω_c h^2 = 0.120; baryon density Ω_b h^2 = 0.0224; Hubble constant H_0 = 67.3 km s^-1 Mpc^-1; reionization optical depth τ = 0.054; neutrino mass m_ν = 0.06 eV; and spatial curvature Ω_k = 0.001. For each realization, we calculated the angular power spectrum up to a cutoff of ℓ_max = 30 by Eq. <ref>. Then, we determined C(Θ) by summing Eq. <ref> up to the sharp cutoff ℓ_max = 30. Correlation functions of realizations and CMB maps are shown in Fig. (<ref>). According to the causal shadow hypothesis, the even-parity correlation should vanish in the range of angles where both the odd and even contributions vanish (Eq. <ref>). As verified quantitatively by the rank comparison described below, the maps are indeed much closer to zero over this range than almost all realizations. Both the prediction and measurement in this comparison are model- and parameter-free, so the close agreement with zero in the expected range is striking. Outside this range, where ISW anisotropy does not have even parity, the unmeasured dipole must be included to see the null symmetry, since odd- and even-parity harmonics must both be included to describe the total causal shadow. For a map with a causal shadow symmetry, an added cosine function (Eq. <ref>) reproduces the effect of restoring the unobserved intrinsic dipole. Standard realized correlation functions include only ℓ>1 harmonics. For the comparison shown in Fig. (<ref>), each realization is modified with a function of the form in Eq. (<ref>) to minimize its residuals from zero. For realizations, this term does not have any relation to an intrinsic physical dipole: it is a “mock dipole” added to estimate how frequently the sum of ℓ>1 harmonics in Eq. (<ref>) comes as close to the maps as those of a causal shadow model.
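For a Gaussian sky, the empirical spectrum of Eq. (<ref>) obeys Ĉ_ℓ ∼ C_ℓ^SM χ²_{2ℓ+1}/(2ℓ+1), so full-sky realizations can be drawn directly from the theory spectrum without simulating maps. A sketch (ours, not the authors' released code; `cl_sm` is assumed to be the CAMB output indexed by ℓ):

```python
import numpy as np

def realize_spectra(cl_sm, n_real, lmax=30, seed=0):
    """Draw n_real full-sky realizations of C_ell for 2 <= ell <= lmax."""
    rng = np.random.default_rng(seed)
    ells = np.arange(2, lmax + 1)
    dof = 2 * ells + 1                                  # number of a_lm per ell
    chi2 = rng.chisquare(df=dof, size=(n_real, ells.size))
    return ells, cl_sm[ells] * chi2 / dof               # shape (n_real, lmax - 1)
```

Each realization's C(Θ) then follows from the Legendre sum sketched earlier, with the mock dipole of Eq. (<ref>) fitted per realization.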
As is the case for the even parity correlation, it appears that the measured C(Θ) in the total shadow range (Eqs. <ref> or <ref>) is closer to zero than almost all standard realizations, even when they have a mock dipole added.

§.§ Residuals The striking visual impression of a null symmetry in the measured correlation can be verified quantitatively by a rank comparison of residuals. Let C_ℓ denote the angular power spectrum of a measured CMB temperature map or a standard model realization. Then, we can define this power spectrum's even-parity angular correlation function C_even(Θ) as C_even(Θ) = (1/4π) ∑_ℓ = 2, 4, 6, ...^ℓ_max (2ℓ+1) C_ℓ P_ℓ(cosΘ), where ℓ_max = 30. Let {Θ_i, [α, β]}_i = 1^N denote a uniformly spaced lattice of points in the range [α, β]. Then, define the residual Δ_even, [α, β](C(Θ)) ≡ ∫_α^β |C_even(Θ)|^2 dΘ ≈ ∑_i = 1^N [C_even(Θ_i, [α, β])]^2 · ((β - α)/N), and the residual Δ_best-fit, [α, β](C(Θ)) ≡ ∫_α^β |C̃_β(Θ)|^2 dΘ ≈ ∑_i = 1^N [C̃_β(Θ_i, [α, β])]^2 · ((β - α)/N), where C̃_β = C + 𝒟_best-fit and 𝒟_best-fit is the dipole contribution that minimizes the residual. Next, let Δ_even ≡ Δ_even, [π/2 - Θ_0, π/2 + Θ_0], Δ_general ≡ Δ_best-fit, [π/2 - Θ_0, 2π/3], Δ_maximal ≡ Δ_best-fit, [π/2 - Θ_0, 3π/4]. We use these three residuals as a measure of how compatible a given power spectrum {C_ℓ}_ℓ > 1 is with the causal shadow symmetry. Each of the integrals must vanish for such a power spectrum that exactly agrees with the causal shadow symmetry. In practice, we found that N = 2000 is a sufficiently high lattice resolution to approximate the integrals among different data sets and standard model realizations with negligible error.

§.§ Rank comparison of shadow symmetry with standard realizations Our three tests are as follows. First, we generate N = 2·10^6 standard model realizations. We then evaluate Δ_even(C(Θ)), Δ_general(C(Θ)), and Δ_maximal(C(Θ)) for these standard model realizations, such as those shown in Fig. (<ref>), as well as the different measured CMB maps. For a given residual Δ_even, Δ_general, or Δ_maximal, the variance between the values of this residual for different measured CMB maps gives a measure of the sensitivity of the residual to Galactic model uncertainties. The top panel of Fig. (<ref>) shows cumulative probability, the fraction of standard realizations with Δ_even smaller than the value shown on the horizontal axis, and the vertical lines show the values of Δ_even for different measured CMB maps. We find that a small fraction of standard model realizations, ranging from 10^-1.5 (for WMAP) to 10^-4.3 (for NILC), come as close to zero Δ_even as the measured CMB temperature maps. The middle panel shows the same quantities evaluated for Δ_general, with the dipole-term adjustment described above. We again find that a small fraction of standard model realizations, ranging from about 10^-1.8 (for WMAP) to 10^-2.8 (for Commander), come as close to zero Δ_general as the measured CMB temperature maps. The values of C_1 for the best-fit dipole for the maps NILC, SMICA, COMMANDER, WMAP without its monopole, and WMAP with its monopole are approximately 365, 341, 322, 392, and 426 μK^2, respectively. The lower panel shows the same quantities evaluated for Δ_maximal, with the dipole-term adjustment described above. We again find that a small fraction of standard model realizations, ranging from about 10^-2.3 (for WMAP) to 10^-3.2 (for Commander), come as close to zero Δ_maximal as the measured CMB temperature maps.
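The lattice residuals and the constrained dipole fit reduce to a one-parameter least-squares problem with a closed form. A sketch (ours) under the convention C_1 ≥ 0:

```python
import numpy as np

def residual(c_of_theta, alpha, beta, n=2000, fit_dipole=False):
    """Delta_[alpha, beta] of Eq. (<ref>); c_of_theta maps angles to C(Theta)."""
    theta = np.linspace(alpha, beta, n)
    c = c_of_theta(theta)
    c1 = 0.0
    if fit_dipole:
        # least-squares amplitude of (3/4pi) C_1 cos(Theta), clipped to C_1 >= 0
        a = -np.sum(c * np.cos(theta)) / np.sum(np.cos(theta) ** 2)
        c1 = max(0.0, 4.0 * np.pi * a / 3.0)
        c = c + 3.0 * c1 * np.cos(theta) / (4.0 * np.pi)
    return np.sum(c ** 2) * (beta - alpha) / n, c1
```

The same minimization also returns the best-fit amplitude C_1, which is how dipole values like those quoted next can be reproduced.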
The values of C_1 for the best-fit dipole for the maps NILC, SMICA, COMMANDER, WMAP without its monopole, and WMAP with its monopole are approximately 384, 390, 389, 440, and 471 μK^2, respectively. To evaluate the sensitivity of this test to the chosen cutoff ℓ_max, we repeated the aforementioned tests for every ℓ_max value ranging from 25 to 35. The significance of our results, i.e., the fraction of standard model realizations having Δ_even as low as the measured CMB temperature maps, changed by less than 2·10^-3; the fraction of standard model realizations having Δ_general as low as the measured CMB temperature maps changed by less than 3·10^-4; and the fraction of standard model realizations having Δ_maximal as low as the measured CMB temperature maps changed by less than 2·10^-4.

§ INTERPRETATION As seen in Fig. (<ref>), correlations measured within calculated causal shadow boundaries appear to be consistent with zero true correlation, in the sense that deviations from zero are comparable with differences between the data from different foreground-subtracted maps. The symmetry is particularly striking in the direct measurement of even-parity correlation over the range of angles where it is expected to vanish. The data are also consistent with nearly-zero total correlation over the calculated range for the total shadow, after allowing for an unobserved dipole. The standard interpretation is that the tiny correlation of the real sky represents a statistical anomaly, but this occurs in only a small fraction of realizations. A rank comparison (Fig. <ref>) shows that the sky agrees with zero better than almost all standard realizations. The most significant effect appears in the even-parity symmetry: Planck maps show deviations from zero several orders of magnitude smaller than typical standard realizations, which happens in only a fraction 10^-2.8 to 10^-4.3 of realizations, depending on the map. Our proposed interpretation is that the small correlation is due to a fundamental causal symmetry of quantum gravity. It results directly from the basic physical principle of causal coherence: quantum fluctuations only generate physical correlations in regions bounded by causal diamonds. In this interpretation, the small measured departure from zero correlation is attributed to measurement error dominated by contamination by the Galaxy. This view is consistent with measured variation among the different maps.

§ CONCLUSION The surprisingly small absolute value of the large-angle CMB correlation function has been known since the first measurements with COBE <cit.>. Better measurements from WMAP <cit.> and then Planck <cit.> show values successively closer to zero. This anomaly and others have been thoroughly analyzed (e.g., <cit.>, <cit.>, <cit.>), but are not generally thought to present a compelling challenge to standard cosmological theory, partly because there has not been a precisely formulated and physically motivated alternative expectation. As shown here, the anomalously small correlation becomes even more striking if the possibility of an intrinsic, unobserved dipole component is accounted for. Direct measurements of even-parity correlation, and of total correlation after allowing for an intrinsic dipole, both reveal an amplitude for the 2-point function that is significantly smaller than generally appreciated.
Indeed, the maps appear to be consistent with zero correlation over a specific range of angular separation that coincides with that expected from an absence of correlations between timelike intervals on world lines without a shared causal history, determined by overlap of causal diamonds during inflation. We account for this apparent symmetry with a new physical hypothesis. Our direct geometrical reasoning explains how a causal correlation shadow like that observed can emerge from standard, covariant, two-way causal relationships, and precisely specifies its angular boundaries. Unlike some other candidate explanations for large-angle CMB anomalies, it is based only on well established principles that govern causal physical relationships. Our analysis takes account of all significant gravitational contributions of primordial perturbations to large-angle CMB correlation, including ISW. We thus interpret the causal shadow not as an anomaly, but as an exact symmetry of quantum vacuum fluctuation states that are coherent on causal diamonds. This quantum “spookiness” of gravitational fluctuations is not consistent with the standard model of inflationary perturbations based on quantum field theory, but would be required for a theory of quantum gravity to address long-wavelength causal inconsistencies with QFT. Our interpretation is consistent with current tests of classical concordance cosmology, which depend only on a nearly scale-invariant initial 3D power spectrum of perturbations, and with applications of QFT that do not involve quantized gravity. More tests are possible of the two-point correlation shadows studied here, as well as other causal CMB signatures, and these will improve with all-sky maps of the Galaxy that allow better measurements of the true CMB pattern on the largest scales. Universal causal constraints on correlations can be further tested with maps of late-time large-scale structure, especially with surveys of large-scale linear structure that can reveal whether inflationary perturbations around other locations reveal the same exotic causal constraints as the CMB. Some of these tests are discussed briefly in the Appendix. Standard inflation theory assumes, in spite of long-known theoretical difficulties, that QFT describes the behavior of physical quantum gravity on large scales. Our analysis reveals signatures of a real-world symmetry that contradicts this assumption, but conforms with coherent quantum geometrical fluctuations expected in a deeper, causally consistent fundamental theory. It appears that large-scale correlations of CMB anisotropy may provide direct, unique and specific evidence for a deeper theory. For data access, we acknowledge use of the Legacy Archive for Microwave Background Data Analysis (LAMBDA), part of the High Energy Astrophysics Science Archive Center (HEASARC), and the NASA/IPAC Infrared Science Archive, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.

§ APPENDIX

§.§ Implications of causal coherence for standard cosmology A causal symmetry of primordial perturbations requires a causal spacetime constraint on quantum coherent states that is inconsistent with the standard QFT model used to compute classical perturbations that form from quantum vacuum fluctuations. At a basic level, a causal shadow is incompatible with standard cosmic variance, which predicts vanishing angular correlation only for a set of angular separations of measure zero.
In the standard QFT quantum model, orthogonal modes are assumed from the outset to commute; hence, the model of horizons and mode freezing for each axis is a separable one dimensional system, which leads to the standard cosmic variance for realized classical angular correlation. Causal shadows over a finite range of angles can only appear if there is gravitational entanglement with causal structure, so that orthogonal components of fluctuating position and momentum do not commute for quantum states on the scale of whole causal diamonds. As noted previously, such holistic behavior may be needed in any case to address the IR problems of QFT <cit.>, and can be accommodated in principle by “holographic” theories of gravity <cit.>. Causal coherence has previously been invoked to motivate direct laboratory searches for quantum fluctuations of causal structure <cit.>. Although causal coherence of gravitational fluctuations is not possible in QFT, it is consistent with the myriad predictions of QFT that do not include gravitational quantum interactions: cosmological perturbations are the only currently measured phenomenon that depends on active quantum gravity. A causal shadow in the CMB would be a unique direct signature of quantum gravity that conflicts with QFT, and as such, would provide a specific empirical constraint on holographic quantum theories of gravity. Since coherence of fluctuations in causal diamonds requires a fundamentally different quantum model for the formation of perturbations, it alters aspects of cosmological theory that depend in detail on the internal dynamics of the QFT model <cit.>. For example, causally-coherent gravitational fluctuations are much larger than QFT fluctuations for a given inflationary expansion rate H; the variance of curvature on the horizon scales like Δ^2 ≃ H in Planck units, as opposed to Δ^2 ≃ H^2 in the QFT picture. Thus, estimated parameters for the effective inflaton potential based on fluctuation amplitude and tilt need to be modified in a causally-coherent theory. In spite of these fundamental theoretical differences, causal coherence does not alter predictions based on classical relativity. In the context of late-time concordance ΛCDM cosmology, the main observable constraints on standard inflation depend only on a direction-averaged 3D power spectrum of relict curvature perturbations. In standard theory, QFT produces inflationary perturbations with the required nearly-scale-invariant power spectrum, given an effective inflaton potential with a suitably tuned amplitude and slope. The same spectrum would also be produced in a holographic model with a smaller, and suitably slowly-changing inflationary expansion rate. If causal shadows result from causal coherence of quantum geometry, that coherence extends without bound to horizons of any macroscopic scale. The reduction of a quantum state to the specific observed classical pattern of perturbations is an ongoing process that unfolds after inflation on larger scales, encompassing and revealing causal diamonds that begin earlier during inflation. The existence of causal shadows thus fits naturally into a quantum cosmology that includes entanglement with states of a “participatory observer,” even on the largest scales <cit.>.
§.§ Statistical isotropy and locality Formulations of cosmological theories often begin with posited a priori symmetries of the whole cosmological system, such as a “Cosmological Principle” that nature does not define a preferred location or direction, or the Galilean principle that all observers are equivalent. The mathematical implementation of these ideas in a quantum theory depends on how quantum nonlocality and “spookiness” are implemented in quantum gravity. The standard formulation is based on the symmetry of the classical homogeneous and isotropic FLRW cosmological solutions as a background. With a Fourier transform, the unbounded 3D space permits a unique equivalent description in comoving k⃗ space. This setup leads to a quantum system that creates a “statistically isotropic” global ensemble of spatially infinite perturbations, and an ensemble of realizations with standard cosmic variance. In any particular realization of the ensemble, a finite part of the system is manifestly not isotropic, since a horizon of any size is distorted by anisotropic perturbations from a still larger scale. By contrast, a causally-coherent, relational theory does not assume a preexisting 3+1D cosmological background. The system still adopts the basic principle that neither the laws of physics nor the universe as a whole define a preferred direction or location in spacetime, but in a cosmological (as in any) realization of a quantum system, any observer is of necessity (and equivalently) privileged. There is no preordained infinite physical background spacetime; instead, inflation generates a large but finite system that approximates a classical FLRW solution with emergent standard causal relationships.

§.§ Further tests of primordial causal coherence

§.§.§ CMB maps with better Galaxy models The causal shadow prediction is unambiguous, and falsifiable in principle. The cleanest way to disprove it would be to show that there is nonzero primordial correlation in the shadow. Conversely, the most convincing empirical reason to doubt the QFT model comes from the large likelihood contrast between the symmetry and the standard model realizations, which already exceeds 10^4 for some maps. Both of these comparisons are currently limited by uncertainties in Galactic foregrounds. For a convincing test, it is necessary to show that a causal shadow either is or is not present in the CMB pattern itself to a high degree of precision. In the analysis presented above, we have used the published all-sky, unmasked, Galaxy-subtracted CMB maps prepared by the satellite teams. We have not undertaken a close comparative study of the Galaxy models used to create those maps, but it is plausible that existing data from WMAP and Planck could be used to create new maps optimized for the purpose of testing a null correlation hypothesis over a specific range of angular separation. A definitive test may require better spectral measurements than current maps, as proposed in new satellite concepts such as LiteBird and PIXIE.

§.§.§ Exotic correlations in 3D large scale structure Universal causal constraints apply to the angular distribution of primordial perturbations around any point.
Linear evolution of density perturbations on the largest scales approximately preserves the primordial pattern at large angular separation, in relation to any comoving location. Even on scales where the primordial pattern is modified by baryon acoustic oscillations before recombination, the dark matter dominates the overall density and gravitational potential by a significant factor. Thus, residual effects of causal constraints should appear in subvolumes of large, complete spectroscopic surveys, such as BOSS, DESI, and Euclid <cit.>. Their signatures should appear in higher-order correlations in three dimensions. A particularly distinctive signature of causal coherence should appear as global parity violation. CMB correlations at the largest separations appear to have a negative correlation function, C(Θ→π)<0, which corresponds to an excess of odd over even spectral perturbation power at large angular separation. This significant excess of net odd parity power is detected even at higher resolution, at least to ℓ≃ 30 <cit.>. This parity violation, if it is attributed to a universal property of holographic quantum gravitational fluctuations, should affect perturbations on all linear scales. They are not erased by baryonic oscillations, which change phase but not parity. An exotic origin of parity violation may account for recent detections of parity violation in the large-scale galaxy distribution <cit.> that are difficult to account for in the standard scenario of a QFT-based ℙ-symmetry violation <cit.>.

§.§ Other causal signatures in the CMB pattern We have shown how causal boundaries of coherent gravitational fluctuations, in the form of axially symmetric circular intersections of spherical causal diamond boundaries on the sky, can be preserved in universal statistical properties of the CMB pattern. In general, the same geometrical relationships from causal coherence should lead to other identifiable statistical signatures that go beyond the correlation shadows of C(Θ) considered above. For example, a causal diamond is not entangled with horizons of world lines centered farther away than twice its horizon footprint radius, as shown by the detachment of S_AB/2 from horizons of world lines more distant than B in Fig. (<ref>). Thus, footprints should have universal, orientation-independent angular power spectra in equatorial stripes bounded by circles with the opening angles of the 3D causal shadow (Eqs. <ref>, <ref>) — a more general constraint than the correlation zero in the same range. The complementary causal boundary of entanglement with smaller causal diamonds, also seen in Fig. (<ref>), is the maximum angular separation of world lines on a spherical horizon footprint S_B that have a two-way causal connection after leaving the horizon H_B, but before the end of inflation. Points on S_B are disentangled in this sense if they are separated by more than half the radius of S_B, or Θ_AB/2 = 2arcsin(1/4) ≃ 28.96^∘. Correlations with any direction at smaller angular separation than Θ_AB/2 are independent of those at larger separation, so their local behavior approximates the standard QFT picture.
If disentanglement produces independent even- and odd-parity correlations in circular patches of radius Θ_AB/2 around any point and its antipode, there should be zeros in C(Θ) at Θ_AB/2 and π-Θ_AB/2 in dipole-free maps. It is noteworthy that zeros of CMB correlation are indeed measured at Θ≃ 30^∘ and Θ≃ 150^∘ in unmasked CMB maps <cit.>. These and other exotic causal signatures can be differentiated from generic QFT realizations, and from generic anomalies, by specific, precisely calculable and measurable causal boundaries in the angular domain. They are most conspicuous at angular separations much larger than in previous studies of N>2-point functions designed mainly as probes of primordial nongaussianity in the QFT framework. | http://arxiv.org/abs/2312.16147v1 | {
"authors": [
"Craig Hogan",
"Ohkyung Kwon",
"Stephan S. Meyer",
"Nathaniel Selub",
"Frederick Wehlen"
],
"categories": [
"astro-ph.CO",
"gr-qc"
],
"primary_category": "astro-ph.CO",
"published": "20231226181252",
"title": "Causal bounds on cosmological angular correlation"
} |
| http://arxiv.org/abs/2312.16308v1 | {
"authors": [
"Fabio Di Cosmo",
"Alberto Ibort",
"Giuseppe Marmo",
"Patrizia Vitale"
],
"categories": [
"hep-th",
"math-ph",
"math.MP"
],
"primary_category": "hep-th",
"published": "20231226192832",
"title": "Symplectic realizations and Lie groupoids in Poisson Electrodynamics"
} |
Computing Balanced Solutions for Large International Kidney Exchange Schemes When Cycle Length Is Unbounded Márton Benedek1 Péter Biró1 Gergely Csáji1 Matthew Johnson2 Daniel Paulusma2 Xin Ye2 January 14, 2024 =========================================================================================================== We use the statistical properties of the Shannon entropy estimator and Neyman-Pearson statistics to study the predictability of ultra-high frequency financial data. We develop a statistical test for the predictability of a sequence based on empirical frequencies. We study stylized facts that cause price predictability such as persistence of order signs, autocorrelation of returns, and volatility clustering. We show that the degree of randomness grows with the increase of aggregation level in transaction time. We also find that predictable days are usually characterized by high trading activity, i.e., days with unusually high trading volumes and numbers of price changes. We find a group of stocks for which predictability is caused by a frequent change of price direction. We perform multiple testing for sub-intervals of days to identify whether there is predictability at a specific time period during the day.

§ INTRODUCTION One of the fundamental questions in finance is the predictability of asset prices. Currently, there is a set of tools designed to predict prices in the markets. For instance, traders rely on technical analysis of stocks to make a profit. The predictive power of technical analysis has been the object of many investigations, starting with the pioneering work <cit.>. We refer to <cit.> for a review of academic studies on this subject until 2007. For example, it has been found to generate excess profits in currency markets <cit.> including cryptocurrencies <cit.>. Profitable trading strategies can be constructed using forecasting methods such as neural networks <cit.>. Various trading methods can be developed in order to increase profits starting from predictions of future prices that are even only slightly more accurate than a martingale. For instance, an automatic trading method proposed in <cit.> outperforms a range of alternative approaches[According to <cit.>, abnormal returns based on price behavior in the market decline after academic publications about the profitable strategies associated with this behavior. It is indeed natural that the degree of market efficiency can change with time and is related to environmental factors characterizing “market ecology", such as the number of competitors in the market, profit opportunities, and how the market participants adapt to changing environments. Under the adaptive market hypothesis <cit.>, much of what behavioral finance cites as counterexamples to economic rationality is consistent with the evolutionary principles of financial interactions: competition, adaptation and natural selection; see <cit.>.]. Randomness of financial data aggregated to daily frequency was investigated in the literature <cit.>. Studies analyzing the predictability of intraday prices were conducted using hourly <cit.> and minute <cit.> frequencies. Prices at millisecond and second frequencies were analyzed in <cit.>. This research progresses further toward a microscopic examination of financial time series. Ultra-high frequency data is defined as the full record of transactions and their associated characteristics.
This research is dedicated to the predictability of ultra-high frequency data. Long memory of price returns at ultra-high frequency is a stylized fact that incorporates predictability into the time series. Lillo and Farmer <cit.> conducted a statistical test, concluding that both the signs of market orders and executed limit orders exhibit long-memory processes. They attributed this long memory of order signs to news arrivals and order splitting, which is offset by fluctuations in market liquidity. Therefore, the long memory does not contradict efficiency of markets when prices incorporate all available information about future values <cit.>. Moreover, new orders arrive so quickly that it is difficult to predict the next price before it appears in such a short period of time. Without taking into account transaction costs, we cannot ensure that predictability at ultra-high frequency indicates the presence of profitable trading strategies. However, we investigate the level of predictability as a function of the length of steps in transaction time. Then, we examine the periods in which the predictability of prices is present. We explore what price characteristics distinguish days with predictability from others. For instance, we show that the autocorrelation of price returns is statistically significant during predictable days. Moreover, for most stocks, we observe a high probability of consecutive price directions across several transactions, aligning with the long-memory characteristics of price return signs. We devise a test for randomness of data starting from the estimation of Shannon entropy. Entropy is defined as an averaged measure of uncertainty about a symbol appearing in a sequence generated by a random source <cit.>. Maximum uncertainty arises when all symbols from a finite alphabet are independently generated with equal probabilities. A common method for entropy estimation is calculating empirical frequencies of blocks of symbols <cit.>. Previous instances where Shannon entropy served as a measure for assessing randomness in financial time series include <cit.>. The entropy of price returns of assets traded on the Moscow Stock Exchange was studied in <cit.>. The time-varying entropy of meme stocks, which experienced sudden price surges driven by social media communities, was investigated in <cit.>. In the case of independent and identically distributed symbols, the difference between the entropy estimate and its maximum follows a χ^2-distribution <cit.>. Using quantiles of the asymptotic χ^2-distribution, we can quickly identify significantly low values of entropy. We make a step forward and use a modification of the entropy estimate that allows us to test uncertainty for a symbolic sequence when the probabilities of the symbols need not be equal. The test statistic is based on all overlapping blocks of symbols and takes into account their dependence. Calculating the statistic is straightforward, requiring only the estimation of frequencies for preselected block lengths. In a recent work <cit.>, randomness of financial prices at daily frequency was tested using conditional probabilities. The authors tested if probabilities of price increase and decrease are the same conditionally on price history. We provide a test that allows the probabilities that a price moves up or down to differ, while requiring them not to depend on previous history. We test whether past price changes are helpful in predicting future price movement.
To estimate Shannon entropy, we utilize discretization that distinguishes between positive and negative returns. A test for independence related to permutation entropy of price increments was introduced in <cit.>. Considering blocks of k prices, frequencies of k! events need to be calculated. In contrast, our statistic requires the calculation of only 2^k frequencies. Approximate entropy was used in <cit.> to measure irregularity of financial time series. The authors fixed the length of blocks k=3. We use a larger length for the analysis based on a given sequence length. Entropy of a singular value decomposition was applied to test market efficiency in <cit.>. Monte Carlo simulations were employed by the authors to establish confidence intervals for low entropy values. In our test, we evaluate whether new outputs remain independent of the sequence history, leveraging statistics with known asymptotic distributions. Applying the test based on empirical frequencies of price directions, we get rid of Monte Carlo simulations. This article introduces four key contributions. First, we propose a test for randomness of a symbolic sequence. It can be applied with varying numbers of distinct symbols, presuming the sequence length is sufficiently large. Additionally, the individual probabilities associated with each symbol's appearance need not be equal. Second, we investigate the predictability of tick-by-tick data. We show that the degree of predictability decreases when we aggregate simulated and real data by a number of transactions. Third, we investigate the empirical properties of price returns and the difference in the properties of returns for predictable and non-predictable time series. Some of the properties are stylized facts of price returns such as fat tails, autocorrelation of returns, and price jumps. We reveal that the probability of consecutive returns displaying the same sign (persistence) serves as a defining feature characterizing predictable sequences. For a group of stocks, this probability is significantly higher during predictable days. However, we also observe that the probability of repeating symbols is statistically low for predictable days in some stocks. For most stocks, we also find a high autocorrelation value inherent in predictable days. Other properties indicative of predictable days encompass heightened daily trading volumes and a substantial number of price changes. Finally, we localize intervals that we call predictable and find the duration of periods of predictability. We employ the Šidák correction <cit.> to conduct multiple tests to assess price predictability within a single trading day. We detect intervals with different degrees of predictability inside a group of predictable trading days. Our analysis involves examining the duration of predictability periods in transaction time. We demonstrate that typically there exists a single predictable interval within a predictable day. It is observed that two predictable intervals generally do not occur consecutively. In other words, following an interval with detected predictability, the subsequent interval does not display a significant level of predictability. However, considering all transactions of the SNAP stock (Snap Inc.), we are able to detect several predictable time intervals going in a row. In Section <ref>, we propose a statistical test for investigating the predictability of symbols of a sequence.
The significance of the entropy estimate and of the Neyman-Pearson statistic is determined by χ^2-distributions with suitable degrees of freedom. In Section <ref>, we present a chosen group of assets with a set of characteristics such as mean price, daily trading volumes, and number of transactions. Section <ref> is dedicated to applying the predictability test to prices based on models involving long memory of order flows. In Section <ref>, we test the predictability of real data. We measure the predictability of daily time intervals while considering diverse block lengths, aggregation levels, and months. Section <ref> explores the distinguishing factors between predictable and non-predictable sequences across various stocks and trading days. Section <ref> investigates the predictability of pairs of price changes. In Section <ref>, we localize the presence of predictability within predictable days. Section <ref> concludes the paper.

§ STATISTICAL TESTS FOR RANDOMNESS OF SYMBOLIC SEQUENCE We introduce a statistical test designed to evaluate the predictability of a sequence. The input for a statistical test is a realization X of a stationary random process 𝒳 with symbols from a finite alphabet A={0,1,⋯,s-1}. Symbols of an alphabet with size s can be denoted by integers from 0 to s-1 without loss of generality: X={x_1,x_2,…,x_n}. The goal of the test presented in this section is to verify if symbols of the sequence are independently distributed. Our null and alternative hypotheses are the following. ℋ_0: The occurrence of a new symbol in sequence X is independent of the sequence's prior history. ℋ_a: Appearance of a new symbol depends on past observations of the sequence X. The test is based on empirical frequencies of blocks of symbols introduced by Marton and Shields <cit.>. These empirical frequencies serve to estimate Shannon entropy, which is utilized as a measure of uncertainty. First, we divide a sequence into non-overlapping blocks with length 1<k<n: X̂=x̂_1,x̂_2,…,x̂_n_b, where x̂_t={x_(t-1)k+1, x_(t-1)k+2, …, x_tk} and n_b=⌊n/k⌋. x̂_t represents a number from 0 to s^k - 1 in the numerical system with base s. We calculate s^k empirical frequencies f̂_j of blocks from X̂: f̂_j = ∑_t=1^n_b I(x̂_t=a_j), a_j ∈ A^k. Here and further, I is an indicator function and a_j ≠ a_l when j ≠ l. Then, the entropy estimate is defined as Ĥ = -∑_j=0^s^k - 1 (f̂_j/n_b) ln(f̂_j/n_b), where ln() is the natural logarithm with the convention 0 ln 0 = 0. According to <cit.>, the entropy bias defined below (the scaled distance from the maximum of entropy) follows a χ^2-distribution with s^k - 1 degrees of freedom if the probabilities of all blocks of symbols, x̂_t, are equal: B = 2n_b(k ln s - Ĥ). When all probabilities of blocks of symbols are equal, the process is fully unpredictable and the entropy attains its maximum, k ln s. We can test unpredictability of a sequence using the entropy bias and the distribution under the null hypothesis, χ^2_s^k - 1. If the probabilities of appearing symbols (blocks with length 1) are not equal, then the entropy estimate Ĥ has an asymptotically normal distribution <cit.>. The mean and standard deviation of the normal distribution depend on the entropy value and probabilities of blocks, see <cit.>. Now, we propose a statistic based on overlapping blocks of symbols that we apply for testing predictability. For the test, we use the Neyman-Pearson criterion <cit.>, which contains the likelihood of the sequence under the hypothesis of unpredictability. We use all blocks with length 0<k-1<n-1.
X̅=x̅_1,x̅_2,…,x̅_n-k+2, where x̅_t={x_t,x_t+1,…,x_t+k-2}. We construct the statistic in Eq. <ref>, which has a χ^2-distribution under ℋ_0. A similar test statistic for Markov chains was considered by Billingsley <cit.>: D = 2∑_ij f_ij ln[(n-k+1) f_ij/(f_·j f_i·)], where f_ij is the frequency of the event in which the block numbered i in the sequence X̅ is followed by a block with last symbol j ∈ A, and f_·j=∑_i f_ij, f_i·=∑_j f_ij; 0 ≤ i ≤ M-1, where M=s^(k-1) is the number of blocks of k-1 symbols. We note that f_ij are also empirical frequencies of blocks of k symbols. The convention is 0 ln 0 = 0 and 0 ln(0/0) = 0. The asymptotic distribution of the statistic D is described in the Lemma provided below. Consider a stationary process with symbols from a finite alphabet A={0,1,⋯,s-1} and its realization {x_t}_t=1^n. The empirical frequencies of the k-blocks of the realization are as follows: f_ij = ∑_t=1^n-k+1 I(x̅_t=a_i) I(x_t+k-1=a_j), a_i ∈ A^(k-1), a_j ∈ A. If the hypothesis of unpredictability, ℋ_0, holds true, then the Neyman-Pearson (NP) statistic (Eq. <ref>) converges in distribution to χ^2 with (s^(k-1)-1)(s-1) degrees of freedom, i.e., 2∑_ij f_ij ln[(n-k+1) f_ij/(f_·j f_i·)] → χ^2((s^(k-1)-1)(s-1)). We take k=[0.5 log_s n] as the length of blocks, which is proved to be admissible and suggested for estimating empirical frequencies in <cit.>[See Sections 2.3.d and 3.2 of <cit.>]. We provide a sketch of the proof for the distribution of D from Eq. <ref> and subsequently verify it using Q-Q plots in Appendix Section <ref>. We show that the proposed test statistic is valid even if the probabilities of appearing symbols are not equal. This is possible because of the denominator in Eq. <ref>, which mitigates differences in the probabilities of symbols inside the logarithm. We calculate p-values associated with the values of B and D and the degrees of freedom of the corresponding χ^2-distributions. If a p-value is less than 0.01, we reject the hypothesis of no predictability at the significance level α=0.01; that is, we reject ℋ_0 when p-value<0.01. We call intervals where ℋ_0 is rejected predictable. The NP-statistic D is similar to the Kullback–Leibler divergence <cit.> between empirical probabilities of blocks of symbols and the probabilities obtained under ℋ_0: KL(f_ij/(n-k+1), f_·j f_i·/(n-k+1)) = ∑_ij (f_ij/(n-k+1)) ln[(n-k+1) f_ij/((n-k+1) f_·j f_i·)] = (1/(n-k+1)) ∑_ij f_ij ln[(n-k+1) f_ij/(f_·j f_i·)] + ∑_ij (f_ij/(n-k+1)) ln(n-k+1) = D/(2(n-k+1)) + ln(n-k+1).

§ DATASET We explore limit order book data <cit.> downloaded from LOBSTER (www.lobsterdata.com). We consider the executions of visible and hidden orders of a group of assets. The chosen stocks represent diverse industries, differing in characteristics such as average price, volatility, number of transactions, waiting times between transactions, and trading volumes. Additionally, the analysis includes the ETF SPY, designed to track the S&P 500 Index. The timeframe under consideration spans from 01.08.2022 to 21.11.2022, encompassing a total of 80 trading days. Table <ref> presents the tickers of all selected assets along with their respective properties. For each day, the considered daily time interval is from 9:30 to 16:00, that is, 390 minutes in total. Times of transactions are recorded with the precision of one nanosecond. Considering each transaction that occurred, we work with data in transaction time <cit.>. Discretization is made by distinguishing between positive and negative returns: 0 corresponds to a price decrease, 1 corresponds to a price increase.
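Both statistics reduce to block counting and can be computed quickly. A minimal sketch (our illustration, not the authors' released code; symbols are assumed to be integers 0,…,s-1):

```python
import numpy as np
from scipy.stats import chi2

def entropy_bias_test(x, s, k):
    """Entropy bias B of Eq. (<ref>) from non-overlapping k-blocks."""
    x = np.asarray(x, dtype=np.int64)
    n_b = len(x) // k
    w = s ** np.arange(k - 1, -1, -1)             # base-s encoding of a block
    blocks = x[: n_b * k].reshape(n_b, k) @ w
    p = np.bincount(blocks, minlength=s**k) / n_b
    p = p[p > 0]
    h_hat = -np.sum(p * np.log(p))                # entropy estimate, 0 ln 0 = 0
    b = 2.0 * n_b * (k * np.log(s) - h_hat)
    return b, chi2.sf(b, df=s**k - 1)             # statistic and p-value

def np_test(x, s, k=None):
    """Neyman-Pearson statistic D of Eq. (<ref>) from overlapping blocks."""
    x = np.asarray(x, dtype=np.int64)
    n = len(x)
    if k is None:
        k = max(2, int(0.5 * np.log(n) / np.log(s)))   # k = [0.5 log_s n]
    w = s ** np.arange(k - 2, -1, -1)
    # overlapping (k-1)-block index i_t followed by the symbol x_{t+k-1}
    prefix = np.lib.stride_tricks.sliding_window_view(x, k - 1)[: n - k + 1] @ w
    f = np.zeros((s ** (k - 1), s))
    np.add.at(f, (prefix, x[k - 1:]), 1.0)             # f_ij over n-k+1 positions
    fi = f.sum(axis=1, keepdims=True)                  # f_i.
    fj = f.sum(axis=0, keepdims=True)                  # f_.j
    with np.errstate(divide="ignore", invalid="ignore"):
        d = 2.0 * np.nansum(f * np.log((n - k + 1) * f / (fi * fj)))
    dof = (s ** (k - 1) - 1) * (s - 1)
    return d, chi2.sf(d, df=dof)
```

For the binary alphabet used below (s = 2), a time interval is flagged as predictable when the returned p-value falls below 0.01.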
Thus, the alphabet A is {0,1} and a symbolic sequence is obtained according to the binary discretization of Eq. <ref>: s^(2)_t = 0 if r_t < 0, and s^(2)_t = 1 if r_t > 0, where r_t = ln(P_t/P_(t-1)) are price returns and P_t is the price at time t. All 0-returns are removed.

§ PREDICTABILITY OF LIMIT ORDER BOOK

§.§ Modeling ultra-high frequency data We first apply the proposed method for testing predictability on simulated data. Executed orders and trade signs are generated using models that replicate order splitting, herd or chartist behavior, and mean-reverting processes <cit.>. Positive autocorrelation in order flow was previously observed in <cit.> and <cit.>, where the authors demonstrated that autocorrelation diminishes with increasing aggregation by the number of orders. The λ model proposed in <cit.> reproduces the property of positive autocorrelation decreasing with larger time lags. The λ model introduces a fluctuating number of hidden orders that are divided into equal pieces and submitted gradually. We simulate 80 sequences of signs according to the λ model with length 10^5 and parameters calibrated in the paper[The number of hidden orders is N=21, the parameter of the Pareto distribution describing volumes is α=1.63, and the probability of a new hidden order is λ=0.38.]. We present the fraction of predictable sequences for time lags from 1 to 50 in Figure <ref> for the λ model and the two models described below. For small time lags, all sequences are predictable; the fraction then decreases, but not monotonically. Another explanation of the persistence of the signs is herd behavior <cit.>, when traders execute their orders according to the price trend, sometimes against their private information. A model of herd behavior is presented in <cit.>. The parameters of the calibrated model of this behavior suggest that the information obtained by the traders is noisy, which creates uncertainty about the events that occurred. The order driven (OD) model, where traders rely on both a fundamental price value and the history of trades, is proposed in <cit.>. The predictability of order signs (buy/sell orders) of the OD model[The parameters are taken from the article <cit.>. These parameters decrease the impact of noisy traders on the deviations of a price from its fundamental value. The authors of <cit.> introduced a modified order flow model incorporating traders' learning and adaptation. It was demonstrated by these authors that price changes within their model display the characteristic of long memory.] drops significantly for time lags larger than 1. However, the predictability of price directions is quite persistent for increasing aggregation level in the number of transactions, as shown in Figure <ref>. An aggregation level is the number of transactions taken as one time step. All 80 sequences generated by the OD model are predictable for aggregation levels from 1 to 50 due to the high probability that the price changes direction in the next time step. Lastly, we consider the trade superposition (TS) model proposed in <cit.>. This model posits that each price change results from previous trades, with a specific trade's impact defined by a propagator that diminishes over time[The model's parameters, sourced from the paper, specify the propagator as 2.8×10^-3/(l+20)^0.42. The logarithm of volumes follows a normal distribution N(5.5,1.8) and the noise terms have a standard deviation of 0.01.].
Figures <ref> and <ref> display the fractions of predictable sequences for order signs and price returns, respectively, based on the TS model. §.§ Analysis of real data Our analysis of real data commences by examining the limit order book of the AAPL stock. We focus on August 1 and August 2 of the year 2022 to demonstrate the behavior of the NP-statistic D and the entropy bias B for various block lengths. Figure <ref> illustrates the duration in seconds before each transaction for these two specific days. Figure <ref> illustrates the NP-statistic with the 99% confidence bounds associated with ℋ_0. The mean value of the χ^2-distribution of the NP-statistic under ℋ_0 equals its number of degrees of freedom. In Figure <ref>, the entropy bias with the corresponding confidence intervals is given. For all values of k, the sequences are predictable according to the statistics B and D. On August 1, the most frequent block of length 2≤ k≤ 5 is a sequence of 0s. However, for k=6,7 the most frequent block is a repetition of 1s. On August 2, the most frequent blocks for the corresponding values of k are 00 and 000. When 4≤ k≤ 8, the most frequent event is the repetition of 1 k times. This observation aligns with the well-documented long memory associated with return signs. Our interest lies in identifying additional factors contributing to predictability beyond symbol repetition due to long memory. Consequently, we proceed by analyzing aggregated high-frequency data. We use transaction time, that is, we aggregate by a number of transactions. We record the last available price for each time step. We examine the predictability of the assets over several months and with different aggregation levels. We plot the fraction of days classified as predictable for the four months under consideration in Fig. <ref>. There is a noticeable number of days without predictable patterns, even without data aggregation, for a group of stocks (INTC, SNAP, F, CCL). As the aggregation level increases, the fraction of predictable days decreases, although not strictly monotonically. We could not establish a clear correlation between the predictability of assets and the trading months. For instance, a higher level of aggregation is required in August to diminish the predictability of the SNAP price. That is, August displays a greater number of days with significant predictability in price returns compared to the autumn months. October stands out as the month with the highest number of predictable days for TSLA. The results obtained from the NP-statistic and the entropy bias suggest that the largest fraction of predictable days for INTC and CCL is observed in November. With the given precision of nanoseconds, some transactions happen at the same time. These simultaneous events might represent scenarios where the volume of a market order surpasses the volume of the best buy or sell limit order. Another scenario involves the automatic execution of market orders from different traders at a specific price. To mitigate the impact of such events on the analysis, we aggregate volumes and consider the final available price for each nanosecond showing trading activity. Figure <ref> presents the fraction of predictable days for the two statistics and two datasets: the full record of transactions and the record with transactions aggregated within each nanosecond.
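The nanosecond aggregation just described amounts to a simple group-by; a sketch with pandas (the column names 'time_ns', 'price', 'volume' are our own convention, not LOBSTER's native header):

import pandas as pd

def aggregate_by_nanosecond(trades: pd.DataFrame) -> pd.DataFrame:
    """Collapse simultaneous executions: one row per nanosecond with trading
    activity, keeping the last available price and the summed volume."""
    grouped = trades.groupby("time_ns", sort=True)
    return pd.DataFrame({
        "price": grouped["price"].last(),     # final price within the nanosecond
        "volume": grouped["volume"].sum(),    # aggregated volume
    }).reset_index()

trades = pd.DataFrame({
    "time_ns": [1, 1, 2, 5, 5, 5],
    "price":   [10.0, 10.1, 10.1, 10.0, 9.9, 10.0],
    "volume":  [100, 50, 30, 20, 20, 10],
})
print(aggregate_by_nanosecond(trades))        # 3 rows: prices 10.1, 10.1, 10.0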
The exclusion of simultaneous transactions leads to a reduction in the predictability of price return signs. The empirical findings from Figures <ref> and <ref> indicate a noticeable decrease in the degree of predictability as the aggregation level of transactions increases. The decay of predictability with the aggregation level is consistent with the results obtained for data simulated from the TS model of Bouchaud et al. <cit.>. §.§ Statistics of predictable time intervals Bouchaud et al. <cit.> investigated statistical properties of the limit order books of several stocks. In particular, the authors stated that the distribution of changes in limit order prices exhibits a power-law tail. Engle and Russell <cit.> noted that the longest durations between transactions appear in the middle of trading days. According to the authors, clustering of transactions appears because of the size of the bid-ask spread and the gathering of informed traders. U-shapes in the frequencies of large trades, small trades, and market orders were also discovered in <cit.>. In another paper, Engle <cit.> showed that intraday volatility has a similar pattern and attains its minimum in the middle of a trading day. Moreover, significant coefficients of an ARMA(1,1) model were detected. Highly dependent microstructure noise was reported by <cit.>. According to <cit.>, changes in stock prices between transactions are associated with trading volumes. Some stylized facts, including fat tails of price returns, volatility clustering, and the leverage effect, were discussed in <cit.>. For a more comprehensive understanding and detailed discussions regarding market and limit orders, we refer to the works <cit.>. We have shown that some assets exhibit unpredictable prices on a fraction of days. Now, we consider different parameters of the price return time series and check whether there is a dependence between them and predictability. The list of parameters follows. * First, we calculate the number of non-zero returns, that is, the length of the symbolized sequence, and the fraction of 0-returns. * Then, we record the length of blocks, k. The number of price returns used as past observations to test independence is k-1. * We compute the empirical probabilities of observing blocks with identical symbols (p̂(0…0)+p̂(1…1)) to determine whether predictability appears because the price moves in the same direction. We multiply the sum of the two empirical probabilities by 2^k to remove the dependence on k. * We calculate |p̂(1)-p̂(0)| to check whether predictability is caused by a difference in the numbers of price increases and decreases during a trading day. We also record the magnitude of the daily change in price to determine whether predictability appears when the price changes significantly. For the same purpose, we record the mean value of the price returns. * We are interested in the autocorrelation of non-zero returns as well as in the autocorrelation of their magnitudes. The autocorrelation of magnitudes is a proxy for volatility clustering. * Then, we consider the distribution of price returns. We fit a t-distribution to the empirical price returns[Here, we use scipy.stats.t.fit in Python.] and record the degrees of freedom ν, the scale, and the shift parameters. The smaller the value of ν, the fatter the tails of the price returns. * We are interested in whether there is a significant difference in trading volumes. * We also compare the fraction of jumps detected among all price returns. For the detection of jumps, we use the method described by <cit.>. To employ this test on ultra-high-frequency data, we average price returns as suggested by <cit.>[We use the square root of the number of price returns as the window size in the method of <cit.>. The method of <cit.> requires pre-averaging of price returns. We use the same number of transactions for aggregation and pre-averaging. Jumps are defined at the significance level of 1%.]. We conduct a statistical test to examine the differences in mean values between predictable and unpredictable days. A p-value below 0.05 indicates a significant difference at the 95% confidence level. In the case of the AAPL stock, all 80 days are determined to be predictable using the NP-statistic, while 79 predictable days are detected via entropy estimation. To collect days where the hypothesis of unpredictability cannot be rejected, we aggregate price returns. We start from the aggregation level a=15. Mean values of the specified parameters for the AAPL stock are presented in Table <ref>, while Table <ref> displays the parameter comparison between predictable and unpredictable days for all assets. Predictable days exhibit significantly larger numbers of price movements and trading volumes. Additionally, autocorrelation values for non-zero returns and their magnitudes are notably higher on predictable days. Moreover, the probability of repeating the same symbol is significantly elevated during days with predictable price returns. To mitigate predictability resulting from the persistence of signs, we increase the aggregation level. With an aggregation level of a=30, only three days (02.08, 20.09, 05.10) are identified as predictable. Then, we divide the days into two groups after aggregating by a=30, using the test statistic B derived from the entropy estimation of Eq. <ref>. There are 14 predictable days; the results are given in Table <ref> in Appendix Section <ref>. Even after aggregation by 30 transactions, the predictable days have a higher probability of repeating the direction of the price. They are also characterized by a larger difference in the frequencies of price increases and decreases. To mitigate this difference, the median of the price returns could be used instead of zero in the discretization of Eq. <ref>. Further, we use the NP-statistic D, which takes the difference between the frequencies of symbols into consideration by design. For the Microsoft stock, we discover 26 predictable days after aggregating the data by a=10 transactions. According to the test for the difference in the mean value of (p̂(0…0)+p̂(1…1))2^k, the price direction is more persistent on predictable days. With a slightly larger aggregation level of a=15, this difference is eliminated and only four predictable days remain. These days are listed in Table <ref> in Appendix Section <ref>. For the stock TSLA, we identify 35 predictable days with aggregation a=25. On these days, we observe a higher number of price changes, a stronger autocorrelation of non-zero returns, and increased probabilities of blocks repeating a symbol. Increasing the aggregation level by 5 at a time, we observe that the probability of repeating a symbol is significantly high even when a=60. However, by further increasing the aggregation level to a=65, only 4 predictable days remain (05.08, 14.09, 15.09, 08.11), and the previously significant differences disappear. Regarding the influence of news on price predictability, it is worth noting that on August 5, Tesla shareholders approved a 3-1 stock split.
On September 14, Tesla faced a lawsuit for false advertising of its autopilot technology. Additionally, on November 8, Tesla recalled over 40 thousand cars[The news items are taken from edition.cnn.com and reuters.com.]. These instances of significant events or news might be associated with predictable price behavior at high frequencies. Traders tend to react more actively to new information, leading to increased trading activity reflected in price changes and higher trading volumes. In general, the high trading activity observed during predictable days across all considered assets suggests a possible connection between predictability and the arrival of news or significant events. For a more in-depth exploration of how prices react to news, we refer to the review <cit.>. For the stock INTC, we identify 44 predictable days using the aggregation level a=2. These days notably differ in terms of daily changes and the autocorrelation of returns. However, as we increase the aggregation level to a=7 transactions, only 8 predictable days remain, characterized by significantly higher numbers of non-zero returns. At an aggregation level of a=10, the only predictable day is October 13. For the stock LLY, the set of distinguishing characteristics is similar to the case of INTC when a=5. At a higher aggregation level of a=10 transactions, only three predictable days (03.08, 29.09, 28.10) are found, with a significantly high probability of repeating symbols. For the stock of Snapchat, we find 53 predictable days without employing aggregation. These predictable days do not exhibit a large probability of repeating symbols. The distinguishing features of the predictable days are a larger fraction of 0-returns, an increased mean return, a high autocorrelation of returns and of their absolute values, and a smaller scale of the fitted t-distribution. The case of the stock CCL is the only one where the percentage of price jumps is significant and higher for predictable days. We present the results for the stock CCL in Table <ref> in Appendix Section <ref>. Last, we consider the price of the ETF SPY with a=5 in Table <ref> in Appendix Section <ref>. We discover that the price returns of predictable days have fatter tails than the returns of unpredictable days. Using the aggregation level a=10, we remain with 9 predictable days, presented in Table <ref>, with no significant difference in the probabilities of repeating symbols. §.§ Predictability of pairs of signs For the majority of the considered assets, we find that predictability is associated with a high frequency of blocks with the same price direction. However, for three stocks, SNAP, F, and CCL, we fail to detect a high frequency of the blocks 0…0 and 1…1 without aggregation of prices. Here, we consider these three stocks to test whether a price direction depends on the previously recorded decrease or increase. In other words, we test ℋ_0 setting k=2 and estimate p(00)+p(11) for predictable and unpredictable days. A summary is given in Table <ref>. For all three stocks we detect a significant difference in p̂(00)+p̂(11) between the two groups of days. However, the sum of the two probabilities is less than 0.5 for predictable days. That is, the probability of changing price direction is significantly high for predictable days. Therefore, predictable days are more likely to show a pattern of alternating symbols indicating an increase or decrease in price. The number of predictable days is smaller when considering k=2 compared to the value k=[0.5log_sn], where n represents the length of a symbolic sequence.
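The k=2 diagnostic used here reduces to counting consecutive sign agreements; a minimal sketch (ours) of the estimator p̂(00)+p̂(11):

import numpy as np

def pair_persistence(x):
    """Estimate p(00) + p(11): the empirical probability that two consecutive
    return signs coincide. Values below 0.5 indicate a tendency to alternate."""
    x = np.asarray(x)
    return (x[1:] == x[:-1]).mean()

x = np.array([0, 1, 0, 1, 1, 0, 1, 0, 0, 1])
print(pair_persistence(x))   # ~0.22: mostly alternating, as on SNAP/F/CCL predictable days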
As the number of symbols used to test dependence on past history increases, more days exhibit predictability. The probability of repeating the same symbol is approximately 0.5 for unpredictable days, whereas it is notably smaller for predictable days. However, this characteristic of predictable days for the three stocks diminishes as k increases to around 5 or 6, as indicated in Table <ref>. That is, a pattern of switching direction with each price movement tends to last for fewer than 5 price changes on average. §.§ Localization of predictable intervals In this section, we consider the length of the interval used to detect predictability. In the previous sections, we investigated daily time intervals. The key question here is whether there exists a smaller time interval within a predictable day where significant predictability can be identified. The motivation for searching for a smaller interval is to localize the period when price predictability occurs. Additionally, using a smaller time interval necessitates less historical data for computing entropy-based estimations. Therefore, employing a smaller rolling window enables quicker detection of price predictability. Since a rolling window inside a trading day implies multiple tests, the Šidák correction <cit.> of the significance level is used. Moreover, to ensure independence among the conducted tests, non-overlapping intervals within a trading day are considered. The maximum number of partitions considered is S_max=⌊(n-k+1)/1000⌋, so that 1000 is the minimum length of an interval. For each trading day, we aim to detect predictability in at least 1 of the S partitions at the significance level 1-0.99^1/S. We record the maximum value of such S≤ S_max. Smaller intervals require a smaller k for the analysis. We present the results in Table <ref> in Appendix Section <ref>. There are three days for the AAPL stock where predictability is detected when the data is aggregated. For all three days, predictability is detected only for the daily time interval (S=1). For the MSFT stock, there are three days where predictability is detected only for a part of the day (S>1). Regularity patterns are detected at the end of the day for August 3 and October 26. For August 5, predictability is present in the first half of the day. It disappears in the next subsequent non-overlapping interval. Regarding the TSLA stock, detectable predictability was identified in both halves of the day on August 5, indicating the potential to reduce the sequence length for predictability analysis by half and still observe significant patterns throughout the day. For the stock SNAP, we detect days when predictable time intervals occur several times during the day. We notice that some predictable intervals cluster together, and we highlight such intervals in Table <ref>. For instance, we observe 7 predictable intervals in a row on October 21 and 24. For 13 out of the 30 days with S_max>1 for the Ford stock, predictability is found in the second half of the day. Similarly, for CCL there are 14 days in the sample of 31 days with S_max>1 where predictability is detected only in the second part of the trading day. For each predictable day of the ETF SPY, there is only one of the S intervals where predictability is found. Predictability disappears from one time interval to the next subsequent non-overlapping interval.
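The partition search can be sketched as follows (our own illustration; np_statistic is the function from the earlier sketch, and the interval-length threshold follows S_max=⌊(n-k+1)/1000⌋ only approximately):

import numpy as np

def predictable_partitions(x, test=np_statistic):
    """Split a day's symbolic sequence into S non-overlapping intervals and
    test each at the Sidak-corrected level 1 - 0.99**(1/S); return, for the
    largest S with at least one rejection, the indices of flagged intervals."""
    n = len(x)
    s_max = max(1, n // 1000)             # intervals of length at least ~1000
    for S in range(s_max, 0, -1):
        level = 1 - 0.99 ** (1 / S)
        parts = np.array_split(np.asarray(x), S)
        flagged = [i for i, part in enumerate(parts) if test(part)[1] < level]
        if flagged:
            return S, flagged
    return 1, []                           # no predictability localized

rng = np.random.default_rng(1)
day = rng.integers(0, 2, 8000)
print(predictable_partitions(day))         # typically (1, []) for i.i.d. noise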
For instance, predictability was noticeable in the middle of the trading day on August 17. § DISCUSSION AND CONCLUSIONS We have applied a statistical test for the randomness of a symbolized sequence. A short summary of the method for detecting predictability is given below. * First, we estimate the Shannon entropy using empirical frequencies of blocks of symbols, as suggested in <cit.>. Using empirical frequencies obtained by rolling a window of a certain length, we then calculate the NP-statistic. * The NP-statistic has a χ^2-distribution according to <cit.> and <cit.>. We have found the degrees of freedom of the χ^2-distribution, which depend on the length of blocks and the size of the alphabet. * Up to scaling, the statistic is the KL-divergence <cit.> that measures the difference between the empirical probabilities and the theoretical probabilities under the null hypothesis of unpredictability. * The method is computationally fast, since it requires only the empirical frequencies of blocks of symbols. We have studied the predictability of asset prices at ultra-high frequency. Considering the signs of price changes, we construct binary sequences for all recorded executed transactions. The signs of trades have a long memory in such a microscopic view of transaction data <cit.>. We have shown that the degree of predictability decreases as the aggregation level increases. We apply aggregation by the number of transactions and work in transaction time. Transactions occur at uneven time intervals in seconds, as has been empirically shown by <cit.>. We have shown that the decrease of the significant predictability level with larger aggregation levels can be explained by the splitting of hidden orders into pieces, as modeled by <cit.>, or by mean reversion, as modeled in <cit.>. According to <cit.>, this type of predictability does not lead to arbitrage opportunities, because it is compensated by fluctuations in transaction costs and liquidity and by the interaction between market makers and informed traders. We have also demonstrated that transactions appearing simultaneously at the precision of a nanosecond contribute to a larger predictability level. We apply the correction proposed in <cit.> to perform multiple tests for predictability on short intervals during predictable days. In most cases, a single predictable interval is identifiable by partitioning a day into uniform intervals based on transaction time. In this way, we determine both the position of this interval relative to the time of day and its duration. For the stock SNAP, we have found several groups of such predictable intervals following each other. Applying the test, we distinguish predictable days from unpredictable days. We have shown that the probability that the price of an asset makes several subsequent movements in the same direction is one of the factors affecting the predictability of prices. For one group of assets, predictable days are characterized by repeating signs of price returns. Except for LLY, trades in these assets occur at extremely high frequencies, i.e., less than one second apart on average. The repetition of the price direction is explained by the appearance of news, the reaction to it, and the splitting of one order into parts <cit.>. Conversely, another group of stocks (SNAP, F, CCL) demonstrates a lower percentage of predictable days before aggregation. In these cases, days featuring predictable time intervals were characterized by a significantly reduced probability of the price moving in the same direction twice.
This group is distinguished by relatively low prices and low standard deviations. The pattern of alternating price direction can be explained by a bid-ask bounce and fluctuations of the price around a low mean value. We presume that the occurrence of this behavior depends on the frequency of transactions. For 8 out of the 9 assets under consideration, non-zero price returns during predictable days have high autocorrelation. Highly significant coefficients of an AR model for tick-by-tick data were empirically investigated by <cit.> and <cit.>. Some stylized facts of price returns in ultra-high-frequency data, including fat tails of the return distribution and volatility clustering, were discussed in <cit.>. To explore the fat tails of price returns, we estimate the degrees of freedom of the t-distribution fitted to the price returns. For the ETF SPY, we discover that the price returns of predictable days have fatter tails than the returns recorded during unpredictable days. However, we obtain the opposite result for the stocks of Ford and Carnival Corporation, where the unpredictable days are described by price returns with fatter tails. We check volatility clustering by measuring the autocorrelation of the absolute values of returns. The autocorrelation is significantly greater during predictable days for the stocks AAPL, SNAP, F, and CCL. We notice that the predictable days of AAPL, MSFT, INTC, LLY, F, CCL, and SPY are characterized by larger trading volumes and a larger number of non-zero price changes in comparison with unpredictable days. This distinction in characteristics suggests a correlation between predictability and trading activity, particularly in response to market news events. This assumption aligns with existing research demonstrating the influence of news on stock prices. Stock prices react to announcements about stock dividends and splits <cit.>. Weekly price returns react to attention to news and their tone <cit.>. Public news affects monthly price returns <cit.>. We have presented an approach for testing the predictability of financial data. Using this approach, we find days with a statistically significant level of predictability. Aggregating the data so that the time between the transactions under consideration increases, we isolate a smaller group of days with predictability. Various methods exist for aggregating data, including calendar time, transaction time, and tick time. Alternative approaches involve counting traded volumes and price changes by a certain amount. A direction for future exploration involves comparing the predictability observed in data aggregated through diverse methodologies. § ASYMPTOTIC DISTRIBUTION OF THE NEYMAN-PEARSON STATISTICS We outline the proof of Lemma 1. The lemma gives the asymptotic distribution of the statistic D in Eq. <ref> based on the empirical frequencies of blocks of symbols of a sequence. If the symbols of the sequence X={x_t}_t=1^n are independently distributed, then the sequence X̅ of blocks x̅_t={x_t,x_t+1,…,x_t+k-2} follows a first-order Markov process. The transition matrix of the Markov chain has dimension M× M, where M=s^k-1. However, there are only s non-zero probabilities, whose sum equals one, in each row of the matrix. Thus, the transition matrix can be converted into an M× s matrix with only positive entries. This transformation is feasible since an output x̅_t in the sequence X̅ differs from x̅_t-1 solely by a single new symbol x_t+k-2∈ A. We denote by p_ij the transition probability from the block of symbols i ∈{0,1,…,M-1} to the block with the last symbol j∈ A. The null hypothesis ℋ_0 can be restated as p_ij=p_j, indicating that the probability of obtaining a new symbol does not rely on the previous k-1 symbols. Thus, the hypothesis ℋ_0 assumes a transition matrix with the following structure: T= [ p_0 p_1 … p_s-1; p_0 p_1 … p_s-1; ⋮ ⋮ ⋱ ⋮; p_0 p_1 … p_s-1 ], where every row repeats the same probabilities p_0,…,p_s-1. The asymptotic distribution of the Neyman-Pearson statistic (Eq. <ref>) is a χ^2-distribution, which can be shown using a characteristic function <cit.>. There are s probabilities summing to 1 in each of the M rows of the matrix T, so it contains s parameters with one constraint. Therefore, the number of degrees of freedom of the NP-statistic is M(s-1)-(s-1)=(M-1)(s-1), according to Theorem 4.1 of <cit.>. Then, we empirically show that the entropy bias (Eq. <ref>) and the NP-statistic (Eq. <ref>) follow χ^2-distributions with s^k-1 and (s^k-1-1)(s-1) degrees of freedom, respectively. As defined in the main text, s is the size of the alphabet and k is the length of blocks. We consider alphabets with 2, 3, and 4 symbols and take sequences of three different lengths, log_10n=2,3,4. The length of blocks depends on the sequence length, k=[0.5log_sn]. For each plot we simulate N=10^5 sequences. We provide the Q-Q plot for the entropy bias in Figure <ref>. The next two figures show the Q-Q plots for the calculated NP-statistics. For Figure <ref>, the sequences have equal probabilities of the symbols. In contrast, Figure <ref> displays the scenario where the probability of symbol 0 is twice as large as the probabilities of the other symbols. These figures demonstrate the convergence of the empirical distribution of D to the theoretical χ^2-distribution. § STATISTICS AND PARTITIONS OF PREDICTABLE DAYS This section comprises results pertaining to statistics derived from predictable and unpredictable days. The results for the stock AAPL using the entropy bias are in Table <ref>. Detailed results for the stock CCL and the ETF SPY are provided in Tables <ref> and <ref>, respectively. Finally, Table <ref> shows partitions based on the test of randomness conducted for intervals of predictable days. | http://arxiv.org/abs/2312.16637v2 | {
"authors": [
"Andrey Shternshis",
"Stefano Marmi"
],
"categories": [
"q-fin.ST",
"q-fin.CP"
],
"primary_category": "q-fin.ST",
"published": "20231227164807",
"title": "Price predictability at ultra-high frequency: Entropy-based randomness test"
} |
[email protected] School of Fundamental Physics and Mathematical Sciences, Hangzhou Institute for Advanced Study, UCAS, Hangzhou 310024, China School of Physical Sciences, University of Chinese Academy of Sciences, No. 19A Yuquan Road, Beijing 100049, China corresponding author: [email protected] School of Physical Sciences, University of Chinese Academy of Sciences, No. 19A Yuquan Road, Beijing 100049, China CAS Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190, China corresponding author: [email protected] School of Fundamental Physics and Mathematical Sciences, Hangzhou Institute for Advanced Study, UCAS, Hangzhou 310024, China School of Physical Sciences, University of Chinese Academy of Sciences, No. 19A Yuquan Road, Beijing 100049, China CAS Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190, China Ultralight bosons are attractive dark-matter candidates and appear in various scenarios beyond the Standard Model. They can induce superradiant instabilities around spinning black holes (BHs), extracting energy and angular momentum from the BHs; this energy is then dissipated through monochromatic gravitational radiation, making such systems promising sources for gravitational-wave detectors. In this letter, we focus on massive tensor fields coupled to BHs and compute the stochastic gravitational-wave background emitted by these sources. We then undertake a search for this background within the data from the LIGO/Virgo O1∼O3 runs. Our analysis reveals no discernible evidence of such signals, allowing us to impose stringent limits on the mass range of tensor bosons. Specifically, we exclude the existence of tensor bosons with masses ranging from 4.0×10^-14 to 2.0×10^-12 eV at the 95% confidence level. Probing Ultralight Tensor Dark Matter with the Stochastic Gravitational-Wave Background from Advanced LIGO and Virgo's First Three Observing Runs Qing-Guo Huang January 14, 2024 ================================================================================================================================================= Introduction. The detection of gravitational waves (GWs) from the mergers of binary black holes <cit.> and neutron stars <cit.> ushered in a new era of GW astronomy. Nowadays, GWs are used to test our basic understanding of different aspects of fundamental physics <cit.>. In particular, they provide entirely new avenues for the detection of, and constraints on, ultralight bosons <cit.>, such as spin-0 QCD axions, axion-like particles in the string axiverse, and spin-1 dark photons <cit.>, which are important candidates for dark matter (DM). The coupling of these fields to Standard Model particles is very weak, making direct detection in the laboratory difficult <cit.>. However, if an ultralight bosonic field with mass m_b exists around a spinning BH, it can trigger a superradiant instability if its typical oscillation frequency ω_R=m_b c^2/ħ satisfies 0<ω_R<mΩ_H, where c is the speed of light, m is the azimuthal quantum number of the unstable mode, and Ω_H is the horizon angular velocity of the BH (see <cit.> for a review). This mechanism results in the transfer of angular momentum and energy from the BH to the bosonic field. Ultimately, when ω_R∼ m Ω_H, the superradiant instability shuts off and a corotating, non-axisymmetric boson condensate forms. The condensate is dissipated through the emission of almost monochromatic GWs with frequency f_0∼ω_R/π <cit.>.
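To fix the scales involved, this frequency can be evaluated numerically; a small sketch restoring SI units (the physical constants below are standard values, our addition for illustration, not quantities quoted from the paper):

import math

HBAR = 1.054_571_817e-34   # J s
EV = 1.602_176_634e-19     # J

def gw_frequency_hz(m_b_ev: float) -> float:
    """GW frequency f_0 ~ omega_R / pi for boson mass m_b in eV."""
    omega_r = m_b_ev * EV / HBAR       # omega_R = m_b c^2 / hbar, in rad/s
    return omega_r / math.pi

for m in (4.0e-14, 1.0e-13, 2.0e-12):  # boundaries of the excluded mass range
    print(f"m_b = {m:.1e} eV  ->  f_0 = {gw_frequency_hz(m):.1f} Hz")
# The excluded masses map onto ~20-1000 Hz, i.e., the LIGO/Virgo band.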
Thus, it leads to a wealth of observable astrophysical phenomena <cit.>, which have recently attracted much attention as probes of ultralight bosons and of the nature of BHs <cit.>. Until now, there have been no positive results from direct searches for such nearly monochromatic GW signals <cit.>. Another effective method to constrain the parameters of ultralight bosons is to exploit the potential abundance of faint signals in the Universe, that is, to search for the stochastic GW background (SGWB) they generate. From an astronomical point of view, this background is generated by all the unresolved sources in the Universe. During the O1∼O3 observing runs of LIGO and Virgo, upper limits on the intensity of the SGWB have been set <cit.>. The SGWB from BH-boson systems was calculated by <cit.>. So far, several studies have utilized the SGWB to constrain the parameters of ultralight bosons <cit.>. The authors of <cit.> and <cit.> provide the first searches of the SGWB sourced by scalar and vector bosons, respectively, in the first observing run of LIGO. However, the parameter constraints for ultralight tensor dark matter <cit.> have not been investigated as extensively as those for scalar and vector dark matter <cit.>. In this letter, we explore the impact of ultralight tensor dark matter, which can be described by linearized ghost-free bimetric theory, and calculate the energy spectra of the corresponding SGWB. A Bayesian framework is adopted to search for such a signal in LIGO and Virgo's observing data. From now on we use the units G=c=ħ=1. Superradiant instability of ultralight tensor dark matter. The dynamics of tensor dark matter can be described by a spin-2 field H_αβ on a BH background with initial mass M_i and initial dimensionless angular momentum χ_i=J_i / M_i^2. This field can be regarded as a linear massive tensor perturbation in the ghost-free bimetric gravity theory <cit.>, whose equation of motion on a Ricci-flat background reads: □ H_αβ+2 R_αγβδ H^γδ-m_b^2 H_αβ=0, ∇^α H_αβ=0, H^α_α=0, where the box operator, the Riemann tensor R_αγβδ, and the contractions are constructed with the background metric, and m_b is the mass of the spin-2 field. For ultralight (m_b M≪ 1) tensor dark matter, there are several different mechanisms for triggering an instability. There exist unstable monopolar (m=0) modes, which are present also in the spherical BH case. Such instabilities appear first, affecting the spacetime geometry of the BH and eventually forming a hairy BH <cit.>. This instability has an extremely short characteristic timescale (τ_mono∼ m_b^-1) <cit.>, and, due to its monopolarity, no GWs are produced during this process. Therefore, the impact of this process can be ignored. The subsequent superradiant instability leads to the formation of a quasi-bound state, whose characteristic frequency is complex <cit.>: ω_R/m_b ≃ 1-(m_b M)^2/2(ℓ+n+S+1)^2, ω_I∝ (m_b M)^{4ℓ+5+2S}(ω_R-m Ω_H), where ℓ≥ 0, n ≥ 0, and the integer S=(0, ± 1, ± 2) are the mode total angular momentum, overtone number, and polarization, with m ∈[-ℓ, ℓ]. We use the result on the Kerr background because these modes have significant support at r∼ M (m_b M)^-2 <cit.>; the deformation of the hairy BH relative to the Kerr BH is therefore negligible in the ultralight limit. It is clear that, for modes with m≥ 1, if ω_R-m Ω_H<0, these states grow exponentially until reaching the saturation point ω_R ∼ m Ω_H on a timescale τ_inst ≡ 1 / ω_I. In this letter, we concentrate on the two dominant unstable modes appearing in the non-relativistic approximation, a dipole mode (ℓ=m=1, S=0) and a quadrupole mode (ℓ=m=2, S=-2).
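As a quick numerical illustration of the superradiance condition ω_R<mΩ_H (our own sketch in G=c=1 units; the spin χ and the coupling α=m_bM below are illustrative choices):

import math

def horizon_angular_velocity(M, chi):
    """Omega_H of a Kerr BH in G = c = 1 units."""
    a = chi * M
    r_plus = M * (1 + math.sqrt(1 - chi**2))
    return a / (r_plus**2 + a**2)

def hydrogenic_omega_r(m_b, M, ell, n, S):
    """omega_R from the hydrogenic approximation quoted above."""
    return m_b * (1 - (m_b * M) ** 2 / (2 * (ell + n + S + 1) ** 2))

# Quadrupole mode (l = m = 2, S = -2) around a BH with spin chi = 0.7 and
# coupling alpha = m_b * M = 0.2.
M, chi, alpha = 1.0, 0.7, 0.2
omega_r = hydrogenic_omega_r(alpha / M, M, ell=2, n=0, S=-2)
print(omega_r < 2 * horizon_angular_velocity(M, chi))  # True: mode is superradiant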
Although they share comparable timescales, it has been shown <cit.> that the dipole mode can be reabsorbed by the BH, giving back almost all of its mass and spin. The loss of mass and angular momentum is thus almost entirely attributable to the quadrupole mode itself. From the conservation of mass and angular momentum, we obtain the relationship between the initial BH mass M_i and spin J_i and the mass M_f and spin J_f when the superradiant instability saturates at ω_R=m Ω_H: J_f=J_i-(m/ω_R)(M_i-M_f)=J_i-(m/ω_R) M_T^max, M_f=[m^3-√(m^6-16 m^2 ω_R^2(m M_i-ω_R J_i)^2)]/[8 ω_R^2(m M_i-ω_R J_i)], where M_T^max is the maximal total mass of the tensor dark matter cloud. This quasi-bound state dissipates its energy through GWs on a typical timescale τ_GW: τ_GW=M_f(dẼ/dt · M_T^max/M_f)^-1, where dẼ/dt is the reduced GW flux. In the quadrupole case, it reads τ_GW∼ 290 (M_f/30 M_⊙)^2 (3 M_⊙/M_T^max)(m_b M_f/0.2)^-10 sec. Since the hierarchy of timescales τ_GW≫τ_inst ≫ M holds, we can obtain the total GW energy emitted by the whole system over a duration Δt=t_0-t_f using the quasi-adiabatic approximation <cit.>: E_GW=M_T^maxΔt/(Δt + τ_GW), where t_0 is the age of the Universe and t_f characterizes the time when the hairy BH is formed. SGWB from boson clouds. In this section, we model the SGWB from the superradiant instability. As described in Eq. (<ref>) <cit.>, Ω_GW(f)=(1/ρ_c) dρ_GW/dln f, it is characterized by the energy density per logarithmic frequency interval, and ρ_c is the critical energy density required for a spatially flat Universe. To work out this background, we need to sum the GWs radiated by all possible sources; in our case, this is the superposition of all the individual BH-boson systems undergoing superradiance. The general formula is Ω_GW(f)=(f/ρ_c)∫ dz (dt/dz)∫ dθ ℛ(θ;z) (dE_s/df_s)(θ;m_b,f), where dt/dz is the derivative of the lookback time with respect to redshift. It is determined by the ΛCDM cosmological model <cit.>: dt/dz=1/[(1+z)H_0√(Ω_M (1+z)^3+Ω_Λ)]. ℛ(θ;z) is the formation rate of BHs with parameters θ per comoving volume at redshift z. dE_s/df_s is the energy spectrum of a single event in the source frame. Since the GW radiation is nearly monochromatic, the spectrum is well approximated by dE_s/df_s=E_GWδ(f_s-f_0)=E_GWδ[f(1+z)-f_0], where f_0=ω_R/π, and we have replaced the source-frame frequency f_s by the detected frequency to account for the redshifting of the sources. Two kinds of BH populations are considered in our work: isolated stellar-origin BHs (SBHs) and the merger remnants of binary SBHs. For the isolated BH channel, the formation rate is ℛ^iso(χ, M; z)= p(χ)R^iso(M;z). In writing Eq. (<ref>), we have separated out the spin dependence, since the spin distribution cannot be derived from theory; here we adopt a uniform prior distribution: p(χ) = 1/(χ_u-χ_l) for χ_l≤χ≤χ_u and 0 otherwise. The remaining part is determined by the astrophysical model R^iso(M;z) = ψ(z_f)[ξ(M_*)/M_*] dM_*/dM, where M_* is the mass of the progenitor star. Its population is the product of the star formation rate and the initial mass function. We adopt the Salpeter initial mass function <cit.>, which is a power law ξ(M_*)∝ M^-1.35 normalized between 0.1 and 100 M_⊙. For the star formation rate, we use the model proposed in <cit.>: ψ(z)=ν a exp[b(z-z_m)]/(a-b+b exp[a(z-z_m)]). As suggested in <cit.>, the fitting parameters are taken to be ν=0.178 M_⊙/yr/Mpc^3, a=2.37, b=1.80 and z_m=2.00. z_f is the redshift at the progenitor's birth, so the corresponding lookback time satisfies t_f=t-τ(M_*), where τ(M_*) denotes the lifetime of the star, and we calculate this quantity based on <cit.>.
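The cosmological ingredients entering the rate integral are easy to tabulate; a sketch assuming a flat ΛCDM cosmology (H_0=70 km/s/Mpc, Ω_M=0.3, Ω_Λ=0.7 are our illustrative values, not parameters quoted from the paper):

import math

H0 = 70.0 * 1.0e3 / 3.0857e22 * 3.156e7   # 70 km/s/Mpc converted to 1/yr
OMEGA_M, OMEGA_L = 0.3, 0.7               # assumed flat LambdaCDM parameters

def dt_dz(z):
    """|dt/dz| of the lookback time in years, for flat LambdaCDM."""
    return 1.0 / ((1 + z) * H0 * math.sqrt(OMEGA_M * (1 + z) ** 3 + OMEGA_L))

def sfr(z, nu=0.178, a=2.37, b=1.80, z_m=2.00):
    """Star formation rate psi(z) with the fitted parameters quoted above."""
    return nu * a * math.exp(b * (z - z_m)) / (a - b + b * math.exp(a * (z - z_m)))

print(sfr(0.0), sfr(2.0), dt_dz(1.0))     # psi(z) peaks near z_m = 2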
Besides, to determine the derivative term in Eq. (<ref>), we use the relation between the BH mass and the mass and metallicity of the progenitor star from <cit.>. For the merger remnant channel, ℛ^rem(m_1, m_2; z)=p(m_1, m_2)R^rem(m_1,m_2;z). The adopted mass distribution of the binary system is the broken power law model <cit.>, that is, p(m_1, m_2) ∝ m_1^-1.58 for m_min≤ m_1<m_*, m_1^-5.59 for m_*≤ m_1<m_max, and 0 otherwise, with m_min=3.96, m_max=87.14, m_*=m_min+0.43(m_max-m_min). R^rem appearing in Eq. (<ref>) is the binary merger rate. It is the convolution of the binary formation rate with the distribution of time delays between formation and merger: R^rem(m_1,m_2;z)=∫_t_min^t_max R_f(m_1, m_2;z_f)p(t_d) dt_d. Following <cit.>, we assume that R_f(m_1,m_2;z_f) is proportional to a metallicity-weighted star formation rate. The parameters of the metallicity weighting are the same as in <cit.>, and the time-delay distribution is taken as p(t_d)∝ t_d^-1 with 50 Myr≤ t_d≤13.5 Gyr. Numerical relativity suggests that the mass and spin of the BH remnant are M= m_1+m_2-(m_1+m_2)[(1-√(8/9))ν+4ν^2(0.19308+√(8/9)-1)], χ = ν(2√(3)-3.5171ν+2.5763ν^2), where ν=m_1m_2/(m_1+m_2)^2 is the symmetric mass ratio. The local merger rate is normalized as ∫ℛ^rem(m;0) dm = 19 Gpc^-3yr^-1. Adding up the contributions from these two kinds of BH populations, we work out the total background. Typical spectra within LIGO's frequency band are shown in Fig. <ref>. We show the spectra of the two channels separately. The background is dominated by the isolated BH population; its amplitude is about 4 orders of magnitude larger than that of the BBH remnant channel. Data analysis and results. An SGWB will cause a correlation in the outputs of detector pairs. An estimator can be constructed from the Fourier transforms of the strain data as follows <cit.>: Ĉ_IJ(f)=(2/T)Re[s̃_I^*(f)s̃_J(f)]/(γ_IJ(f)S_0(f)), where S_0(f)=3H_0^2/(10π^2f^3), γ_IJ(f) is the overlap reduction function <cit.> of the detector pair IJ, and T denotes the observing time. Eq. (<ref>) has been normalized such that ⟨Ĉ_IJ(f)⟩=Ω_GW(f), and the variance is σ_IJ^2(f) = [1/(2TΔf)] P_I(f)P_J(f)/(γ_IJ^2(f)S_0^2(f)). The P_I,J(f) appearing in Eq. (<ref>) are the power spectral densities of the interferometers. To evaluate the mass of the spin-2 boson, we perform a Bayesian analysis. After glitch gating and rejection of non-stationary segments, the noise of the detectors is well approximated by uncorrelated Gaussian noise. Therefore, the total likelihood is p(Ĉ|θ)∝exp[-∑_IJ∑_f(Ĉ_IJ(f)-Ω(f;θ))^2/(2σ^2_IJ(f))], where θ denotes the set of parameters to be determined by the analysis, and we have multiplied the likelihoods from all frequency bins and detector pairs. The ratio of model evidences, ℬ_noise^model=p(Ĉ | Model of signal)/p(Ĉ | Pure noise), the so-called Bayes factor, quantifies the statistical significance of the presence of the signal. We set a log-uniform prior between 10^-14 and 10^-11 eV on the mass of the tensor field. To assess the influence of the spin distribution, we assume three different combinations of χ_l,u and treat their energy spectra separately. As a result, the logarithms of the Bayes factors are -0.29∼-0.26, indicating that no such signals are present in the observing periods. The probability distributions of the mass and the 95% exclusion ranges obtained by assuming a uniform distribution of spin between [0, 1], [0, 0.5], and [0.5, 1] are displayed in Fig. <ref>. The corresponding excluded range of log(m_b/eV) is around -13.4∼-11.7, and it does not show a strong dependence on the spin assumptions we adopted. Compared to scalar and vector fields, the observing data put a stronger constraint on the mass of the tensor field.
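The likelihood and Bayes factor admit a compact numerical sketch (ours; the mock data, noise level, and stand-in spectrum below are placeholders, and the evidence integral is discretized over a log-uniform mass grid):

import numpy as np

def log_likelihood(C_hat, sigma, omega_model):
    """Gaussian log-likelihood over frequency bins, up to an additive constant."""
    return -0.5 * np.sum((C_hat - omega_model) ** 2 / sigma ** 2)

def log_bayes_factor(C_hat, sigma, omega_of_mass, masses):
    """Crude evidence ratio: signal model (log-uniform prior over the tabulated
    masses, all grid points equally weighted) versus pure noise (Omega = 0)."""
    logls = np.array([log_likelihood(C_hat, sigma, omega_of_mass(m)) for m in masses])
    log_evidence_signal = np.logaddexp.reduce(logls) - np.log(len(masses))
    log_evidence_noise = log_likelihood(C_hat, sigma, 0.0)
    return log_evidence_signal - log_evidence_noise

freqs = np.linspace(20, 1000, 200)
sigma = np.full_like(freqs, 1e-8)                             # mock noise level
C_hat = np.random.default_rng(0).normal(0.0, sigma)           # pure-noise data
omega_of_mass = lambda m: 1e-9 * np.ones_like(freqs)          # stand-in spectrum
masses = np.logspace(-14, -11, 50)                            # log-uniform grid
print(log_bayes_factor(C_hat, sigma, omega_of_mass, masses))  # typically < 0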
This is within our expectations, as the tensor field has a shorter superradiant-instability timescale than scalar and vector fields of the same mass <cit.>. Discussions and conclusions. In this work, we search for the SGWB signal produced by ultralight tensor dark matter around both isolated SBHs and binary SBH remnants in the LIGO/Virgo O1∼O3 runs. We find no evidence for the presence of the signal, and we therefore place constraints on the mass of tensor bosons. Assuming a uniform distribution of spin between [0, 1], [0, 0.5], and [0.5, 1] for isolated SBHs, we exclude the mass range 10^-13.4∼10^-11.7 eV. The excluded mass range depends only weakly on the assumed spin distribution. It is noteworthy that there have been recent advances in the purely numerical spectral analysis of ultralight tensor dark matter <cit.>. These studies reveal several new modes beyond the hydrogenic approximation we used in this work. Since these modes stem from the nonseparability of the field equations, their dynamical impact is very difficult to compute. This aspect remains a subject for future investigation and warrants careful consideration in subsequent research endeavors. Acknowledgements. We thank Yuan Chen for useful discussions. QGH is supported by grants from the NSFC (Grant No. 12250010, 11975019, 11991052, 12047503), the Key Research Program of Frontier Sciences, CAS, Grant No. ZDBS-LY-7009, and the Key Research Program of the Chinese Academy of Sciences (Grant No. XDPB15). We acknowledge the use of the HPC Cluster of ITP-CAS. | http://arxiv.org/abs/2312.16435v1 | {
"authors": [
"Rong-Zhen Guo",
"Yang Jiang",
"Qing-Guo Huang"
],
"categories": [
"astro-ph.CO",
"gr-qc",
"hep-ph",
"hep-th"
],
"primary_category": "astro-ph.CO",
"published": "20231227064347",
"title": "Probing Ultralight Tensor Dark Matter with the Stochastic Gravitational-Wave Background from Advanced LIGO and Virgo's First Three Observing Runs"
} |
Even grade generic skew-symmetric matrix polynomials with bounded rank ====================================================================== We study parabolic double cosets in a Coxeter system by decomposing them into atom(ic coset)s, a generalization of simple reflections introduced in a joint work with Elias, Libedinsky, Patimo. We define and classify braid relations between compositions of atoms and prove a Matsumoto theorem. Our consideration of reduced compositions of atoms gives rise to a new combinatorial structure, which is equipped with a length function and a Bruhat order and is realized as Tits cone intersections in the sense of Iyama-Wemyss. § INTRODUCTION Let (W,S) be a Coxeter system. For a subset J⊂ S we denote by W_J the parabolic subgroup generated by J. Recall (or see <cit.>) that the associated Coxeter complex (W,S) consists of cells (simplices) indexed by parabolic left cosets, so that the cell C_wW_J corresponding to wW_J, where w∈ W and J⊂ S, has dimension |S∖ J|-1. In particular, the maximal cells in (W,S) are indexed by the one-element cosets wW_∅ = {w}. The Coxeter group W acts naturally on the left cosets in (W,S), and thus on the cells in (W,S). Let us fix a subset I⊂ S and consider the subcomplex (W,S,I) of (W,S) consisting of the cells fixed by the action of every s∈ I. Then a cell C_wW_J in (W,S) is contained in (W,S,I) if and only if the (left) action of W_I on wW_J is trivial, and the latter holds if and only if wW_J=W_IwW_J. Thus, given I⊂ S, we can relabel the cells in (W,S,I) by such double cosets W_IwW_J. The maximal cells in (W,S,I) are then indexed by what we call the core (W_I,W_J)-cosets (where J⊂ S varies): A parabolic double coset W_I w W_J is called a core coset if W_Iw = W_IwW_J = wW_J as sets. Here is an illustration of the above discussion in an example. Consider the affine symmetric group (S_2,{s,t,u}). A reduced expression w=s_1⋯ s_ℓ in (W,S) is read from left to right to give the sequence [e,s_1,s_1s_2,…,s_1s_2⋯ s_ℓ]. We view the sequence as a path going through the corresponding maximal cells in (W,S). Here e∈ W denotes the identity element of the group W and marks the starting point of the path. One observes that, when moving from one maximal cell to the next, the path actually goes through the (codimension one) face shared by the two maximal cells: [{e}⊂{e,s_1}⊃{s_1}⊂{s_1,s_1s_2}⊃{s_1s_2}⊂⋯ ⊃{w}] =[e, eW_s_1, s_1, s_1W_s_2, s_1s_2 ,⋯ , w]. Note that in every '⊃' we make a choice of going forward: take the maximal element from the previous step, e.g., take s_1s_2 from {s_1,s_1s_2}=s_1W_s_2. To record (<ref>), it is therefore enough to give the following list of parabolic subsets: [∅,{s_1},∅,{s_2},∅,⋯,{s_ℓ},∅]. A main idea of singular Coxeter combinatorics (see <cit.> and <cit.>) is that a path can go through even smaller cells, and that a path can end not only at some {w} but at any wW_J. More generally, a path can start at W_I=W_I e W_I instead of at {e}=W_∅ e W_∅, in which case our path is a sequence of double cosets rather than of left cosets. In this generality, a (singular) expression in (W,S) is a sequence M_∙ = [[I_0 ⊂ K_1 ⊃ I_1 ⊂ K_2 ⊃…⊂ K_m ⊃ I_m]], where I_i,K_i⊂ S are finitary subsets, i.e., W_I_i,W_K_i are finite.
The finitary restriction is necessary when assigning a path to (<ref>): denoting the unique Bruhat-maximal element of a finite double coset q by q̅, the (forward) path for (<ref>) is p_∙=[[p_0,p_1,⋯,p_2m]], where p_2i+1 = p_2iW_K_i+1 and p_2i = W_I_0p̅_2i-1W_I_i. Here p_2i+1 is an (I_0,K_i+1)-coset and p_2i is an (I_0,I_i)-coset, and by an (I,J)-coset we mean a (W_I,W_J)-double coset; see Section <ref> for more details. The path p_∙ ends at the (I_0,I_m)-coset p_2m. Thus, analogously to the regular case [To be more precise, the singular expressions are analogous not to the Coxeter group W but to the Coxeter monoid (W,*) (see Section <ref>). For reduced expressions, there is no difference between the two variations.], the expression (<ref>) expresses the double coset p_2m. In this case we say that p_2m is expressed by [[I_0 ⊂ K_1 ⊃ I_1 ⊂ K_2 ⊃…⊂ K_m ⊃ I_m]]. The notion of reduced expressions (resp., reduced paths) for double cosets was introduced also in Williamson <cit.> (see <cit.>, or Definition <ref> and <cit.> for a single-step formulation; a reduced path is the forward path associated to a reduced expression in the sense of the previous paragraph). A joint work with Elias <cit.> introduces and classifies the braid relations for double cosets and proves Matsumoto's theorem, i.e., that any two reduced expressions of the same double coset are related by a series of braid relations. Now suppose p is a core (I,J)-coset, for some finitary subsets I,J⊂ S. Then C_p is a maximal cell in (W,S,I). It is thus natural to ask whether there exist |I|-regular (expressions and) paths in (W,S,I) that terminate at p. By an r-regular expression we mean an expression (<ref>) such that |I_i|=r and |K_i|=r+1 hold, exactly as for a regular expression (<ref>). Its path alternates between two types of moves: * from a maximal cell C_W_IwW_I_i to its s_i+1-face C_W_IwW_K_i+1 (where K_i+1=I_i⊔{s_i+1}), * from a face C_W_IwW_K_i to the adjacent maximal cell C_W_IyW_I_i, where y is the Bruhat-maximal element of W_IwW_K_i, that is, from a maximal cell to its codimension-one face and from a codimension-one face to a maximal cell, exactly as a regular path (<ref>) does. The answer is positive and appears in a joint work with Elias-Libedinsky-Patimo <cit.> (see Proposition <ref>). In fact, <cit.> shows that there is a reduced expression with the above property and calls it an atomic reduced expression. To explain the terminology, let us start with atoms. An atom (or an atomic coset) is a core coset of the form W_K∖s w_K W_K∖t, where K⊂ S is finitary and s,t∈ K. Here w_K denotes the maximal element of W_K, and it is straightforward that the core coset condition implies t=w_Ksw_K. A further justification of the definition (in addition to atoms being the simplest core cosets) is given by the fact that an atom has a unique reduced expression (see Proposition <ref>). An atomic (reduced) expression then refers to a (reduced) composition of atoms, identifying the atoms with their unique reduced expressions. For example, if a=W_Iw_KW_J and b=W_Jw_LW_M are atoms, then their unique reduced expressions are [[I⊂ K⊃ J]] and [[J⊂ L⊃ M]], and a∘b is expressed by [[I⊂ K⊃ J⊂ L⊃ M]]. Thus the aforementioned result of <cit.> says that a core coset decomposes as a reduced composition of atoms. In all the above senses, an atom plays, in the study of (core) double cosets, the same role that a simple reflection s∈ S plays in Coxeter combinatorics. The purpose of the current work is to understand the relations between atomic expressions.
For reduced expressions, we achieve this in the atomic Matsumoto theorem: Two atomic reduced expressions of the same core coset are related by atomic braid relations. We refer to Definition <ref> for the definition, but an atomic braid relation is of the form a∘ a' ∘ a''⋯_m = b∘ b' ∘ b''⋯_m, where a, b are atoms; the atoms a', a'',⋯ and b', b'',⋯ are certain twists of the atoms a and b, respectively, determined by their position in the composition; and m=m_a,b≥ 2 is an integer (in general not equal to one of the entries in the Coxeter matrix of (W,S)). See also Proposition <ref> and Proposition <ref>. We point out a key feature of our braid relations, a feature the braid relations from <cit.> do not enjoy: the two sides of (<ref>) have the same width. Due to this feature, the combinatorics of atomic reduced expressions very much resembles that of regular reduced expressions in a Coxeter group, bringing us a substantial computational and pedagogical advantage over the presentation in <cit.>. The non-reduced (atomic or not) expressions depend on different natural ways of composing double cosets in a non-reduced way. We consider two of them, namely the singular Coxeter monoid studied in <cit.> (see Section <ref>) and the nilCoxeter algebroid studied in <cit.> (see Section <ref>). As is implicit above, we use the adjective 'singular' to mean 'of parabolic double cosets'. In particular, the singular Coxeter monoid is the double coset analogue of the Coxeter monoid, and the nilCoxeter algebroid is that of the nilCoxeter algebra (the name comes from the latter being equivalent to the category of singular Demazure operators, i.e., Demazure operators associated to parabolic double cosets), and they differ only in the quadratic relations, namely, ss=s in the Coxeter monoid and ss=0 in the nilCoxeter algebra. As can be guessed from the quadratic relation, it is easier to deal with non-reduced expressions in the nilCoxeter algebroid: they are equal to zero. For this category we obtain a presentation theorem, Theorem <ref>. We have restricted ourselves to core double cosets in the above; let us justify this. Let p be an (I,J)-coset. The left (resp. right) redundancy of p is the subset LR(p) = I∩p̲Jp̲^-1⊂ I, resp., RR(p) = p̲^-1Ip̲∩ J⊂ J, where p̲ denotes the unique minimal element in p (see Section <ref> for more details). We have W_LR(p) = W_I∩p̲W_Jp̲^-1 (see e.g., <cit.>), which says that the redundancy encodes the overlap between the left action of W_I and the right action of W_J. In particular, the coset p is core if and only if I=LR(p) and J=RR(p). Then a result from <cit.> (see Proposition <ref>) says that p^core=W_LR(p)p̲W_RR(p) is a core (LR(p),RR(p))-coset and our coset p has a reduced expression of the form [[I⊃LR(p)]]∘ M_∙∘ [[RR(p)⊂ J]], where M_∙ is a reduced expression for p^core. This allows us to reduce problems on arbitrary double cosets to problems on core cosets. For example, if an (I,J)-coset p corresponds to a cell in (W,S,I), then we have LR(p)=I, RR(p)=p̲^-1Ip̲⊂ J, and p^core=W_Ip̲W_RR(p). The cell p is not maximal in (W,S,I) if and only if the inclusion RR(p)⊂ J is strict. In this case p is a face of the maximal cell p^core in (W,S,I). Therefore, if we take a reduced path p_∙ in (W,S,I) for p^core, then we obtain such a path for p by adding to p_∙ the final extra step of restricting to a face.
This extra step is the postcomposition -∘ [[RR(p)⊂ J]] in (<ref>). Now that we have given purely combinatorial motivations for our work, let us explain our original motivations from the study of Hecke categories. In the aforementioned joint work <cit.> with Elias, Libedinsky, and Patimo, we construct a cellular-like basis of the singular Hecke category (also known as singular Soergel bimodules <cit.>) associated to (W,S). The latter is a categorification of the Hecke algebroid and is the double coset analogue of the regular Hecke category (Soergel bimodules). In constructing our basis elements, called light leaves, a crucial step is to find desirable (with conditions depending on the situation) reduced expressions of double cosets. The result of <cit.> providing reduced expressions of the form (<ref>) is one main ingredient for this step. Another important ingredient is to find a reduced expression M_∙ of a core coset. In type A and type B, as proved in <cit.>, the atomic reduced expressions are exactly the same as the regular reduced expressions in type A and type B respectively, and thus finding M_∙ reduces to the well-understood problem of finding a regular reduced expression of an element in a Coxeter group (of type A or B). This vastly simplifies the basis construction in finite and affine types A and B. In fact, the latter is the original motivation of <cit.> for initiating the atomic theory of Coxeter systems. Finally, we mention an unexpected (or, expected but unexpectedly strong; see <cit.>) connection to a work of Iyama-Wemyss <cit.>, discovered in the final stage of the project. <cit.> introduces a new combinatorial structure (W,S,I), associated to a Coxeter system (W,S) and a subset I⊂ S, which they call a Tits cone intersection. They develop the combinatorics of Tits cone intersections in affine type ADE to describe the tilting theory of contracted preprojective algebras (certain idempotent summands of preprojective algebras) as well as the noncommutative resolutions of compound Du Val singularities, with eventual applications to birational geometry. And this (W,S,I) corresponds exactly to the (W,S,I) discussed above (under the construction of (W,S) as a quotient of the Tits cone; see Section <ref>). In particular, the chambers in (W,S,I) are labeled by the core (I,J)-cosets (see Proposition <ref>). When we explain the connection in Section <ref>, we work in the Tits cone instead of the Coxeter complex for a more convenient comparison to <cit.>, but the explanation is implicit in <cit.> and in our discussion in the introduction. To be more precise, we recall that the Tits cone (W,S)⊂ V^*, associated to (W,S) and its geometric realization V, has cells labelled by the parabolic left cosets. Some of these left cosets are also right cosets, and, as shown in <cit.>, the cells corresponding to such cosets W_Iw=wW_J, where w∈ W and I,J⊂ S, form a subset (W,S,I) of the Tits cone (W,S) which is realized also as the intersection of (W,S) and the s-hyperplanes in V^* for all s∈ I; <cit.> calls (W,S,I) the Tits cone intersection. Now observe that W_Iw=wW_J readily implies that W_IwW_J=W_Iw=wW_J is a double coset, and moreover a core double coset. This establishes the bijection between the chambers in (W,S,I) and the core cosets; see (the proofs of) Proposition <ref> for more details. This provides (W,S,I) with additional information obtained in the current paper, in particular, the atomic braid relations, the atomic Matsumoto theorem, and a presentation by generators and relations (see Proposition <ref>).
Conversely, the results in <cit.> shed new light on the singular Coxeter monoid and the nilCoxeter algebroid. We do not explore such consequences in the current paper, but content ourselves with referring to <cit.> for a beautifully illustrated classification of the arrangements of (I,J)-core cosets, where I is fixed of order |S|-3 and S is of affine type. §.§ Organization of the paper Section <ref> introduces some algebraic structures commonly associated to a Coxeter system: Section <ref> recalls basic definitions and fixes notation; Section <ref> recalls the singular Coxeter monoid; Section <ref> recalls the nilCoxeter algebroid. Section <ref> introduces core cosets and the atom(ic coset)s and states the necessary facts from <cit.>. Section <ref> explains the switchback relations from <cit.> and how they give rise to atomic braid relations. Section <ref> proves the atomic Matsumoto theorem. Section <ref> discusses atomic non-braid relations and gives a presentation by generators and relations of the nilCoxeter algebroid. In Section <ref> and Section <ref>, we consider the substructures of the singular Coxeter monoid and of the nilCoxeter algebroid consisting of the (I,J)-core cosets and study their combinatorics. Section <ref> completely describes these substructures when |S∖ J|=2; Section <ref> discusses the weak Bruhat order on them and computes some examples when |S∖ J|>2. Section <ref> relates them to <cit.>'s Tits cone intersections. §.§ Acknowledgements. The main ideas in the paper came from discussions with Ben Elias, Nicolas Libedinsky and Leonardo Patimo during our joint expedition in the singular land; additional thanks to ELP for agreeing about posting the current paper before its prequel <cit.>. The technical portion of the project benefited from examples of atomic expressions computed in SageMath. PART: Preliminaries § SINGULAR COXETER PRESENTATIONS In this section we collect some basics of Coxeter combinatorics and recall two presentations of parabolic double cosets by generators and relations, from <cit.> and <cit.> respectively. Let (W,S) be a Coxeter system. §.§ Coxeter group preliminaries Given an element x∈ W, the (Coxeter) length ℓ(x) is the smallest length of an expression of x∈ W, i.e., the smallest ℓ such that x=s_1s_2⋯ s_ℓ for some s_i∈ S. A product xy, for x,y∈ W, is said to be reduced if ℓ(xy)=ℓ(x)+ℓ(y). In this case we write xy=x.y. An expression w=s_1s_2⋯ s_m, where s_1,s_2,⋯,s_m∈ S, is called a reduced expression if every product involved in computing the expression, in any order, is reduced. The latter condition is equivalent to the product (s_1⋯ s_i)s_i+1 being reduced for each i=1,⋯, m-1, which is also equivalent to the length m of the expression being the smallest possible, i.e., m=ℓ(w). The left (resp., right) descent set of w∈ W is the set {s∈ S | sw is not reduced} (resp., {s∈ S | ws is not reduced}). A subset I⊂ S is said to be finitary if the parabolic subgroup W_I generated by I is finite. Given finitary I,J⊂ S, we consider (I,J)-cosets, by which we mean elements of W_I\ W/W_J. The left (resp. right) redundancy of an (I,J)-coset p is LR(p) = I∩p̲Jp̲^-1, resp., RR(p) = p̲^-1Ip̲∩ J. An (I,J)-coset p has a unique maximal element, denoted by p̅, and a unique minimal element, denoted by p̲, with respect to the Bruhat order (see, e.g., <cit.>).
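These notions can be checked by brute force in small examples. A short Python sketch (our own illustration; on generators we describe the left redundancy as LR(p) = {s ∈ I : p̲^-1 s p̲ ∈ J}, which generates W_I ∩ p̲ W_J p̲^-1):

from itertools import permutations

S3 = list(permutations(range(3)))            # the symmetric group S_3
e, s0, s1 = (0, 1, 2), (1, 0, 2), (0, 2, 1)  # identity and simple reflections

def length(w):
    """Coxeter length of a permutation = its number of inversions."""
    return sum(w[i] > w[j] for i in range(3) for j in range(i + 1, 3))

def mult(x, y):
    """Composition of permutations, (xy)(i) = x(y(i))."""
    return tuple(x[y[i]] for i in range(3))

def inverse(w):
    t = [0, 0, 0]
    for i in range(3):
        t[w[i]] = i
    return tuple(t)

W_I = W_J = [e, s0]                          # the finitary parabolic subgroup <s0>
coset = {mult(mult(x, e), y) for x in W_I for y in W_J}   # the (I,J)-coset of e
p_min = min(coset, key=length)               # the unique minimal element
p_max = max(coset, key=length)               # the unique maximal element
LR = {s for s in [s0] if mult(mult(inverse(p_min), s), p_min) in {s0}}
print(sorted(coset), p_min, p_max, LR)       # coset {e, s0}; LR(p) = {s0}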
Howlett's theorem <cit.> says that, when writing p∈ p = W_Ip̲W_J as a product xp̲y, the choice of elements x∈ W_I, y∈ W_J is unique up to the action of the redundancy; that is, the surjective map W_I× W_J → p, (x,y)↦ xp̲y, induces the bijections (W_I/W_{LR(p)})× W_J→ p and W_I× (W_{RR(p)}\ W_J)→ p. <cit.> also says that, if x∈ W_I is a minimal representative in xW_{LR(p)} and y∈ W_J, or if x∈ W_I and y∈ W_J is minimal in W_{RR(p)}y, then we have xp̲y=x.p̲.y. §.§ Coxeter monoids This subsection briefly introduces the regular and singular Coxeter monoids. We refer to <cit.> for details. The Coxeter monoid (also known as the 0-Hecke monoid or the star monoid) (W,S,*) is defined as the monoid with the following presentation by generators and relations. The generating set is S; the generating relations are * the *-quadratic relation for s∈ S: s*s = s; * the braid relation for s,t∈ S with m=m_st<∞: s*t*⋯ (m factors) = t*s*⋯ (m factors). An expression s_1*s_2*⋯*s_k is reduced if its width k is minimal among the expressions of the same element in (W,S,*). If s*t*⋯*u is a reduced expression in (W,*), then st⋯ u is a reduced expression in the Coxeter group W, and we identify the element w=s*t*⋯*u∈ (W,*) with the element w=st⋯ u∈ W. This identification imports the length function ℓ on the Coxeter group W to the Coxeter monoid (W,*) and justifies the following (abuse of) notation: we write x*y=x.y in (W,*) if ℓ(xy)=ℓ(x)+ℓ(y). The singular Coxeter monoid is the category 𝕄=𝕄(W,S) whose * objects are the finitary subsets I⊂ S; * morphisms, from I to J, are the (I,J)-cosets, i.e., Hom_𝕄(I,J) = W_I\ W/W_J; * composition `*', of p∈ W_I\ W/W_J and q∈ W_J\ W/W_K, is given by p*q = W_I(p̄*q̄)W_K, for finitary subsets I,J,K⊂ S. The category 𝕄 is given a presentation by generators and relations in <cit.>. The generators are the double cosets of the form W_IeW_{Is} and W_{Is}eW_I, where Is:=I⊔ s is our notation and Is⊂ S is finitary. Compositions of these generators are expressed in two ways. One way is to list the finitary subsets appearing: [I_0,I_1,I_2,⋯,I_m] := (W_{I_0}eW_{I_1})*(W_{I_1}eW_{I_2})*⋯*(W_{I_{m-1}}eW_{I_m}). The other way is to list the differences between the finitary subsets: [I ±_1 s_1 ±_2 s_2 ⋯ ±_m s_m] := [I_0,I_1,I_2,⋯,I_m]. Here ±_i∈{+,-} and s_i are such that, if I_is_i=I_{i-1} then ±_i = - and if I_i=I_{i-1}s_i then ±_i = +. We call m the width of the expression (<ref>). (See <cit.> for discussions on the length of an expression, which is not needed for us.) Under the notation (<ref>), the generating relations are * the up-up relations [I+s+t] ≡ [I+t+s]; * the down-down relations [I-s-t] ≡ [I-t-s]; * the switchback relations [I+s-t] ≡ [I - u_1+u_0 -u_2+u_1 -u_3+u_2 … -u_{d-1}+u_{d-2} -u_d+u_{d-1}]; * the *-quadratic relations [I-s+s] ≡ [I]; where the only condition on I,s,t is that the expressions are well-defined, e.g., s,t∉I and Ist⊂ S is finitary for the up-up relation. The elements u_i∈ Is are explained in Section <ref> in detail (see in particular Definition <ref>; u_∙ is the rotation sequence determined by the triple (Is,s,t)). The first three classes of relations are called the braid relations. For finitary I,J,K⊂ S, an (I,J)-coset p, and a (J,K)-coset q, the composition p*q is said to be reduced if (p̄w_J).q̄, or equivalently p̄.(w_Jq̄), is reduced.
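To make the star product and the composition rule concrete, here is a hedged continuation of the SageMath sketch above (again our own code; the fold `star_mult` is an invented helper).

```python
# The star product of the Coxeter monoid (W, S, *).  By the composition
# rule above, the maximal element of p*q is the star product of the maxima
# of p and q, so the composition of cosets is computable from star_mult.
def star(w, g):
    """Multiply w by a simple reflection g in (W, *)."""
    wg = w * g
    return wg if wg.length() > w.length() else w

def star_mult(w, v):
    """w * v in (W, *), computed along a reduced word of v."""
    for i in v.reduced_word():
        w = star(w, s[i])
    return w

assert star(s[1], s[1]) == s[1]           # the *-quadratic relation s*s = s
# maximal element of p*q, for p an (I,J)-coset and q a (J,K)-coset:
# star_mult(p_max, q_max)
```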
An expression I_∙ = [I_0,I_1,⋯,I_m]=(W_I_0eW_I_1)*(W_I_1eW_I_2)*⋯ *(W_I_m-1eW_I_m)is reduced if the composition of the (I_0,I_i)-coset _i = (W_I_0eW_I_1)*(W_I_1eW_I_2)*⋯ *(W_I_i-1eW_I_i) and the (I_i,I_i+1)-coset W_I_ieW_I_i+1 is reduced for each of i=1,⋯ m-1.If 1≤ i ≤ m-1 is such that I_i⊃ I_i+1 then the latter condition at i is automatic; if I_i⊃ I_i+1 then reducedness at i is equivalent to the two conditions p_i=p_i+1 and (p_i)=(p_i+1).The condition in Definition <ref> is equivalent to the more symmetric condition I_∙ =(W_I_0eW_I_1).(W_I_1eW_I_2).⋯ .(W_I_m-1eW_I_m),by which we mean every composition that can be involved in the right hand side, in all possible order of composition, is reduced. See <cit.> for other equivalent criteria and explanations; our formulation in Definition <ref> in terms of redundancy is the original definition from Williamson <cit.>.The singular Matsumoto theorem <cit.> states that every reduced expression of an (I,J)-coset is related by braid relations. §.§ NilCoxeter algebras This subsection introduces the nilCoxeter algebras and nilCoxeter algebroids. We follow <cit.> and refer to it for all omitted details.Letbe a field.The nilCoxeter algebra (W,S) associated to a Coxeter system (W,S) is the -linear algebra presented by generators and relations as follows.The set of generators is {∂_s | s∈ S}; the generating relations are * the Coxeter braid relations, that is, for s,t∈ S with m=m_st<∞∂_s∂_t⋯_m = ∂_t∂_s ⋯_m * the nil-quadratic relations, that is, for s∈ S ∂_s ∂_s = 0.Then (W,S) has a -basis indexed by the elements in (W,S), or equivalently the elements in (W,S,*), and moreover has the same reduced expressions as for the Coxeter group (W,S) or the Coxeter monoid (W,S,*). In fact, (W,S) is the associated graded of the group algebra W (resp., the linearization of the Coxeter monoid (W,*)) filtered with respect to the length function. The nilCoxeter algebroid, or the singular Demazure algebroid, =(W,S) is the -linear category whoseobjects are the finitary subsets I⊂ S and whose morphisms and their composition are defined via the following presentation by generators and relations. * The generating morphisms are∂_[I,Is]:Is→ Iand∂_[Is,I]:I→ Is,for Is⊂ S finitary. That is, a morphism in (W,S) is a -linear combination of compositions of some morphisms of the form (<ref>). We denote such compositions as expressions as in (<ref>) and (<ref>), for example, ∂_[I+s-s]=∂_[I,Is,I]:=∂_[I,Is]∘∂_[Is,I].* The generating relations are * the up-up and down-down relations∂_[I+s+t] = ∂_[I+t+s], ∂_[I-s-t] = ∂_[I-t-s]; * the switchback relation ∂_[I +- ] = ∂_[I - u_1+u_0 -u_2 +u_1 -u_3+u_2 … -u_d-1+u_d-2 -u_d+u_d-1]; * the nil-quadratic relation∂_[I-s+s]= 0.The subset I⊂ S and elements s,t∈ S involved in the generating relations are only required to make the expressions well-defined, and u_∙∈ S is the rotation sequence for (I,,) (see Definition <ref>).In other words, (W,S) has the same presentation as (the liniarization of) (W,S) except it satisfiesthe nil-quadratic relation instead of the *-quadratic relation.<cit.> shows that (W,S) agrees with the category of Demazure operators (on the symmetric algebra of a reflection faithful and balanced realization).In particular, we have_(W,S)(I,J) = ⟨∂_⟩_∈ W_I\ W/W_J =_(W,S)(I,J),for each morphisms space in (W,S) where the set {∂_p | ∈ W_I\ W/W_J}=_(W,S)(I,J) is a -basis. Moreover, each basis element ∂_ has the same reduced expressions as ∈ W_I\ W/W_J in (W,S) does. 
§ CORE AND ATOMS This section contains no original content and is based on <cit.>. As <cit.> is not available at the moment, we repeat some of its proofs here. Let p be an (I,J)-coset. We call p a core coset[Note that we call a core coset a coset with full redundancy in <cit.> but switch to the current terminology in all other papers.] if LR(p) = I and RR(p) = J, or equivalently if I = p̲Jp̲^{-1}. It follows from (<ref>) and the succeeding discussion that Definition <ref> is equivalent to Definition <ref>. The following result allows us to reduce problems on double cosets to problems on core cosets. <cit.> Given an (I,J)-coset p, the (LR(p),RR(p))-coset p^core := W_{LR(p)}p̲W_{RR(p)} is a core coset. Moreover, if p^core ≡ M_∙ is a reduced expression then p ≡ [[I⊃LR(p)]]∘M_∙∘[[RR(p)⊂ J]] is a reduced expression. Here is an important observation about reduced expressions of a core coset and of a general coset. <cit.> Let p be an (I,J)-coset with a reduced expression p ≡ [I_0,⋯,I_m]. Letting p_i ≡ [I_0,⋯,I_i], the sequence LR(p_i) is decreasing. If furthermore p is core, then I=LR(p_i) for all i≤ m. <cit.> proves a number of additional facts on core cosets. In particular, composition of core cosets can be understood as follows. <cit.> Let p be an (I,J)-coset and q be a (J,K)-coset and suppose r=p*q is reduced. If p and q are core cosets, then r is a core coset. Conversely, if r is a core coset and |J|=|I|=|K|, then p and q are core cosets. Let p,q,r be as in the first statement. We claim that p̲*q̲*w_K is also reduced, which, in this case, is equivalent to RD(p̲.q̲)∩ K= ∅. In fact, if s∈RD(p̲.q̲)∩ K, then, by the exchange property (see, e.g., <cit.>), either the product q̲*s or p̲*t is not reduced, where t= q̲sq̲^{-1}∈ q̲Kq̲^{-1} = J, contradicting the minimality of p̲ in p and of q̲ in q. It follows, using <cit.>, that p̲.q̲.w_K = p̲.q̄ = (p̲w_J).q̲ holds, which says p*q is reduced. By definition and <cit.>, we have \overline{p*q} = p̄*q̄ = p̄*w_J*w_J*q̄ = p̄*w_J*q̲*w_K = p̄*q̲*w_K, in particular, p̲.q̲∈ p*q. Note that the reducedness of p̲.q̲.w_K shown above shows that p̲.q̲ has right descents disjoint from K, and similarly p̲.q̲ has left descents disjoint from I. Thus we have p̲.q̲ = \underline{p*q}. Then <cit.> shows that p*q is core, since the right hand side of (<ref>) is equal to p̲.q̲.w_K = \underline{p*q}.w_K. This proves the first claim. Now we let r be core as in the converse statement. Let us show that p is core. The proof for q is similar, or follows by reversing expressions. By Lemma <ref> we have LR(r)⊂LR(p). But LR(r) = I and LR(p)⊂ I, so LR(p) = I. Then |RR(p)|=|LR(p)|=|I|=|J|, thus RR(p)=J as well. This completes the proof. Now we introduce the atoms of this paper. <cit.> A coset is called an atomic coset (or an atom) if it is a core coset and has a reduced expression of the form [I+s-t]. We emphasize the core condition. In fact, a coset of the form [I+s-t] is core exactly when t = w_{Is}sw_{Is}. Thus, the atoms are indexed by the pairs (I,s), with I⊂ S finitary and s∈ S∖ I such that Is is finitary. Another way to characterize an atom among the cosets of the form [I+s-t] is that an atom does not allow for braid relations: <cit.> An atomic coset has a unique reduced expression. More precisely, a coset of the form [I+s-t] is atomic if and only if it has a unique expression; when it is not atomic, the switchback relation applies. Thus a single atom is not a subject of investigation. What we study is the composition of atoms: An expression of the form [L_0,I_0,L_1,I_1,⋯,I_{m-1},L_m]=[L_0+s_1-t_1+⋯+s_m-t_m] is called an atomic expression if, for each 0≤ i<m, the coset a_i ≡ [L_i,I_i,L_{i+1}]=[L_i+s_{i+1}-t_{i+1}] is core (thus an atom).
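Since the removed letter of an atom is forced by the added one, atoms are easy to enumerate by machine. Continuing our SageMath sketch (the helpers `longest` and `atom` are our own names, not the paper's):

```python
# Atoms are indexed by pairs (I, s) with s ∉ I and Is finitary; the removed
# letter is t = w_{Is} s w_{Is}.  We return an atom as its subset expression.
def longest(I):
    """The longest element w_I of the finite parabolic W_I."""
    return max(parabolic(I), key=lambda w: w.length())

def atom(I, i):
    """The atomic coset [I + s_i - t], as (source, middle, target)."""
    w0 = longest(I + [i])
    t = w0 * s[i] * w0            # w_{Is} s_i w_{Is}; w0 is an involution
    j = next(k for k in W.index_set() if s[k] == t)
    return (sorted(I), sorted(I + [i]), sorted(set(I + [i]) - {j}))

print(atom([2], 1))   # in type A_3: [{2}, {1,2}, {1}], removing s_2
```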
Using Proposition <ref>, we also call a composition of atomic cosets a_1*a_2*⋯*a_m an atomic expression. In this sense, a result of <cit.> is that every core coset decomposes into atoms. In fact, it is more precise: <cit.> Let r be a core (I,J)-coset. * For each s∈LD(r̄)∖ I, there is a reduced atomic expression of the form r ≡ [I+s-t+⋯]. * For each t∈RD(r̄)∖ J, there is a reduced atomic expression of the form r ≡ [I+⋯+u-t]. Note that LD(r̄)=I implies r̄=w_I∈ W_I and thus r is a trivial (I,I)-coset with the empty atomic expression [I]. We prove the first claim by induction on r̄. The second claim has a similar proof. The base case is given by Remark <ref>, which also allows us to assume that I ⊊ LD(r̄). Let s ∈ LD(r̄)∖ I. <cit.> gives a reduced expression r ≡ [I,Is,…,J]. Let n be the (Is,J)-coset such that r = [I,Is].n. The left redundancy of n is at most the size of J, so it is a proper subset of Is. By Proposition <ref>, for any t∈ Is∖LR(n), the coset n has a reduced expression of the form [Is,Is∖ t,…,J]. Let q be the (Is∖ t,J)-coset such that n = [Is,Is∖ t].q. Then r = a.q for a ≡ [I,Is,Is∖ t]. Then a is an atom and q is a core coset by Lemma <ref>. Since q̄<r̄ (see Definition <ref>), the induction step is established. PART: Atomic relations § SWITCHBACK RELATIONS VERSUS ATOMIC BRAID RELATIONS In this section we fix a finite Coxeter system (W,S) and employ Notation <ref>. We recommend the reader to keep in mind that the discussion in this section is to be applied in the setting where we have a not-necessarily-finite Coxeter system (W̃,S̃) and S⊂S̃ is a finitary subset generating W in W̃. For s∈ S, we write ŝ = w_Ssw_S. For L⊂ S, we write L̂ = {ŝ | s∈ L}. Behind what follows, we have a subset L⊂ S such that |S∖ L|=2, say S∖ L = {s,s'}. If ŝ=s' then we set t=s; if ŝ≠ s' then we set t=s'. The resulting s,t∈ S are any two elements with the condition t≠ŝ (or equivalently s≠t̂), and here is where we start. Our convention throughout the section is that s,t∈ S are two elements with the condition t≠ŝ, where s=t is possible. <cit.> Given s,t∈ S with t≠ŝ, the rotation sequence associated to (S,s,t) is defined as follows. Let u_0 = s, u_{-1} = t, I_0 = S∖ u_0, I_{-1} = S∖ u_{-1}, and define the elements u_i∈ S and subsets I_i⊂ S, for i∈ℤ, by u_{i+1} = w_{I_i}u_{i-1}w_{I_i}, I_i = S∖ u_i. See <cit.> for further discussions on the rotation sequence and examples (some other examples appear below in Examples <ref>–<ref>), but beware that <cit.> uses the index (S∖ s,s,t) for the rotation sequence we associate to (S,s,t). Here we give an alternative characterization of the rotation sequences. Let u_∙ be the rotation sequence for (S,s,t) and let I_i=S∖ u_i, L_i=I_i∖ u_{i-1}. Then a_i ≡ [L_i,I_i,L_{i+1}] = [L_i+u_{i-1}-u_{i+1}] is an atomic coset for each i∈ℤ. Conversely, if u_i∈ S and I_i are as above, with the cases i=0,-1 given by (<ref>), such that (<ref>) is atomic, then u_∙ is the rotation sequence for (S,s,t). Both claims are direct consequences of the definitions. It follows from Proposition <ref> that [L_0,I_0,L_1,I_1,L_2,I_2,⋯,L_k,I_k,L_{k+1}] = [L_0+u_{-1}-u_1+u_0 -u_2+u_1 ⋯ +u_{k-2} -u_k+u_{k-1} -u_{k+1}] is an atomic expression for each k≥ 0. The expression (<ref>) is reduced if k is small enough, e.g., if k=0. This leads to the following definition. Let d(S,s,t) be the maximal k≥ 0 such that the atomic expression [K+u_{-1}-u_1+u_0 -u_2+u_1 ⋯ +u_{k-2} -u_k+u_{k-1} -u_{k+1}], where K=L_0 =S∖{s,t}, is reduced. <cit.> Let u_∙ be the rotation sequence for (S,s,t). Then, for d =d(S,s,t), we have u_d = t̂.
Moreover, the relation [I_0,S,I_d] = [I_0+s-t̂] ≡ [I_0 -u_1+u_0 -u_2+u_1 -u_3+u_2 … -u_{d-1}+u_{d-2} -u_d+u_{d-1}], between two reduced expressions, holds. The relation of this form is called a switchback relation. In general, a composition of the form [I⊂ J]∘- or -∘[I⊃ J] applied to a reduced expression is a reduced expression (see Definition <ref>; for example, the (I,J)-coset [I⊂ J] has the maximal element p̄=w_J, whence (p̄w_J).q̄ = e.q̄ = q̄ gives that [I⊂ J]∘ J_∙ is reduced for any reduced expression J_∙). In particular, [I_0 -u_1+u_0 -u_2+u_1 -u_3+u_2 … -u_{k-1}+u_{k-2} -u_k+u_{k-1}] is reduced if and only if (<ref>) is reduced. Therefore, the claims follow directly from <cit.>. An observation for later use: The (I_0,I_d)-coset involved in the switchback relation (<ref>) is not a core coset but has the left and right redundancies L_1=I_0∖ u_1 and L_d=I_d∖ u_{d-1}, respectively. This can be seen on the left hand side of (<ref>) by direct computations, but also on the right hand side of (<ref>) in view of the low road in the sense of <cit.>. We record an additional piece of information on the rotation sequence. Let u_∙ be the rotation sequence for (S,s,t) and d=d(S,s,t), as above. Then u_{d+1} = ŝ. By <cit.>, reversing (<ref>) yields the switchback relation for the coset W_{I_d}w_SW_{I_0} ≡ [I_d,S,I_0]. Thus the rotation sequence v_∙ appearing in this switchback relation is given by v_i = u_{d-i}. At the same time, the rotation sequence v_∙ satisfies v_0=t̂ and v_{-1}=ŝ, as given in Definition <ref>. In particular, we have u_{d+1}= v_{-1}= ŝ, as desired. The first sentence in the proof of Proposition <ref> says that [L_0+u-u_1+u_0 -u_2+u_1 -u_3+u_2 … -u_{d-1}+u_{d-2} -u_d+u_{d-1} -u'], where u,u'∈ S are any elements making (<ref>) well-defined, is reduced. Among such u∈ S, the unique choice which makes [L_0+u-u_1] atomic is u=u_{-1}. Similarly, if the last factor [L_d+u_{d-1}-u'] is atomic then u'=u_{d+1} must be the case. These observations show that (<ref>) extends uniquely to the equality between atomic reduced expressions [L_0,I_0,S,I_d,L_{d+1}] = [L_0+u_{-1}+u_0-u_d-u_{d+1}] ≡ [L_0+u_{-1} -u_1+u_0 -u_2+u_1 … -u_{d-1}+u_{d-2} -u_d+u_{d-1} -u_{d+1}], which also agrees with the maximal atomic expression from Definition <ref>. Now we obtain a new atomic reduced expression for (<ref>), giving rise to a `slightly zoomed out' relation between expressions of double cosets. For each pair s,t∈ S such that t≠ŝ, the relation [K+u_{-1} -u_1+u_0 -u_2+u_1 -u_3+u_2 … -u_{d-1}+u_{d-2} -u_d+u_{d-1} -u_{d+1}] ≡ [K+û_{d+1} -û_{d-1}+û_d -û_{d-2}+û_{d-1} -û_{d-3}+û_{d-2} … -û_1+û_2 -û_0+û_1 -û_{-1}] holds, where K=S∖{s,t}, the rotation sequence u_∙ is associated to the triple (S,s,t), and d=d(S,s,t). In terms of the generating relations in 𝕄, the relation (<ref>) is the composition [K+u_{-1} -u_1+u_0 -u_2+u_1 -u_3+u_2 … -u_{d-1}+u_{d-2} -u_d+u_{d-1} -u_{d+1}] ≡(switchback) [K+u_{-1}+u_0 -u_d -u_{d+1}] ≡(downdown, upup) [K+u_0+u_{-1} -u_{d+1} -u_d] = [K+û_{d+1}+û_d -û_0 -û_{-1}] ≡(switchback) [K+û_{d+1} -û_{d-1}+û_d -û_{d-2}+û_{d-1} -û_{d-3}+û_{d-2} … -û_1+û_2 -û_0+û_1 -û_{-1}], where the middle relation is the composition of two commuting relations upup and downdown. Note that K=L_0 and the first relation in (<ref>) is (<ref>) written reversed. The next relation (upup and downdown) in (<ref>) is clear and, since u_0 = û_{d+1}, by Lemma <ref>, and u_d = û_{-1}, by Proposition <ref>, we arrive at the expression [K+û_{d+1}+u_{-1} -u_{d+1} -û_{-1}]. The expression (<ref>) contains the non-atomic expression [K+u_{-1}-u_{d+1}] = [K+t-ŝ],
where the equality between the elements is given again by Lemma <ref>. The switchback relation for (<ref>) differs from (<ref>) by the automorphism u↦û and reversing the expression; namely, [Ks,S,S∖ŝ] ≡ [Ks -û_{d-1}+û_d -û_{d-2}+û_{d-1} -û_{d-3}+û_{d-2} … -û_1+û_2 -û_0+û_1]. Putting (<ref>) back into (<ref>), we obtain the rest of (<ref>). This completes the proof. The atomic braid relation associated to (S,s,t), for t≠ŝ, is the relation (<ref>). The atomic braid relation (<ref>) is analogous to the regular braid relation, say between two simple generators a and b, with m_ab = d+1. For example: If S={s,t}, then (<ref>) is exactly the regular braid relation st⋯ (m_st factors) = ts⋯ (m_st factors), in the form [∅,{s},∅,{t},∅,⋯] (width 2m_st) ≡ [∅,{t},∅,{s},∅,⋯] (width 2m_st). A switchback relation with d=1 is of the form [+s-t] ≡ [-t+s] (this happens when s,t belong to different irreducible components of S), so the associated atomic braid relation is the m=2 braid relation given by the composition [K+u-s+t-v] ≡(switchback) [K+u+t-s-v] ≡(downdown, upup) [K+t+u-v-s] ≡(switchback) [K+t-v+u-s]. If S is of type A, a switchback relation has either d=1 or d=2. So the atomic braid relations in this case are either as described in Example <ref> or of a form resembling the m=3 braid relation. In a general (finite) type, the number d+1 may be strictly bigger than any m_st for s,t∈ S. Here is an example in type D_4 with d+1 = 4: [{s_1,s_3}+s_2 -s_2+s_4 -s_4+s_2 -s_2+s_4 -s_4] ≡ [{s_1,s_3}+s_4 -s_4+s_2 -s_2+s_4 -s_4+s_2 -s_2], where s_2 is the special element in S={s_1,⋯,s_4} corresponding to the hub of the D_4 Dynkin diagram. Here is an example in type E_8 with d+1 = 8: [K +s_1-s_3 +s_4-s_7 +s_3-s_5 +s_7-s_7 +s_5-s_3 +s_7-s_4 +s_3-s_1 +s_4-s_4] ≡ [K +s_4-s_4 +s_1-s_3 +s_4-s_7 +s_3-s_5 +s_7-s_7 +s_5-s_3 +s_7-s_4 +s_3-s_1], where K={s_2, s_3, s_5, s_6, s_7, s_8} and it is an exercise for the reader to find how we index S={s_1,⋯,s_8}. The atomic braid relation associated to (S,s,t) is the same as that associated to (S,t,s). It follows that the atomic braid relations are parametrized by the (unordered) subsets {{s,t}⊂ S | t≠ŝ}, while the switchback relations are parametrized by the ordered pairs {(s,t)∈ S× S | t≠ŝ}. The full list of the switchback relations, for irreducible S, is given in <cit.>, and <cit.> shows that the switchback relation for (S,s,t) is trivial, i.e., of the form [L+s-t] ≡ [L-t+s], when s,t belong to different connected components of S, and has the same form as the switchback relation for (S^0,s,t) if s,t∈ S^0 for a connected component S^0. This gives a classification of the atomic braid relations. An implicit point of Proposition <ref> is another characterization of atomic braid relations, which we make explicit in the following two propositions. Let s,t∈ S be such that t≠ŝ and let p ≡ [K+t+s-t̂-ŝ]. Then the reduced expression graph of p is of the form (<ref>) (together with the expressions obtained by upup only, or downdown only, in the middle, as in (<ref>) below). By Proposition <ref> and Matsumoto's theorem <cit.>, it is enough to show that no braid relation in 𝕄 other than what appears in (<ref>) is applicable to each of the four (plus two, hidden behind the commutativity of upup and downdown) expressions in (<ref>). First consider the first expression in (<ref>). Since it has alternating signs, no upup or downdown relation is applicable.
Also, by Proposition <ref> and Proposition <ref>, no switchback relation is applicable to any subexpression [+u_{i-1}-u_{i+1}]. Therefore it remains to note that the only (consecutive) subexpression where a switchback relation is applicable is [-u_1 ⋯ +u_{d-1}], since such a subexpression cannot start earlier or later for sign reasons and cannot be a proper subexpression of [-u_1 ⋯ +u_{d-1}] by the maximality of d in switchback relations. For the second expression in (<ref>), it is clear that there is one place where upup is applicable, one place where downdown is applicable, and one place where switchback is applicable. For the hidden expressions [K+s+t-t̂-ŝ] and [K+t+s-ŝ-t̂], no switchback is applicable to the atomic subexpressions [+t-t̂] and [+s-ŝ], respectively, and thus upup and downdown are the only applicable relations. The rest follows by symmetry. Let p be a double coset of the form [K+s+t-v-u]. * If p has a rex graph of the form of the hexagon in which E_1 and E_2 are joined by a switchback edge, E_2 is joined to E_3 by a downdown edge and to E_5 by an upup edge, E_3 and E_5 are joined to E_4 by an upup edge and a downdown edge respectively, and E_4 and E_6 are joined by a switchback edge, where E_1,⋯,E_6 are the reduced expressions of p, then the composition E_1 ≡ E_6 is an atomic braid relation. * If t≠ w_{Kst}vw_{Kst} and if applying the switchback relation for [+t-v] in [K+s+t-v-u] yields an atomic expression, then the rex graph of p is of the form (<ref>) (which gives an atomic braid relation by Part (<ref>)). We set S = Kst and, by renaming, assume that t≠v̂ (if t=v̂ then swap v and u and apply the downdown relation). Let us prove the first statement. Let p ≡ [K+s+t-v-u] have a rex graph (<ref>). Switchback, upup, and downdown relations are applicable to [K+s+t-v-u], so we must have [K+s+t-v-u]=E_2, up to the symmetry of (<ref>). Then E_1 is of the form [K+s -u_1+u_0 -u_2+u_1 ⋯ -u_d+u_{d-1} -u] with the rotation sequence u_∙. Since E_1 has no other edge in the graph, it involves no other switchback relations; in particular, [+s-u_1] and [+u_{d-1}-u] are atomic. The latter implies s=u_{-1} and u = u_{d+1} (see Remark <ref>), and the claim is proved. Now suppose the right hand side of [K+s+t-v-u] ≡(switchback) [K+s -u_1+u_0 -u_2+u_1 ⋯ -u_d+u_{d-1} -u] is atomic. Then we have s=u_{-1} and u = u_{d+1}, and Proposition <ref> and Proposition <ref> give the second statement. § THE ATOMIC MATSUMOTO THEOREM In this section we let (W,S) be an arbitrary Coxeter system and I,J⊂ S be finitary subsets. Given a core (I,J)-coset p, any two atomic reduced expressions of p are related by a composition of atomic braid relations. Given two atomic expressions L_∙ and K_∙ of a core coset, we write L_∙∼ K_∙ if L_∙ and K_∙ are related by a composition of atomic braid relations. Recall that the width of an expression [L_0,L_1,⋯,L_m] is the number m. We proceed by induction on the minimal width of atomic reduced expressions of p. The base case p ≡ [I] has a unique reduced expression, hence the claim of the theorem is trivially true. Suppose that the claim is true for every core coset with an atomic reduced expression of width <m. Let p ≡ L_∙ = [L_0,L_1,⋯,L_m] =[I+s-s'+…-s''] and p ≡ K_∙ = [K_0,K_1,⋯,K_l] =[I+t-t'+…-t''] be two atomic reduced expressions (where l is any positive integer). If s=t, then we also have s' = w_{Is}sw_{Is} = t' since L_∙,K_∙ are atomic, thus L_i=K_i for i=0,1,2. Therefore, [L_2,⋯,L_m] ≡ [K_2,⋯,K_l] are two atomic reduced expressions covered by the induction hypothesis, and the claim L_∙∼ K_∙ follows from [L_2,⋯,L_m]∼[K_2,⋯,K_l]. It is thus enough to consider the case s≠ t.
By <cit.>, we have s,t∈LD(p̄), and by <cit.> there exist reduced expressions (called high roads) of the form p ≡ [I,Is,Ist]∘J_∙ and p ≡ [I,It,Ist]∘J_∙, where J_∙ expresses some (Ist,J)-coset q. Then we have q̲Jq̲^{-1} = q̲p̲^{-1}Ip̲q̲^{-1} ⊂ W_{Ist}IW_{Ist} = W_{Ist}, where we use that p is core in the first equality and q̲∈ W_{Ist}p̲ in the inclusion. By Kilmoyer's theorem <cit.>, we obtain Ist⊃ q̲Jq̲^{-1} and thus LR(q) = Ist∩ q̲Jq̲^{-1} = q̲Jq̲^{-1}. Note also that |LR(q)| = |J|=|I| = |Ist|-2. Naming {u,v} = Ist∖LR(q), we have by Proposition <ref> q ≡ [Ist-u-v]∘ q^core, from which it follows p ≡ [I+s+t-v-u]∘ q^core. We may and do assume that u≠ w_{Ist}sw_{Ist} and v≠ w_{Ist}tw_{Ist}, since we can swap u and v if an equality holds. Applying the switchback relation for (Ist,t,v) to (<ref>), we get p ≡ [I+s+t-v-u]∘ q^core ≡ [I+s -u_1+u_0 -…-u_d+u_{d-1} -u]∘ q^core. By Lemma <ref> below, the expression [I+s -u_1+u_0 -…-u_d+u_{d-1} -u] is atomic. Since [I+s-s'] is also atomic (recall the atomic expression (<ref>)), we have u_1=s'. For a visual effect let us introduce a new letter u':=u_{d-1}. Now, taking an atomic reduced expression M_∙ ≡ q^core, we obtain the atomic expression [L_0,L_1,L_2,⋯,L_m] = [I+s-s'+…-s''] ≡ [I+s-s'+t-…-v+u'-u]∘ M_∙ and [L_2,⋯,L_m] ≡ [(Is∖ s')+t-…-v+u'-u]∘ M_∙, where the right hand side, being a subexpression of such, is an atomic reduced expression. Then the induction hypothesis gives [L_2,⋯,L_m] ∼ [(Is∖ s')+t-…-v+u'-u]∘ M_∙, from which it follows L_∙ ∼ [I+s-s'+t-…-v+u'-u]∘ M_∙. Since the same argument establishes (an analogue of) (<ref>) for K_∙, we have L_∙ ∼((<ref>) for L_∙) [I+s-s'+t-…-v+u'-u]∘ M_∙ ≡(switchback) [I+s+t-v-u]∘ M_∙ ≡(downdown, upup) [I+t+s-u-v]∘ M_∙ ≡(switchback) [I+t-t'+s-…-u+v'-v]∘ M_∙ ∼((<ref>) for K_∙) K_∙, where, by Proposition <ref>, the middle three relations compose to an atomic braid relation. The proof is complete. If u_∙ is a rotation sequence and [I+s -u_1+u_0 -…-u_d+u_{d-1} -u]∘ N_∙ expresses a core coset, for some expression N_∙, then [I+s -u_1+u_0 -…-u_d+u_{d-1} -u] is an atomic expression. By Proposition <ref>, it is enough to show that [I+s-u_1] is atomic and that [+u_{d-1}-u] is atomic. Let us prove that [I+s-u_1] is atomic. Suppose not. Then, by Proposition <ref>, a switchback relation applies to [I+s-u_1] to give a reduced expression of the coset starting with some [I⊃ I']. Since the redundancy decreases in the first step of the latter expression (see also Remark <ref>), this contradicts the coset being core (see, e.g., <cit.>). Similarly, if the subexpression [+u_{d-1}-u] is not atomic, then Remark <ref> applies here to result in a contradictory size of the redundancy for p. Theorem <ref> does not say that the atomic reduced expressions are in correspondence with the reduced expressions in some Coxeter group. Although this is true in type A and type B (as is proved in <cit.>), new structures appear in other types. See Section <ref>. Theorem <ref> does not say that any relation between two atomic reduced expressions in 𝕄 is a composition of atomic braid relations. For a core coset p and its atomic reduced expressions, it seems more reasonable to consider the rex graph whose edges are the atomic braid relations, rather than the full subgraph of the rex graph for p in 𝕄. All atomic reduced expressions of a given core coset are of the same width. This follows from Theorem <ref> since every atomic braid relation relates expressions of the same width.
The width of an atomic expression of a core coset is an even number 2l, where l is the number of atomic cosets that are composed in the expression. Corollary <ref> says that the following is well-defined. The atomic length of a core coset is half of the width of an atomic expression of it. One may define the atomic length for all double cosets by setting the atomic length of p∈𝕄 to be the atomic length of p^core, but this definition lacks desirable properties. For example, the atomic length of (p∘ q)^core need not agree with the sum of the atomic lengths of p^core and of q^core. Also, the atomic length is not compatible with the Bruhat order, in the sense that there exist (I,J)-cosets p≤ q where the atomic length of p^core is strictly bigger than that of q^core, even for q=q^core atomic. § NON-BRAID RELATIONS We continue letting (W,S) be an arbitrary Coxeter system. Let us introduce two classes of non-braid relations between atomic expressions. Associated to an atomic expression in 𝕄 of the form [I+s-s], we define the atomic quadratic relation as the composition [I+s-s+s-s] ≡(*-quadratic) [I+s-s]. Associated to an atomic expression in 𝕄 of the form [I+s-t] with s≠ t, we define the atomic cubic relation as the composition (of two commuting relations) [I+s-t+t-s+s-t] ≡(*-quadratic, *-quadratic) [I+s-t]. The relation (<ref>) still holds when s=t but is generated by (<ref>), so we exclude such cases to have a smaller number of relations. The left hand sides of the atomic quadratic relation (<ref>) and the atomic cubic relation (<ref>) are indeed atomic expressions. The only nontrivial part is [(Is∖ t)+t-s] in (<ref>), but the latter is the inverse of the atomic expression [I+s-t]. The atomic quadratic relation (or the absence of other quadratic relations) is motivated by the following statement from <cit.> (the first part is a special case of Lemma <ref>). <cit.> Let [I+s-t] be an atomic (I,J)-expression (where J=Is∖ t) and [J+u-v] be an atomic (J,K)-expression. * If r = p*q ≡ [I+s-t]∘[J+u-v]=[I+s-t+u-v] is reduced, then r is a core coset. * If the composition r = p*q is not reduced, then r is core if and only if I=J=K and s=t=u=v, in which case r ≡ [I+s-s]. The first part is a special case of Lemma <ref>. The `if' claim follows from the *-quadratic relation [Is-s+s] ≡ [Is], which also gives r ≡ [I+s-s] in this case. Now suppose r = p*q ≡ [I+s-t+u-v] is not reduced while r is core. Then n ≡ [I+s-t+u] is not reduced, that is, n̲<p̲ or LR(n)⊋LR(p). The latter is not the case since r is core. Since p̲ = n̲.x for some x∈ W_{Ju} and since the only right descent of p̲ is t, we have t=x∈ Ju, thus t=u. It follows that s=v and I=K. Finally, the *-quadratic relation gives r ≡ [I+s-s], implying that r is an atom, and thus r=p. The other equalities follow. Let p be a core (I,J)-coset and q be a core (J,K)-coset. If r=p*q is reduced then r is a core coset. By Proposition <ref>, both p and q have atomic expressions. Thus it is enough to assume that q ≡ [J+s-t] is an atomic coset. Since r=p*q ≡ p∘[J+s-t] = p∘[J+s]∘[Js-t] is reduced, the subexpression p∘[J+s] is reduced, and thus we have I = LR(p)=LR(p∘[J+s]) and \underline{p∘[J+s]}=p̲. Also LR(p∘[J+s-t]) = LR(p∘[J+s]) = LR(p) = I (see, e.g., <cit.>), so LR(r) = I, and a symmetric argument gives RR(r)=K; that is, r is core. Recall that by an atomic expression we mean a composition of (the unique reduced expressions of) atomic cosets, which is not necessarily reduced or core. We introduce an auxiliary definition useful for this section. An atomic expression [I_0,⋯,I_{2ℓ}] is said to be admissible if each [I_0,⋯,I_{2m}], for m≤ℓ, expresses a core coset.
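The atomic quadratic and cubic relations can be sanity-checked by machine. Continuing our SageMath sketch (with the invented helper `expr_max`), we use that, by the composition rule of 𝕄, the coset expressed by [I_0,⋯,I_m] is W_{I_0}wW_{I_m} for w the star product of the longest parabolic elements along the expression; two expressions with the same endpoints thus express the same coset exactly when these star products agree.

```python
# Sanity checks of the atomic quadratic and cubic relations in type A_3.
def expr_max(subsets):
    """Star product w_{I_0} * w_{I_1} * ... * w_{I_m} along an expression."""
    w = W.one()
    for I in subsets:
        w = star_mult(w, longest(I))
    return w

# atomic quadratic relation [I+s-s+s-s] ≡ [I+s-s], with the atom
# [{3}+s_1-s_1] (here w_{13} s_1 w_{13} = s_1, as s_1 and s_3 commute):
assert expr_max([[3], [1, 3], [3], [1, 3], [3]]) == \
       expr_max([[3], [1, 3], [3]])
# atomic cubic relation [I+s-t+t-s+s-t] ≡ [I+s-t], with the atom
# [{2}+s_1-s_2] (here w_{12} s_1 w_{12} = s_2):
assert expr_max([[2], [1, 2], [1], [1, 2], [2], [1, 2], [1]]) == \
       expr_max([[2], [1, 2], [1]])
```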
A reduced atomic expression [I_0,⋯,I_{2ℓ}] is automatically admissible, since the redundancies of all p_i ≡ [I_0,⋯,I_i] are the same (see <cit.>). The following lemma generalizes Proposition <ref> (<ref>). A reduced atomic expression of a core coset is admissible. We prove the claim by induction on the atomic length ℓ of the core coset p. Let p=a_1.a_2.⋯.a_ℓ be a reduced product of atomic cosets a_i; each prefix q_m = a_1.a_2.⋯.a_m is again a reduced product, and it is enough to show that each q_m expresses a core coset. The base case ℓ=1 is Proposition <ref> (<ref>). For the induction step, q_{ℓ-1} is core by induction, and q_ℓ = q_{ℓ-1}.a_ℓ is then core by the following consequence of the above: if q is a core (I,J)-coset and a is a core (J,K)-coset such that q*a is reduced, then q*a is core. We then have the following analogue of the Coxeter presentation. A non-reduced admissible atomic expression is related to a reduced atomic expression by a composition of atomic braid relations, atomic quadratic relations, and atomic cubic relations. Moreover, atomic quadratic and cubic relations need be applied only from the left hand side to the right hand side of (<ref>) and (<ref>). Our proof of Proposition <ref> uses the following analogue of the (weak) exchange property for Coxeter groups. Let q be a core (I,J)-coset and suppose r = q∘[J+s] is a (well-defined) non-reduced composition. * We have q̲ > r̲. * The core coset q has an atomic reduced expression ending in [-s]. Since q ⊂ r we have q̲≥ r̲. Suppose q̲=r̲. Then RR(r) = Js∩ r̲^{-1}Ir̲ = Js∩ q̲^{-1}Iq̲ = Js∩ J = RR(q), where we use that q is core in the last two equalities. It follows that q∘[J+s] is reduced, contrary to the assumption. This proves the first claim. The condition q̲ > r̲ implies t∈RD(q̲) for some t∈ Js. But since RD(q̲)∩ J =∅, we have t=s. Then s∈RD(q̲)⊂RD(w_I.q̲)=RD(q̄) since q is core. Now Proposition <ref> guarantees an atomic reduced expression of q ending with [-s]. We proceed by induction on the atomic length, which agrees with induction on the width. Our induction begins with two base cases, namely the reduced expressions [I] and [I+s-t], where the claim is trivial. Let m≥ 1, suppose the claim is true for any non-reduced admissible atomic expression of atomic length ≤ m, and let p ≡ a_0*⋯*a_m ≡ I_∙ = [I_0,⋯,I_{2m+2}] = [I+s_0-t_0+s_1-t_1 ⋯ +s_m-t_m] be a non-reduced admissible atomic expression of atomic length m+1, where each a_i is an atomic coset. We work with the `expression' a_0*⋯*a_m instead of the expression I_∙, identifying each a_i with its unique reduced expression [I_{2i}+s_i-t_i]. The proper subexpression a_0*⋯*a_{m-1} expresses a core coset q since I_∙ is an admissible atomic expression. Applying the induction hypothesis, we may assume that q ≡ a_0*⋯*a_{m-1} = [I_0,⋯,I_{2m}] is reduced. Since I_∙ is not reduced, we have that r ≡ q∘[I_{2m}+s_m] is not reduced. By Lemma <ref>, there is an atomic reduced expression M_∙=[M_0,⋯,M_{2m}] so that I_∙ ∼ M_∙∘[I_{2m}+s_m-t_m] = [M_0,⋯,M_{2m-2}]∘[M_{2m-2}+t-s_m]∘[I_{2m}+s_m-t_m] holds for some t∈ S. Here `∼' denotes a composition of atomic braid relations and follows by Theorem <ref>. The latter also justifies the width of M_∙ being 2m. Since [M_{2m-2}+t-s_m]=[M_{2m-2},M_{2m-1},M_{2m}] is atomic, we have in fact t = w_{I_{2m}s_m}s_mw_{I_{2m}s_m}. Since [I_{2m}+s_m-t_m] is also atomic, we have t=t_m. Now there are two cases. The easy case is when s_m=t_m. In this case, the atomic quadratic relation (<ref>) applies to the last part of (<ref>), in the correct direction. The resulting atomic expression has atomic length m. This establishes the induction step in the case s_m=t_m. Let us consider the other case, that is, suppose s_m≠ t_m. Then [M_{2m-2}+t_m-s_m+s_m-t_m] ≡ [M_{2m-2}+t_m-t_m] neither is atomic nor expresses a core coset.
In particular, we cannot have p ≡ [M_{2m-2}+t_m-s_m+s_m-t_m], i.e., we have m≥ 2. Now consider the non-atomic composition [M_0,⋯,M_{2m-2}]∘[+t_m-t_m], which is not reduced by Lemma <ref>. Since [M_0,⋯,M_{2m-2}] is reduced, it is the step `+t_m' that is not reduced; namely, we are again in the situation of Lemma <ref>. Lemma <ref> gives an atomic reduced expression N_∙ ≡ [M_0,⋯,M_{2m-2}] ending in `-t_m'. Note that N_{2m-2}=M_{2m-2} = I_{2m-2}. We see that (<ref>) continues to I_∙ ∼ [M_0,⋯,M_{2m-4},M_{2m-3},M_{2m-2}]∘[+t_m-s_m+s_m-t_m] ∼ [N_0,⋯,N_{2m-4},N_{2m-3},N_{2m-2}]∘[+t_m-s_m+s_m-t_m] = [N_0,⋯,N_{2m-4}]∘[+u-t_m+t_m-s_m+s_m-t_m] for some u∈ S. The second equivalence in (<ref>) follows from Theorem <ref> since [M_0,⋯,M_{2m-2}] ≡ N_∙ involves reduced atomic expressions. Since N_∙, and thus [N_{2m-4}+u-t_m], is an atomic expression, we necessarily have u = s_m. Applying the cubic relation (<ref>) to the last part of (<ref>), in the correct direction, completes the induction step. All admissible atomic expressions of a (core) coset are related by atomic braid relations, atomic quadratic relations, and atomic cubic relations. This follows from Proposition <ref>, Remark <ref>, and Theorem <ref>. We may obtain a stronger analogue of the Coxeter presentation than Corollary <ref> by including non-admissible atomic expressions. The proof could be by induction similar to that of Proposition <ref>, but we would need to consider the possibility that a subexpression is not a core coset. The latter leads us to Remark <ref>. We may obtain a different analogue of the Coxeter presentation than Corollary <ref> by including atomic expressions of non-core cosets, namely, all cosets generated by the atomic cosets. There a class of generating non-braid relations would be [I+s-t+t-s] ≡ [I+s-s], arising from the composition of each atomic coset a ≡ [I+s-t] with its reverse. But we may need more relations for a presentation, whose proof may also be not similar to ours. A consequence of Corollary <ref> is a new presentation of Demazure operators, which is a `zoomed-out' version of the `zoomed-in' presentation in <cit.>. To make this precise, we introduce a subcategory. The atomic Demazure algebroid 𝔻^at is the subcategory of the nilCoxeter algebroid 𝔻 spanned by the morphisms ∂_p associated to core cosets p. Definition <ref> does define a category; that is, 𝔻^at is closed under composition. By Proposition <ref> and Proposition <ref>, a reduced composition of core cosets belongs to 𝔻^at. But any non-reduced composition in 𝔻, being zero, also belongs to 𝔻^at. The category 𝔻^at admits the following presentation by generators and relations. Its generating set is {∂_a | a is an atomic double coset} and its (generating) relations are * the atomic braid relations; * the atomic nil-quadratic relations ∂_a∘∂_{a'}=0 for each generator ∂_a, where a' denotes the reverse of a. By Proposition <ref>, any core coset has an atomic reduced expression, proving that 𝔻^at has the claimed generating set. Since an atomic braid relation is a composition of singular braid relations, the atomic braid relations hold in 𝔻. For the atomic nil-quadratic relation, first note that a double coset a is atomic if and only if its reverse a' is atomic, so it is a relation between atomic compositions. That the atomic nil-quadratic relations hold follows from the composition a*a' ≡ [I∖ s+s-t]∘[I∖ t+t-s] = [I∖ s+s]∘[I-t+t]∘[I-s] being not reduced. Now we show that the above relations are generating.
Theorem <ref> covers the reduced expressions, i.e., the nonzero compositions in 𝔻^at. It remains to show that if ∂_{[a_∙]}=∂_{a_1}∘∂_{a_2}∘⋯∘∂_{a_m} is zero, for a_i atomic, then the expression a_∙=[a_1,⋯,a_m] is related by atomic braid relations to an expression [b_1,⋯,b_m] with b_{k+1} equal to the reverse of b_k at some index k. Since both the atomic quadratic relation and the atomic cubic relation contain a product of an atom and its reverse, Corollary <ref> gives the claim when a_∙ is an admissible atomic expression. Now suppose a_∙ is not an admissible atomic expression. In this case, since any single atom a_i is admissible, there exists a proper consecutive subexpression q ≡ a_i*⋯*a_k of atomic length ≥ 1 which is admissible, such that r = a_i*⋯*a_k*a_{k+1} (or r=a_{i-1}*a_i*⋯*a_k, where the proof goes the same way) is not core. If q is not reduced then, as in the above paragraph, Corollary <ref> gives the claim. If q is reduced, then, writing a_{k+1} ≡ [K+s-t], Lemma <ref> provides an atomic reduced expression of q ending in [L+u-s], and by Theorem <ref> we may assume that a_k=[L+u-s]. But u = w_{Ks}sw_{Ks} = t implies that a_k is the reverse of a_{k+1}, as claimed. PART: Core combinatorics § CORANK 2 CASE Let (W,S) be a finite Coxeter system. Note that if |S∖ I| = |S∖ J| ∈{0,1} then a core (I,J)-coset is either the identity coset or the unique atomic (I,J)-coset, and thus has a unique reduced expression, which is atomic. In this section we consider the next easiest case, namely the core (I,J)-cosets where |S∖ I| = |S∖ J| = 2. Our goal is to explain how the discussion from Section <ref> completely describes the atomic reduced expressions. We denote by W(I_2(m)) the dihedral group of order 2m, viewed as the Coxeter group with the generators {a,b} and the relation ab⋯ (m factors) = ba⋯ (m factors). Then W(I_2(m)) has exactly 2m+1 reduced expressions, whose list is e, a, b, ba, ab, ⋯, m_a, m_b, where we use the notation k_a = ⋯ba (k letters, ending in a) and k_b = ⋯ab (k letters, ending in b). The following lemma is implicit in the construction of the switchback relation. Let J⊂ S be such that S∖ J = {s_1,s_2} for s_1≠ s_2 and let k>0. For i=1,2, there exists exactly one atomic expression, denoted E(i,k), ending in [Js_i,J] of atomic length k. The expression E(1,k) is reduced if and only if k≤ d+1, where d+1 is the atomic length of the atomic braid relation associated to (S,s_1,s_2), i.e., d=d(S,s_1,s_2) (see Definition <ref>; recall ŝ_1 = w_Ss_1w_S). Moreover, E(1,d+1) ≡ E(2,d+1) is an atomic braid relation. Note that there exist exactly two atomic (I,J)-cosets, for exactly two I⊂ S, namely [Js_i∖ w_{Js_i}s_iw_{Js_i}, Js_i, J] for i=1,2. Similarly, if L⊂ S is any subset with |L|=|J|, then there exist exactly two atomic (I,L)-cosets, where I runs over two subsets. Thus, at each step of composing atomic cosets, from right to left, we have exactly two choices of composable atomic cosets. Except at the first step, one of the two choices yields a non-reduced subexpression of the form [+s-t+t-s]. It follows that there is a unique atomic expression, call it E(i,k), of atomic length k>0 ending at [Js_i,J] which does not contain a subexpression of the form [+s-t+t-s].
By Proposition <ref>, this is of the form (<ref>) for the rotation sequence u_∙ associated to (S,u_k,u_{k-1}) (these are the same rotation sequences up to shift, which arises since we start from the right here). If k=d+1 then we obtain the atomic expression on the left hand side of (<ref>). The claim thus follows from Definition <ref> and Proposition <ref>. Let (W,S) be a finite Coxeter system and let J⊂ S with |S∖ J|=2. Then we have a bijection ϕ: W(I_2(d+1)) → {core (I,J)-cosets, where I⊂ S varies} =: 𝕄^core_J, where d+1 is the atomic length of the atomic braid relation associated to (S,S∖ J) (see Definition <ref> and Remark <ref>). Moreover, the bijection is compatible with the braid relations in the sense that the only braid relation (d+1)_a ≡ (d+1)_b in W(I_2(d+1)) corresponds to the only atomic braid relation in 𝕄^core_J, which is on the element ϕ((d+1)_a)=ϕ((d+1)_b). We construct two such bijections ϕ_1, ϕ_2 in the proof. Let S∖ J = {s_1,s_2}. Then Lemma <ref> and the related discussion in Section <ref> apply. Define Φ_i: {all reduced expressions in W(I_2(d+1))} → {atomic reduced expressions for (W,S)} by letting Φ_1(k_a)=E(1,k) and Φ_1(k_b)=E(2,k), with E(i,k) from Lemma <ref>, and the other way around for Φ_2. Each Φ_i is injective by construction. The only braid relation in the domain of Φ_i is (d+1)_a ≡ (d+1)_b. Lemma <ref> says that Φ_i maps this to the atomic braid relation Φ_i((d+1)_a) ≡ Φ_i((d+1)_b). By Proposition <ref>, the latter is the only atomic braid relation in the image of Φ_i. Now the classical Matsumoto theorem and the atomic Matsumoto theorem (Theorem <ref>) together show that Φ_i induces a well-defined injection ϕ_i satisfying the second claim of the proposition. That ϕ_i is surjective follows from Proposition <ref>. This completes the proof. The following examples are deduced from <cit.> and Proposition <ref>. Let (W,S) be of type D. Let s_1∈ S be the unique element having three edges in the associated Dynkin diagram. If s_1∈ J⊂ S with |S∖ J|=2, then 𝕄^core_J and its atomic reduced expressions correspond to those of the dihedral group of type A_2. If s_1∉ J⊂ S with |S∖ J|=2, then 𝕄^core_J and its atomic reduced expressions correspond to those of the dihedral group of type B_2. This is a convenient point to introduce the following abuse of notation. Let (W,S) be of type H_4 with the indexing given by the Coxeter diagram s_1 — s_2 — s_3 =(5)= s_4, where the bond between s_3 and s_4 is labelled 5. Then 𝕄^core_{{s_3,s_4}} and its atomic reduced expressions correspond to those of the dihedral group of type I_2(10). If J ⊂ S is any subset of order two other than {s_3,s_4}, then 𝕄^core_J and its atomic reduced expressions correspond to those of the dihedral group of type I_2(12). If J⊂ S is such that S∖ J={s,t} with w_{Js}sw_{Js} = s and w_{Jt}tw_{Jt} = t, then 𝕄^core_J consists only of (J,J)-cosets. Thus 𝕄^core_J is a monoid, and the ϕ_i are upgraded to isomorphisms from (W(I_2(d+1)),*) (viewed as a category with one object) to 𝕄^core_J, so that ϕ_1^{-1}∘ϕ_2 is the automorphism of (W(I_2(d+1)),*) swapping the two Coxeter generators.
Similarly, we have an algebra isomorphism between the nilCoxeter algebra of type I_2(d+1) and 𝔻^at_J in this case. In general, one can define an equivalence relation on the subsets of S generated by K∼ L if there exists an atomic (K,L)-coset, and consider the subcategory 𝔻^at_{(J)} := ⋃_{K∈(J)}𝔻^at_K in 𝔻^at associated to the equivalence class (J) of J. Then 𝔻^at_{(J)} is a multiple object version of the nilCoxeter algebra D(I_2(d+1)). Viewing a bijection ϕ^K from Proposition <ref>, for each K∈(J), as a bijection from D(I_2(d+1)) to 𝔻^at_K, we have a decomposition 𝔻^at_{(J)} = ⊕_{K∈(J)}ϕ^K(D(I_2(d+1))). In our discussion in this section, the ambient Coxeter system (W,S) is assumed to be finite in order for Section <ref> to apply. The next remark provides some complement to this. Let (W,S) be infinite and J⊂ S finitary with S∖ J = {s_1,s_2} for s_1≠ s_2. If Js_i⊂ S is not finitary then there is no (I,J)-expression ending in [-s_i], for any I⊂ S. If Js_1⊂ S is finitary, then a reduced (I,J)-expression ending in [-s_1] is still of the form E(1,k), which exists for k<m, where an infinitary subset K⊂ S is first involved at the m-th step. In particular, if S is of affine type then we have well-defined and distinct E(1,k) and E(2,k) for all k, which identifies 𝕄^core_J with the Coxeter monoid of type I_2(∞). § GENERAL CASE §.§ The poset of left/right-fixed core cosets Let (W,S) be a Coxeter system. The rest of the paper is devoted to general and special discussions on 𝕄^core_J := {core (I,J)-cosets, where I⊂ S varies} for J⊂ S finitary. Since non-reduced expressions do not appear in the rest of the paper, the discussions below are valid, up to linearization, for 𝔻^at_J = ⋃_I Hom_{𝔻^at}(I,J), which is more naturally defined from the category 𝔻^at (recall that the core cosets in 𝕄 do not form a subcategory). We consider the left-fixed versions _I𝕄^core := {core (I,J)-cosets, where J⊂ S varies} and _I𝔻^at = ⋃_J Hom_{𝔻^at}(I,J) as well. In the special cases covered in Section <ref> or in <cit.>, the atomic reduced expressions and the atomic braid relations in 𝕄^core_J agree with the ordinary reduced expressions and the ordinary braid relations in some Coxeter system. This is not the case in general. The easiest such example is the following. The order of 𝕄^core_{s} in type D_4, for any s∈ S, is 32. Moreover, each such 𝕄^core_{s} has a unique maximal element of atomic length 7. (While these numbers are the same, the structure of 𝕄^core_{s}, e.g., the atomic reduced expression graphs, does depend on s∈ S. We discuss one of the two, up to symmetry, cases in detail in Section <ref>.) Here are some more results of computations on SageMath; a sketch of how such a computation might be organized follows below. We consider the four corank 3 cases in type H_4. We index S={s_1,s_2,s_3,s_4} as in the Coxeter diagram s_1 — s_2 — s_3 =(5)= s_4 above, with the bond between s_3 and s_4 labelled 5.
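The following rough SageMath sketch (our own code with invented helper names, not the paper's) shows one way such counts might be produced: enumerate 𝕄^core_J by repeatedly precomposing with the unique atom into the current source subset, recording a coset by its source and maximal element. It assumes `W`, `s`, `parabolic`, `longest`, and `star_mult` from the sketches above, now with `W = CoxeterGroup(['H', 4])`.

```python
def atoms_into(J):
    """The atoms ending at J: one for each t ∉ J, as (source, middle)."""
    out = []
    for t in W.index_set():
        if t not in J:
            M = sorted(J + [t])
            w0 = longest(M)
            i = next(k for k in W.index_set() if s[k] == w0 * s[t] * w0)
            out.append((sorted(set(M) - {i}), M))
    return out

def core_layer_sizes(J, depth):
    """BFS over atomic compositions ending at J; layer ℓ holds the cosets
    first reached by ℓ atoms, which should match the atomic length counts."""
    seen = {(tuple(J), longest(J))}        # the identity (J,J)-coset
    layer, sizes = set(seen), [1]
    for _ in range(depth):
        new = set()
        for (I, wmax) in layer:
            for (src, mid) in atoms_into(list(I)):
                # the maximal element of atom * coset is w_mid * wmax:
                new.add((tuple(src), star_mult(longest(mid), wmax)))
        layer = new - seen
        seen |= layer
        sizes.append(len(layer))
    return sizes
```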
In all cases, the set 𝕄^core_{s_i} is of order 480 and contains a unique maximal element of atomic length 31. When i=1, the number of elements of atomic length j is the j-th number in the sequence 1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 24, 24, 24, 24, 24, 24, 24, 24, 23, 21, 19, 17, 15, 13, 11, 9, 7, 5, 3, 1, and the longest element has 25392 atomic reduced expressions; when i=2 we have the numbers 1, 3, 5, 7, 9, 12, 14, 15, 17, 19, 22, 23, 22, 22, 23, 26, 26, 23, 22, 22, 23, 22, 19, 17, 15, 14, 12, 9, 7, 5, 3, 1 and the longest element has 35032 atomic reduced expressions; when i=3 we have 1, 3, 5, 7, 9, 12, 15, 16, 17, 19, 21, 22, 22, 23, 24, 24, 24, 24, 23, 22, 22, 21, 19, 17, 16, 15, 12, 9, 7, 5, 3, 1 and the longest element has 36874 atomic reduced expressions; when i=4 we have 1, 3, 5, 7, 9, 12, 15, 17, 18, 18, 19, 21, 23, 24, 24, 24, 24, 24, 24, 23, 21, 19, 18, 18, 17, 15, 12, 9, 7, 5, 3, 1 and the longest element has 36746 atomic reduced expressions. That all atomic length sequences are symmetric is because the involution s↦ŝ is the identity in type H, as we explain in Proposition <ref>. The objects 𝕄^core_J do share a number of general properties with the Coxeter monoids. We investigate some of these in this section. Recall that, for S finitary, we set Ĵ = w_SJw_S, where w_S is the longest element in (W_S,S). Let S be finitary. Then, for I,J⊂ S, the (I,J)-coset W_Iw_SW_J is core if and only if I = Ĵ.
§.§ A corank 3 example Let (W,S) be of type D_4 and let c∈ S:={c,t,u,v} be the simple generator corresponding to the center of the Dynkin diagram of type D_4. In this subsection we describe the atomic reduced expressions for P:={c}.The atomic cosets in P are ^c_t : [t, ct,c],^c_u : [u,cu,c],^c_v : [v,cv,c].Here t is a shorthand for {t}, ct denotes {c,t}, and so on. Note that the S_3-symmetry permuting t,u,v induces an S_3-symmetry on P.The other atomic cosets that appears in atomic reduced expressions in P are^t_c: [c,ct,t],^t_u: [t,tu,t]up to the S_3-symmetry. We drop the superscripts when it is clear from the context: for example, there is a unique s∈ S which makes ^s_v*^c_t well-defined, namely s=t, so we write _v*^c_t instead.Since each element in P can be composed on the left by three atoms, and one of those three produces a subexpression of the form [-s]∘ [+s], the number of reduced expressions in P of (atomic) length m is bounded by 3· 2^m-1.A direct computation (or <cit.>) determines the full list of atomic braid relations in P.The atomic braid relations for (I,{c})-cosets are _c_t^c_u = _c_u^c_t,_c_t^c_v = _c_v^c_t,_c_v^c_u = _c_u^c_v,but we also need the atomic braid relations_t_u^v = _u_t^v , _t_v^u= _v_t^u , _v_u^t = _u_v^t,since they give relations in P, for example,_t_u^v^c_v= _u_t^v^c_v , _t_v^u ^c_u= _v_t^u^c_u , _v_u^t^c_t= _u_v^t^c_t . Then it is easy to see (or use Corollary <ref>) that there are 6 elements in P of (atomic) length two, each of which has a unique reduced expression. From (<ref>) and (<ref>), it follows that the 12 potentially-reduced length three expressions are indeed reduced and give 6 elements in P of length three. Each of these has two reduced expressionsThere is only one way to extend a length three expression in P to a length four expression in P, because a length three expression is related by braid relation to another that has a different atom on the left.So we have 6 elements of length four in P each of which has two reduced expressions. Now we may stop computing and apply Proposition <ref>, since the maximal element W_cw_S W_c= {w_S, cw_S=w_Sc}∈ Phas atomic length seven. This determines in particular the number of elements in P of length four, five, six, seven, namely, 6, 6, 3, 1, respectively, as well as their weak Bruhat relations.This also determines the number of reduced expressions for each element, namely all elements of length four (resp., five, six, seven) has 2 (resp., 4, 8, 24) reduced expressions. A few additional direct computations determine the Hasse diagram for the weak Bruhat poset P, which we include below. The diagram is aligned according to the atomic lengths. The number on each vertex is the number of atomic reduced expressions for the represented element. 
[Hasse diagram of the weak Bruhat poset P, drawn level by level according to the atomic length: the levels at atomic length 0, 1, 2, 3, 4, 5, 6, 7 contain 1, 3, 6, 6, 6, 6, 3, 1 vertices, labelled with 1, 1, 1, 2, 2, 4, 8, 24 atomic reduced expressions, respectively.] Proposition <ref> below says that <cit.> illustrates our P. If S is of type D_{n+2} and S∖ J ={s_0,s_0̅,s_n} corresponds to the three extreme vertices in the Dynkin diagram, then the reduced expression graphs and the left Bruhat order on 𝕄^core_J are the same as in our example in type D_4. The same explanation applies. When S is of type D and J⊂ S is such that |S∖ J|=4, the object 𝕄^core_J can have: The above list covers all such J up to type D_11. § TITS CONE INTERSECTIONS Let us consider the Tits cone Cone(W,S) associated to (W,S) (see, e.g., <cit.>, <cit.>, or <cit.>). To briefly recall, the Tits cone is defined as the subset Cone(W,S)=⋃_{w∈ W}w(C̄)⊂ V^*, where V is the geometric (standard) realization of (W,S) over ℝ with the basis {α_s}_{s∈ S} and C̄ is the closure of the standard chamber C={θ∈ V^* | θ(α_s)>0 for all s∈ S}. <cit.> Given a subset I⊂ S, the Tits I-cone is the intersection 𝖢𝗈𝗇𝖾(W,S,I) := 𝖢𝗈𝗇𝖾(W,S)∩{θ∈ V^* | θ(α_s)=0 for s∈ I}. The intersection 𝖢𝗈𝗇𝖾(W,S,I) gives rise to a new kind of chamber geometry, as developed in <cit.>. Let us fix I⊂ S for the rest of the section. For w∈ W and J⊂ S, let C_J:={θ∈ V^* | θ(α_s)>0 for s∈ S∖ J and θ(α_s)=0 for s∈ J} be the standard J-chamber in 𝖢𝗈𝗇𝖾(W,S,J). One sees from the definition that the image w(C_J) is either contained in 𝖢𝗈𝗇𝖾(W,S,I) or disjoint from 𝖢𝗈𝗇𝖾(W,S,I). A subset of 𝖢𝗈𝗇𝖾(W,S,I) of the form w(C_J), with some w∈ W and J⊂ S with |J|=|I|, is called an I-chamber. Let us denote by 𝖢𝗁𝖺𝗆(W,S,I) the set of I-chambers in 𝖢𝗈𝗇𝖾(W,S,I). Then <cit.> provides the following relation between the Tits cone intersection and the core double cosets. First note that, for a core (I,J)-coset p, we have p=W_IxW_J=xW_J as sets, for any x∈ p. We thus let p(C_J):=x(C_J), which by the above is independent of the choice of x∈ p. For each finitary subset I⊂ S, we have a bijection _I𝕄^core(W,S) → 𝖢𝗁𝖺𝗆(W,S,I) sending a core (I,J)-coset p to the chamber p(C_J). For the claimed bijection to be well-defined, we need that p(C_J) is a chamber in 𝖢𝗈𝗇𝖾(W,S,I). But if θ∈ C_J and s∈ I, then (p̲(θ))(α_s)=θ(α_{p̲^{-1}sp̲}) =0, since p̲^{-1}sp̲∈ p̲^{-1}Ip̲ = p̲^{-1}LR(p)p̲ = RR(p)= J, which shows that p(C_J)⊂𝖢𝗈𝗇𝖾(W,S,I). Now let w(C_J)∈𝖢𝗁𝖺𝗆(W,S,I), i.e., w(C_J)⊂𝖢𝗈𝗇𝖾(W,S,I) for some w∈ W and J⊂ S with |J|=|I|. We need to show that W_Iw=wW_J, from which it follows that p:=W_IwW_J=W_Iw=wW_J is core and that our map is bijective. But if x∈ W_I then the assumption w(C_J)⊂𝖢𝗈𝗇𝖾(W,S,I) implies xw(C_J) = w(C_J) and thus xw∈ wW_J. This shows W_Iw⊂ wW_J, which suffices because of the assumption |I|=|J|. Proposition <ref> is merely an interpretation of <cit.> in our setting.
Those readers familiar with <cit.> may prefer the following: <cit.> states that 𝖢𝗁𝖺𝗆(W,S,I) is in bijection with the set 𝖼𝖢𝗁𝖺𝗆(W,S,I) of combinatorial chambers [Note that 𝖼𝖢𝗁𝖺𝗆(W,S,I) is what <cit.> denotes by 𝖢𝗁𝖺𝗆(W,S,I). See <cit.>.], which consists of the pairs (x,J), for x∈W and J⊂S, satisfying

* ℓ(x) = min{ℓ(y) | y∈xW_J};
* W_I x = x W_J,

via (x,J) ↦ x(C_J). It is therefore enough to show that the map _I^(W,S) → 𝖼𝖢𝗁𝖺𝗆(W,S,I) given by p ↦ (p,J) is a bijection. In fact, (<ref>) is equivalent to W_I x W_J being a core (I,J)-coset, so (x,J) ↦ W_I x W_J defines a left inverse. Condition (<ref>) makes it a right inverse.

Proposition <ref> readily establishes a similar connection to the Demazure algebroid as well. We have a bijection between ⊔_{I⊂S finitary} 𝖢𝗁𝖺𝗆(W,S,I) and a basis of ^(W,S), which restricts to a bijection between ⊔_{K∈(I)} 𝖢𝗁𝖺𝗆(W,S,K) and a basis of _(I)^(W,S) for each I⊂S (see (<ref>)). Since the two categories have the same morphisms, up to linearization, and the same reduced expressions with respect to the Coxeter presentations (see Section <ref>), we have a bijection between _I^(W,S) and a basis of _I^(W,S) for each finitary I⊂S. The claim follows by Proposition <ref>.

Proposition <ref> interprets Theorem <ref> as a generators and relations presentation of all (resp., equivalent) I-chambers. While the current paper only considers finitary subsets I⊂S, there is no finiteness assumption in defining 𝖢𝗈𝗇𝖾(W,S) and 𝖢𝗁𝖺𝗆(W,S,I). However, for the application in <cit.> to tilting theory and birational geometry, it is enough to assume that (W,S) is affine or finite. In either case, a proper subset of S is automatically finitary, so our setting suffices for the purpose. | http://arxiv.org/abs/2312.16666v1 | {
"authors": [
"Hankyung Ko"
],
"categories": [
"math.CO",
"math.GR",
"math.RT"
],
"primary_category": "math.CO",
"published": "20231227182518",
"title": "An atomic Coxeter presentation"
} |
GAD-PVI: A General Accelerated Dynamic-Weight Particle-Based Variational Inference Framework Fangyikang Wang, Huminhao Zhu, Chao Zhang, Hanbin Zhao, Hui Qian January 14, 2024 =====================

Particle-based Variational Inference (ParVI) methods approximate the target distribution by iteratively evolving finite weighted particle systems. Recent advances in ParVI methods reveal the benefits of accelerated position update strategies and dynamic weight adjustment approaches. In this paper, we propose the first ParVI framework that possesses both accelerated position update and dynamical weight adjustment simultaneously, named the General Accelerated Dynamic-Weight Particle-based Variational Inference (GAD-PVI) framework. Generally, GAD-PVI simulates the semi-Hamiltonian gradient flow on a novel Information-Fisher-Rao space, which yields an additional decrease on the local functional dissipation. GAD-PVI is compatible with different dissimilarity functionals and associated smoothing approaches under three information metrics. Experiments on both synthetic and real-world data demonstrate the faster convergence and reduced approximation error of GAD-PVI methods over the state-of-the-art.

§ INTRODUCTION

Particle-based Variational Inference (ParVI) methods have gained significant attention in the Bayesian inference literature owing to their effectiveness in providing approximations of the target posterior distribution π <cit.>. The essence of ParVI lies in deterministically evolving a system of finite weighted particles by simulating the probability space gradient flow of a certain dissimilarity functional ℱ(μ) := 𝒟(μ|π) vanishing at μ = π <cit.>. Since the seminal work Stein Variational Gradient Descent (SVGD) <cit.>, classical ParVI methods focus on simulating the first-order gradient flow in the Wasserstein space. By using different dissimilarity functionals and associated smoothing approaches, various effective ParVI methods have been proposed, including the BLOB method <cit.>, the GFSD method <cit.>, and the KSDD method <cit.>. To improve the efficiency of ParVIs, recent works explore different aspects of the underlying geometry structures in the probability space and design two types of refined particle systems with either accelerated position update or dynamic weight adjustment.

* Accelerated position update. By considering the second-order Riemannian information of the Wasserstein probability space, different accelerated position update strategies have been proposed <cit.>: <cit.> follow the accelerated gradient descent methods in the Wasserstein probability space <cit.> and derive the WNES and WAG methods, which update the particles' positions with an extra momentum; the ACCEL method <cit.> directly discretizes the Hamiltonian gradient flow in the Wasserstein space and updates the positions with the damped velocity field, which effectively decreases the Hamiltonian potential of the particle system. Later, <cit.> consider the Hamiltonian gradient flow for general information probability spaces <cit.>, and derive novel accelerated position update strategies according to the Kalman-Wasserstein/Stein Hamiltonian flow. They theoretically show that the Hamiltonian flow usually has a faster convergence to the equilibrium compared with the original first-order counterpart under mild conditions. Numerous experimental studies demonstrate that these accelerated position update strategies usually drift the particle system to the target distribution more efficiently <cit.>.

* Dynamic weight adjustment. Delving into the orthogonality structure of the Wasserstein-Fisher-Rao (WFR) space, <cit.> develop the first dynamic-weight ParVI (DPVI) methods.
Specifically, they derive effective dynamical weight adjustment approaches by mimicking the reaction variational step in a JKO splitting scheme of the first-order WFR gradient flow <cit.>. Compared with the commonly used fixed weight strategy, these dynamical weight adjustment schemes usually lead to less approximation error, especially when the number of particles is limited <cit.>.

Contribution: In this paper, we propose the first ParVI methods which possess both accelerated position update and dynamical weight adjustment simultaneously. Specifically, we first construct a novel Information-Fisher-Rao (IFR) probability space, which augments the original information space with an orthogonal Fisher-Rao structure. (SVGD methods can also be viewed as using the KL dissimilarity without a smoothing approach under the Stein metric <cit.>.) Then, we originate a novel Semi-Hamiltonian IFR (SHIFR) flow in this space, which simplifies the influence of the kinetic energy on the velocity field in the Hamiltonian IFR flow [Though the Hamiltonian IFR flow seems a natural choice, it is generally infeasible to obtain a practical algorithm by discretizing this flow. Please check Appendix A.2 <ref> for a detailed discussion of the IFR Hamiltonian flow.]. By discretizing the SHIFR flow, a practical General Accelerated Dynamic-weight Particle-based Variational Inference (GAD-PVI) framework is proposed. The main contributions of our paper are as follows:

* We investigate the convergence property of the SHIFR flow and show that the target distribution π is the stationary distribution of the proposed semi-Hamiltonian flow for proper dissimilarity functionals 𝒟(·|π). Moreover, our theoretical result also shows that the augmented Fisher-Rao structure yields an additional decrease on the local functional dissipation, compared to the Hamiltonian flow in the vanilla information space.
* We derive an effective finite-particle approximation to the SHIFR flow, which directly evolves the position, weight, and velocity of the particles via a set of ordinary differential equations. The finite particle system is compatible with different dissimilarity functionals and associated smoothing approaches. We prove that the mean-field limit of the proposed particle system converges to the exact SHIFR flow under mild conditions.
* By adopting an explicit Euler discretization of the finite-particle system, we construct the General Accelerated Dynamic-weight Particle-based Variational Inference (GAD-PVI) framework, which updates positions in an accelerated manner and dynamically adjusts weights. We derive nine GAD-PVI instances by using three different dissimilarity functionals and associated smoothing approaches (KL-BLOB, KL-GFSD and KSD-KSDD) on the Wasserstein/Kalman-Wasserstein/Stein IFR space, respectively.

We evaluate our algorithms on various synthetic and real-world tasks. The empirical results demonstrate the superiority of our GAD-PVI methods.

Notation. Given a probability measure μ on ℝ^d, we denote μ ∈ 𝒫_2(ℝ^d) if its second moment is finite. For a given functional ℱ(·): 𝒫_2(ℝ^d) → ℝ, δℱ(μ̃)/δμ(·): ℝ^d → ℝ denotes its first variation at μ = μ̃. We use C(ℝ^n) to denote the set of continuous functions mapping from ℝ^n to ℝ. We denote x^i ∈ ℝ^d as the i-th particle, for i ∈ {1,...,M}, and the Dirac delta distribution with point mass located at x^i as δ_x^i. We use f*g: ℝ^d → ℝ to denote the convolution operation between f: ℝ^d → ℝ and g: ℝ^d → ℝ. Besides, we use ∇ and ∇·() to denote the gradient and the divergence operator, respectively.
We denote a general information probability space as (𝒫(ℝ^n), G(μ)), where G(μ)[·] denotes the one-to-one information metric tensor mapping elements in the tangent space T_μ𝒫(ℝ^n) ⊂ C(ℝ^n) to the cotangent space T^*_μ𝒫(ℝ^n) ⊂ C(ℝ^n). The inverse map of G(μ)[·] is denoted as G^-1(μ)[·]: T^*_μ𝒫(ℝ^n) → T_μ𝒫(ℝ^n).

§ PRELIMINARIES

When dealing with Bayesian inference tasks, variational inference methods approximate the target posterior π with an easy-to-sample distribution μ, and recast the inference task as an optimization problem over 𝒫_2(ℝ^d) <cit.>: min_{μ∈𝒫_2(ℝ^n)} ℱ(μ) := 𝒟(μ|π). To solve this optimization problem, Particle-based Variational Inference (ParVI) methods generally simulate the gradient flow of ℱ(μ) in a certain probability space with a finite particle system, which transports the initial empirical distribution towards the target distribution π iteratively. Given an information metric tensor G(μ)[·], the gradient flow in the information probability space (𝒫(ℝ^n), G(μ)) takes the following form <cit.>:

∂_t μ_t = -G(μ_t)^-1[δℱ(μ_t)/δμ].

§.§ Wasserstein Gradient Flow and Classical ParVIs

Since the seminal work Stein Variational Gradient Descent (SVGD) <cit.>, many ParVI methods focus on flows in the Wasserstein space, where the inverse of the Wasserstein metric tensor writes G^W(μ)^-1[Φ] = -∇·(μ∇Φ), Φ ∈ T^*_μ𝒫(ℝ^n), and the Wasserstein gradient flow is defined as

∂_t μ_t = ∇·(μ_t ∇ δℱ(μ_t)/δμ).

Based on the probability flow (<ref>) on the density, existing ParVIs maintain a set of particles x^i_t and directly modify the particle positions according to the following ordinary differential equation

dx^i_t = -∇ δℱ(μ̃_t)/δμ(x^i_t) dt,

where μ̃_t = ∑_{i=1}^M w^i_t δ_{x^i_t} denotes the empirical distribution. Since the first variation δℱ(μ̃_t)/δμ of ℱ might not be well-defined for the discrete empirical distribution, various ParVI methods have been proposed by choosing different dissimilarity functionals ℱ and associated smoothing approaches for δℱ(μ̃_t)/δμ, e.g., KL-BLOB <cit.>, KL-GFSD <cit.>, and KSD-KSDD <cit.>.

§.§ Hamiltonian Gradient Flows and Accelerated ParVIs

The following Hamiltonian gradient flow in the general information probability space has recently been utilized to derive more efficient ParVI methods:

∂_t μ_t = δℋ(μ_t,Φ_t)/δΦ,
∂_t Φ_t = -γ_t Φ_t - δℋ(μ_t,Φ_t)/δμ,

where Φ_t denotes the Hamiltonian velocity and ℋ(μ_t,Φ_t) = 1/2 ∫ Φ_t G(μ_t)^-1[Φ_t] dx + ℱ(μ_t) denotes the Hamiltonian potential. Note that the Hamiltonian flow (<ref>) can be regarded as the second-order accelerated version of the information gradient flow (<ref>), and usually converges faster to the equilibrium of the target distribution under mild conditions <cit.>. Though the form of the Hamiltonian flow (<ref>) seems complicated, it induces a simple augmented particle system (x_t^i, v_t^i), which evolves the position x_t^i and velocity v_t^i of particles simultaneously. As the position update rule of x_t^i also uses the extra velocity information, the induced system is said to have an accelerated position update. By discretizing the continuous particle system, several accelerated ParVI methods have been proposed, which converge faster to the target distribution in numerous real-world Bayesian inference tasks <cit.>.
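In discrete time, the induced accelerated particle update is a single damped Euler step. The following is a minimal Python sketch (not from the paper; the function name is ours, and grad_U stands for any smoothed approximation of ∇δℱ/δμ evaluated at the particles, as discussed below):

import numpy as np

def accelerated_parvi_step(X, V, grad_U, eta, gamma):
    # One explicit Euler step of the damped Hamiltonian particle system
    # dx = v dt,  dv = (-gamma * v - grad dF/dmu (x)) dt.
    # X, V: (M, d) arrays of positions and velocities.
    X_new = X + eta * V
    V_new = (1.0 - gamma * eta) * V - eta * grad_U(X)
    return X_new, V_new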
§.§ Wasserstein-Fisher-Rao Flow and Dynamic-weight ParVIs

Recently, the Wasserstein-Fisher-Rao (WFR) flow has been used to derive effective dynamic weight adjustment approaches to mitigate the fixed-weight restriction of ParVIs <cit.>. The inverse of the WFR metric tensor is

G^WFR(μ)^-1[Φ] = -∇·(μ∇Φ) + (Φ - ∫Φ dμ)μ,

where Φ ∈ T^*_μ𝒫(ℝ^n), and the WFR gradient flow writes:

∂_t μ_t = ∇·(μ_t ∇ δℱ(μ_t)/δμ) [Wasserstein transport] - (δℱ(μ_t)/δμ - ∫ δℱ(μ_t)/δμ dμ_t)μ_t [Fisher-Rao variational distortion].

Since the WFR space can be regarded as the orthogonal sum of the Wasserstein space and the Fisher-Rao space, <cit.> mimic a JKO splitting scheme for the WFR flow, which deals with the position and the weight via the Wasserstein transport and the Fisher-Rao variational distortion, respectively. Given a set of particles with positions x^i_t and weights w_t^i, the Fisher-Rao distortion can be approximated by the following ode

d/dt w^i_t = -(δℱ(μ̃_t)/δμ(x^i_t) - ∑_{j=1}^M w^j_t δℱ(μ̃_t)/δμ(x^j_t)) w^i_t.

According to the ode (<ref>), <cit.> derive two dynamical weight-adjustment schemes and propose the Dynamic-Weight Particle-Based Variational Inference (DPVI) framework, which is compatible with several dissimilarity functionals and associated smoothing approaches.

§ METHODOLOGY

In this section, we present our General Accelerated Dynamic-weight Particle-based Variational Inference (GAD-PVI) framework, detailed in Algorithm <ref>. We first introduce a novel augmented Information-Fisher-Rao space, and originate the Semi-Hamiltonian-Information-Fisher-Rao (SHIFR) flow in the space. The theoretical analysis on SHIFR shows that it usually possesses an additional decrease on the local functional dissipation compared to the Hamiltonian flow in the original information space. Then, effective finite-particle systems, which directly evolve the position, weight, and velocity of the particles via a set of ordinary differential equations, are constructed based on SHIFR flows in several IFR spaces with different underlying information metric tensors. We demonstrate that the mean-field limit of the constructed particle system exactly converges to the SHIFR flow in the corresponding probability space. Next, we develop the GAD-PVI framework by discretizing these continuous-time finite-particle formulations, which enables simultaneous accelerated updates of particles' positions and dynamic adjustment of particles' weights. We present nine effective GAD-PVI algorithms that use different underlying information metric tensors, dissimilarity functionals and the associated finite-particle smoothing approaches.

§.§ Information-Fisher-Rao Space and Semi-Hamiltonian-Information-Fisher-Rao Flow

To define the augmented Information-Fisher-Rao probability space, we introduce the Information-Fisher-Rao metric tensor G^IFR(μ), whose inverse is defined as follows:

G^IFR(μ)^-1[Φ] = G^I(μ)^-1[Φ] + (Φ - ∫Φ dμ)μ,

where Φ ∈ T^*_μ𝒫(ℝ^n) and G^I(μ) denotes a certain underlying information metric tensor. Note that G^IFR(μ) is formed by the inf-convolution of G^I(μ) and the Fisher-Rao metric tensor. Based on G^IFR(μ), we introduce the following novel semi-Hamiltonian flow of ℱ on the Information-Fisher-Rao space (𝒫(ℝ^n), G^IFR(μ)):

∂_t μ_t = δℋ^IFR(μ_t,Φ_t)/δΦ,
∂_t Φ_t = -γ_t Φ_t - 1/2 δ/δμ(∫ Φ_t G^I(μ_t)^-1[Φ_t] dx) - δℱ(μ_t)/δμ,

where Φ_t denotes the Hamiltonian velocity and

ℋ^IFR(μ_t,Φ_t) = 1/2 ∫ Φ_t G^I(μ_t)^-1[Φ_t] dx [information kinetic energy] + 1/2 ∫ Φ_t(Φ_t - ∫Φ_t dμ_t) dμ_t [Fisher-Rao kinetic energy] + ℱ(μ_t) [potential energy]

denotes the Hamiltonian potential in the IFR space.
Compared to the full Hamiltonian flow of ℱ in the IFR space, the SHIFR flow (<ref>) ignores the influence of the Fisher-Rao kinetic energy on the Hamiltonian field Φ_t. Later, we will show that SHIFR can be directly transformed into a particle system consisting of odes on the positions, velocities and weights of particles for proper underlying information metric tensors, while it is generally infeasible to obtain such a direct particle system from the corresponding full Hamiltonian flow because it is difficult to handle the Fisher-Rao kinetic energy. As the kinetic energy term vanishes near the equilibrium of the flow, it is acceptable for the SHIFR flow to neglect this intractable term, and it still has the target distribution π as its stationary distribution. Moreover, this semi-Hamiltonian flow would converge faster compared to the Hamiltonian flow in the original information space on account of an extra local descent property. Due to the limit of space, we defer the stationary analysis and the quantitative functional dissipation analysis of the SHIFR flow to the appendix; please refer to Proposition 2 and Proposition 3 for details.

With different underlying information metric tensors G^I(μ) in ℋ^IFR(μ_t,Φ_t), we can obtain different SHIFR flows. Suitable G^I(μ) include the Wasserstein metric tensor, the Kalman-Wasserstein metric tensor (KW-metric) and the Stein metric tensor (S-metric). For instance, the SHIFR flow with the Wasserstein metric (Wasserstein-SHIFR flow) writes:

∂_t μ_t = -∇·(μ_t∇Φ_t) - (δℱ(μ_t)/δμ - ∫ δℱ(μ_t)/δμ dμ_t)μ_t,
∂_t Φ_t = -γ_t Φ_t - 1/2 ‖∇Φ_t‖^2 - δℱ(μ_t)/δμ.

Note that in the subsequent section, we focus on the Wasserstein-SHIFR flow, and defer the detailed formulations with respect to KW-SHIFR and S-SHIFR to Appendix B.1 and B.2 due to limited space.

§.§ Finite-Particle Formulations of SHIFR Flows

Now, we derive the finite-particle approximation to the SHIFR flow, which directly evolves the position x^i_t, weight w^i_t, and velocity v^i_t of the particles. Specifically, we construct the following ordinary differential equation system to simulate the Wasserstein-SHIFR flow (<ref>):

dx^i_t = v^i_t dt,
dv^i_t = (-γ_t v^i_t - ∇ δℱ(μ̃_t)/δμ(x^i_t)) dt,
dw^i_t = -(δℱ(μ̃_t)/δμ(x^i_t) - ∑_{j=1}^M w^j_t δℱ(μ̃_t)/δμ(x^j_t)) w^i_t dt,
μ̃_t = ∑_{i=1}^M w^i_t δ_{x^i_t}.

The following proposition demonstrates that the mean-field limit of the particle system (<ref>) corresponds precisely to the Wasserstein-SHIFR flow in (<ref>). Suppose the empirical distribution μ̃^M_0 of M weighted particles weakly converges to a distribution μ_0 when M → ∞. Then, the path of (<ref>) starting from μ̃^M_0 with initial velocities 0 weakly converges to a solution of the Wasserstein-SHIFR gradient flow (<ref>) starting from μ_t|_{t=0} = μ_0 and Φ_t|_{t=0} = 0 as M → ∞.

§.§ GAD-PVI Framework

Generally, it is impossible to obtain an analytic solution of the continuous finite-particle formulations (<ref>), thus a numerical integration method is required to derive an approximate solution. Note that any numerical solver, such as the implicit Euler method <cit.> and higher-order Runge-Kutta methods <cit.>, can be used. Here, we follow the tradition of ParVIs to adopt the first-order explicit Euler discretization <cit.> since it is efficient and easy-to-implement <cit.>, and propose our GAD-PVI framework, as listed in Algorithm <ref>.
Dissimilarity Functionals and Smoothing Approaches. To develop practical GAD-PVI methods, we must first select a dissimilarity functional ℱ. The commonly used underlying functionals are the KL-divergence <cit.> and the Kernel Stein Discrepancy <cit.>. Once a dissimilarity functional ℱ has been chosen, we need to select a smoothing approach to approximate the first variation of the empirical approximation, as the value of δℱ(·)/δμ at an empirical distribution μ̃ = ∑_{i=1}^M w^i δ_{x^i} is generally not well-defined. Smoothing strategies allow us to approximate the first variation value at the discrete empirical distribution. Generally, a smoothed approximation to the first variation is denoted as U_μ̃(·) ≈ δℱ(μ̃)/δμ(·). The commonly used smoothing approaches in the ParVI area, namely BLOB (with the KL-divergence as ℱ) <cit.>, GFSD (with the KL-divergence as ℱ) <cit.>, and KSDD (with the Kernel Stein Discrepancy as ℱ) <cit.>, are all compatible with our GAD-PVI framework. Here, we describe the dissimilarity functional KL-divergence and the associated BLOB smoothing approach as an example. The first variation of the KL is

δℱ(μ)/δμ(·) := δKL(μ|π)/δμ(·) = -logπ(·) + logμ(·).

As logμ(x) is ill-defined for the discrete empirical distribution μ̃_k, the BLOB smoothing approach reformulates the intractable term logμ as δ/δμ 𝔼_μ[logμ] and smooths the density with a kernel function K, resulting in the approximation

logμ̃ ≈ δ/δμ̃ 𝔼_μ̃[log(μ̃*K)] := log∑_{i=1}^M w^i K(·,x^i) + ∑_{i=1}^M w^i K(·,x^i)/∑_{j=1}^M w^j K(x^i,x^j),

for a discrete density μ̃ = ∑_{i=1}^M w^i δ_{x^i}. This leads to the following approximation result:

U_μ̃_k(x) = -logπ(x) + log∑_{i=1}^M w^i_k K(x,x^i_k) + ∑_{i=1}^M w^i_k K(x,x^i_k)/∑_{j=1}^M w^j_k K(x^i_k,x^j_k).

Details regarding other dissimilarity functionals and smoothing approaches are included in Appendix B.3.

Updating rules. Once the functional ℱ and its empirical approximation of the first variation U_μ̃ ≈ δℱ(μ̃)/δμ are decided, we adopt a Jacobi-type strategy to update the positions x^i_k, velocities v^i_k and weights w^i_k, i.e., the calculations in the (k+1)-th iteration are totally based on the variables obtained in the k-th iteration. Therefore, starting from M weighted particles located at {x_0^i}_{i=1}^M with weights {w_0^i}_{i=1}^M and velocities {v_0^i = 0}_{i=1}^M, GAD-PVI w.r.t. the Wasserstein-SHIFR flow first updates the positions of particles according to the following rule:

x^i_{k+1} = x^i_k + η_pos v_k^i.

Then, it adjusts the velocities as

v_{k+1}^i = (1 - γη_vel) v_k^i - η_vel ∇U_μ̃_k(x_k^i),

and the particles' weights as follows:

w^i_{k+1} = w^i_k - η_wei (U_μ̃_k(x_k^i) - ∑_{j=1}^M w_k^j U_μ̃_k(x_k^j)) w^i_k.

Here μ̃_k = ∑_{i=1}^M w^i_k δ_{x^i_k} denotes the empirical distribution, and η_pos/η_vel/η_wei are the discretization stepsizes. It can be verified that the total mass of μ̃_k is conserved and μ̃_k remains a valid probability distribution during the whole procedure of GAD-PVI, i.e., ∑_i w^i_k = 1 for all k. The detailed updating rules of GAD-PVI w.r.t. the KW-SHIFR and S-SHIFR flows can be found in Appendix B.1 and B.2. Notice that, compared to the classical ParVIs, the position acceleration scheme and the dynamic-weight scheme bring only little extra computational cost, because the number of time-complexity-bottleneck operations, i.e., calculations of U_μ̃ and ∇U_μ̃, remains the same.

An alternative weight adjusting approach. Besides the Continuous Adjusting (CA) strategy, the Duplicate/Kill (DK) strategy, which is a probabilistic discretization of the Fisher-Rao part of (<ref>), can also be adopted in GAD-PVI.
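For concreteness, here is a minimal NumPy sketch (not from the paper; all function names are ours) of one WGAD iteration with the CA weight rule and BLOB smoothing, plus a resampling helper for the DK strategy just mentioned (its rule is detailed in the next paragraph). We assume logp and grad_logp are vectorized callables for logπ and ∇logπ, and the weight rule includes the multiplicative w factor of the ode, which keeps the total mass at 1 exactly:

import numpy as np

def blob_U_gradU(X, w, logp, grad_logp, h):
    # Smoothed first variation U and its gradient at the particles (BLOB).
    diff = X[:, None, :] - X[None, :, :]
    K = np.exp(-(diff ** 2).sum(-1) / h)             # K(x_i, x_j), RBF kernel
    gK = -2.0 / h * diff * K[..., None]              # grad_{x_i} K(x_i, x_j)
    s = K @ w                                        # sum_j w_j K(x_i, x_j)
    U = -logp(X) + np.log(s) + K @ (w / s)
    gU = (-grad_logp(X)
          + np.einsum('ijd,j->id', gK, w) / s[:, None]
          + np.einsum('ijd,j->id', gK, w / s))
    return U, gU

def wgad_ca_blob_step(X, V, w, logp, grad_logp, h, eta_pos, eta_vel, eta_wei, gamma):
    # One Jacobi-type iteration: all quantities are evaluated at step k.
    U, gU = blob_U_gradU(X, w, logp, grad_logp, h)
    X_new = X + eta_pos * V                          # accelerated position update
    V_new = (1.0 - gamma * eta_vel) * V - eta_vel * gU
    w_new = w - eta_wei * (U - w @ U) * w            # CA rule; conserves total mass
    return X_new, V_new, w_new

def dk_resample(X, V, U, w, eta_wei, rng):
    # One simple realization of the DK rule: weights stay uniform;
    # particles are copied or replaced according to the clock rates.
    M = len(X)
    R = -eta_wei * (U - w @ U)
    for i in range(M):
        if R[i] > 0 and rng.random() < 1.0 - np.exp(-R[i]):
            j = rng.integers(M)                      # kill a uniform particle,
            X[j], V[j] = X[i], V[i]                  # duplicate particle i
        elif R[i] < 0 and rng.random() < 1.0 - np.exp(R[i]):
            j = rng.integers(M)                      # kill particle i,
            X[i], V[i] = X[j], V[j]                  # duplicate a uniform one
    return X, V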
This strategy duplicates/kills the particle x^i_{k+1} according to an exponential clock with instantaneous rate

R^i_{k+1} = -η_wei (δℱ(μ̃_k)/δμ(x_k^i) - ∑_{j=1}^M w_k^j δℱ(μ̃_k)/δμ(x_k^j)).

Specifically, if R^i_{k+1} > 0, duplicate the particle x^i_{k+1} with probability 1 - exp(-R^i_{k+1}), and kill another one with uniform probability to conserve the total mass; if R^i_{k+1} < 0, kill the particle x^i_{k+1} with probability 1 - exp(R^i_{k+1}), and duplicate another one with uniform probability. By replacing the CA strategy (<ref>) with this rule in the GAD-PVI framework, we obtain the DK variants of the GAD-PVI methods.

GAD-PVI instances. With different underlying information metric tensors (W-metric, KW-metric and S-metric), weight adjustment approaches (CA and DK) and dissimilarity functionals/associated smoothing approaches (KL-BLOB, KL-GFSD and KSD-KSDD), we can derive 18 different instances of GAD-PVI, named WGAD/KWGAD/SGAD-CA/DK-BLOB/GFSD/KSDD.

§ EXPERIMENTS

In this section, we conduct empirical studies with our GAD-PVI algorithms. Here, we focus on the instances of GAD-PVI w.r.t. the W-SHIFR flows, i.e., WGAD-CA/DK-BLOB/GFSD. The experimental results on methods w.r.t. the KW-SHIFR and S-SHIFR flows are provided in Appendix C <ref>. Note that we do not include GAD-PVI methods with the KSDD smoothing approach, as it is more computationally expensive and has been widely reported to be less stable <cit.>. We include four classes of methods as our baselines: classical ParVI algorithms (SVGD, GFSD and BLOB), the Nesterov accelerated ParVI algorithms (WNES-BLOB/GFSD), the Hamiltonian accelerated ParVI algorithms (WAIG-BLOB/GFSD) and the dynamic-weight ParVI algorithms (DPVI-CA/DK-BLOB/GFSD). We compare the performance of these algorithms on two simulations, i.e., a 10-D single-mode Gaussian model (SG) and a Gaussian mixture model (GMM), and two real-world applications, i.e., Gaussian Process (GP) regression and Bayesian neural network (BNN). For all the algorithms, the particles' weights are initialized to be equal. In the first three experiments, we tune the parameters to achieve the best W_2 distance. In the BNN task, we split 1/5 of the training set as our validation set to tune the parameters. Note that the position step-sizes are tuned via grid search for the fixed-weight ParVI algorithms, then used in the corresponding dynamic-weight algorithms. The acceleration parameters and weight adjustment parameters are tuned via grid search for each specific algorithm. We repeat all the experiments 10 times and report the average results. Due to limited space, only parts of the results are reported in this section. We refer readers to Appendix C <ref> for the results on SG and additional results for GMM, GP and BNN.

§.§ Gaussian Mixture Model

We consider approximating a 10-D Gaussian mixture model with two components, weighted by 1/3 and 2/3 respectively. We run all algorithms with particle numbers M ∈ {32, 64, 128, 256, 512}. In Figure <ref>, we report the 2-Wasserstein (W_2) distance between the empirical distribution generated by each algorithm and the target distribution w.r.t. iterations of different ParVI methods. We generate 5,000 samples from the target distribution π as reference to evaluate the W_2 distance by using the POT library [http://jmlr.org/papers/v22/20-451.html]. The results demonstrate that our GAD-PVI algorithms consistently outperform their counterparts with only one (or none) of the accelerated position update strategy and the dynamic weight adjustment approach.
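For reference, the W_2 evaluation between a weighted particle system and the reference samples can be computed directly with POT; a minimal sketch (the function name is ours):

import numpy as np
import ot  # POT: Python Optimal Transport

def w2_distance(X, w, Y):
    # 2-Wasserstein distance between the weighted particles (X, w)
    # and uniformly weighted reference samples Y from the target.
    C = ot.dist(X, Y)                      # squared Euclidean cost matrix
    b = np.full(len(Y), 1.0 / len(Y))
    return np.sqrt(ot.emd2(w, b, C))       # emd2 returns the optimal cost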
Besides, the CA weight-adjustment approach usually results in a lower W_2 compared to the DK scheme, and WGAD-CA-BLOB/GFSD usually have the fastest convergence and the lowest final W_2 distance to the target.

§.§ Gaussian Process Regression

The Gaussian Process (GP) model is widely adopted for uncertainty quantification in regression problems <cit.>. We follow the experiment setting in <cit.>, and use the dataset LIDAR, which consists of 221 observations. In this task, we set the particle number to M = 128 for all the algorithms. We report the W_2 distance between the empirical distribution after 10000 iterations and the target distribution in Table <ref>. The target distribution is approximated by 10000 reference particles generated by the HMC method after it achieves its equilibrium <cit.>. It can be observed that both the accelerated position update and the dynamic weight adjustment result in a decreased W_2, and GAD-PVI algorithms consistently achieve the lowest W_2 to the target. Besides, the results also show that the CA variants usually outperform their DK counterparts, as CA is able to adjust the weights continuously on [0,1] while DK sets each weight either to 0 or 1/M. In Figure <ref>, we plot the contour lines of the log posterior and the particles generated by four representative algorithms, namely BLOB, WAIG-BLOB, DPVI-CA-BLOB, and WGAD-CA-BLOB, at different iterations (0, 100, 500, 2000, 10000). The results indicate that the particles in WAIG-BLOB and WGAD-CA-BLOB exhibit a faster convergence to the high probability area of the target due to their accelerated position updating strategy, and the DPVI-CA and WGAD-CA algorithms finally offer a broader coverage, as the CA dynamic weight adjustment strategy enables the particles to represent a region with arbitrary local density mass instead of a fixed 1/M mass.

§.§ Bayesian Neural Network

In this experiment, we study a Bayesian regression task with a Bayesian neural network on 4 datasets from UCI and LIBSVM. We follow the experiment setting from <cit.>, which models the output as a Gaussian distribution and uses a Gamma(1, 0.1) prior for the inverse covariance. We use a one-hidden-layer neural network with 50 hidden units and maintain 128 particles. For all the datasets, we set the batch size to 128. We present the Root Mean Squared Error (RMSE) of various ParVI algorithms in Table <ref>. The results demonstrate that the combination of the accelerated position updating strategy and the dynamic weight adjustment leads to a lower RMSE. Notably, WGAD-CA type algorithms outperform other methods in the majority of cases.

§ CONCLUSION

In this paper, we propose the General Accelerated Dynamic-Weight Particle-based Variational Inference (GAD-PVI) framework, which adopts an accelerated position update scheme and a dynamic weight adjustment approach simultaneously. Our GAD-PVI framework is developed by discretizing the Semi-Hamiltonian Information Fisher-Rao (SHIFR) flow on the novel Information-Fisher-Rao space. The theoretical analysis demonstrates that the SHIFR flow yields an additional decrease on the local functional dissipation compared to the Hamiltonian flow in the vanilla information space. We propose effective particle systems which evolve the position, weight, and velocity of particles via a set of odes for the SHIFR flows with different underlying information metrics. By directly discretizing the proposed particle systems, we obtain our GAD-PVI framework.
Several effective instances of the GAD-PVI framework have been provided by employing three distinct dissimilarity functionals and associated smoothing approaches under the Wasserstein/Kalman-Wasserstein/Stein metric. Empirical studies demonstrate the faster convergence and reduced approximation error of GAD-PVI methods over the SOTAs.

§ ACKNOWLEDGMENTS

This work is supported by the National Key Research and Development Program of China under Grant 2020AAA0107400 and the National Natural Science Foundation of China (Grant No: 62206248).

§ APPENDIX A

§.§ A.1 Definition of Information Metric in Probability Space

To make the general information gradient flow on probability space well-defined, we briefly review the definition of information metrics in probability space <cit.>:

(Information metric in probability space). Denote the tangent space at μ ∈ 𝒫(ℝ^n) by T_μ𝒫(ℝ^n) = {σ ∈ C(ℝ^n) : ∫σ dx = 0}. The cotangent space at μ, denoted as T_μ^*𝒫(ℝ^n), can be treated as the quotient space C(ℝ^n)/ℝ. An information metric tensor G(μ)[·]: T_μ𝒫(ℝ^n) → T_μ^*𝒫(ℝ^n) is an invertible mapping from T_μ𝒫(ℝ^n) to T_μ^*𝒫(ℝ^n). This information metric tensor defines the information metric (as well as an inner product) on the tangent space of 𝒫(ℝ^n): for σ_1, σ_2 ∈ T_μ𝒫(ℝ^n) and Φ_i = G(μ)[σ_i], i = 1,2,

g_μ(σ_1,σ_2) = ∫ σ_1 G(μ)[σ_2] dx = ∫ Φ_1 G(μ)^-1[Φ_2] dx.

We then denote the general information probability space as (𝒫(ℝ^n), G(μ)). As long as a metric is specified, the probability space 𝒫(ℝ^n) together with the metric can be taken as an infinite dimensional Riemannian manifold, the so-called density manifold <cit.>, which enables the definition of gradient flows.

§.§ A.2 Full Hamiltonian Flow on the IFR Space and the Fisher-Rao Kinetic Energy

To develop ParVI methods which possess both accelerated position update and dynamical weight adjustment simultaneously, a natural choice is to directly simulate the Hamiltonian flow on the augmented IFR space. By substituting the IFR metric (<ref>) into the general Hamiltonian flow (<ref>), we derive the full Hamiltonian flow on the IFR space as the direct accelerated probabilistic flow on the IFR space, with the form

∂_t μ_t = G(μ_t)^-1[Φ_t] + (Φ_t - ∫Φ_t dμ_t)μ_t,
∂_t Φ_t + γ_t Φ_t + 1/2 δ/δμ(∫ Φ_t G(μ_t)^-1[Φ_t] dx) [information kinetic energy] + 1/2 Φ_t^2 - Φ_t ∫Φ_t dμ_t [Fisher-Rao kinetic energy] + δℱ(μ_t)/δμ = 0,

where Φ_t is the velocity field; δℱ(μ_t)/δμ represents the potential energy dissipation; 1/2 δ/δμ(∫ Φ_t G(μ_t)^-1[Φ_t] dx) represents the kinetic energy dissipation of the information transport; and 1/2 Φ_t^2 - Φ_t ∫Φ_t dμ_t represents the kinetic energy dissipation of the Fisher-Rao distortion. As far as we know, the particle formulation of the full Hamiltonian flow on the IFR space (<ref>) is intractable due to the Fisher-Rao kinetic energy term. When deriving particle systems, ∇Φ_t can be straightforwardly approximated by the velocities v^i_t, but the Hamiltonian field Φ_t itself is hard to approximate by finite points and update iteratively. Actually, even the particle formulation of the accelerated Fisher-Rao flow has not been derived, due to great difficulty <cit.>. Therefore, we ignore the influence of the Fisher-Rao kinetic energy on the Hamiltonian field and derive the SHIFR flow (<ref>). We point out that the Fisher-Rao kinetic energy would vanish as the flow converges to the equilibrium (μ_∞ = π, Φ_∞ = 0), which fits the behavior of kinetic energy in a physical dynamical system.
Therefore, neglecting the Fisher-Rao kinetic energy is tenable, and the SHIFR flow still has the target distribution as its stationary distribution, as will be shown in Proposition <ref>.

§.§ A.3 Proof of Proposition <ref>

First we introduce a technical lemma for the proof of Proposition <ref>. The following probability flow formulation and particle system formulation are equivalent:

∂_t μ_t + ∇·(μ_t∇Φ_t) = 0,
∂_t Φ_t + γ_t Φ_t + 1/2 ‖∇Φ_t‖^2 + δℱ(μ_t)/δμ = 0;

d/dt X_t = V_t,
d/dt V_t = -γ_t V_t - ∇(δℱ(μ_t)/δμ)(X_t).

We start with the calculation of the gradient of the kinetic term. For a twice differentiable Φ(x), we have:

1/2 ∇‖∇Φ‖^2 = ∇^2Φ ∇Φ = (∇Φ·∇)∇Φ.

From (<ref>), we have ∂_t μ_t + ∇·(μ_t∇Φ_t) = 0, which is the continuity equation of μ_t under the vector field ∇Φ_t <cit.>. Hence, we have the following equation on the particle level (denoting V_t = ∇Φ_t(X_t)):

d/dt X_t = ∇Φ_t(X_t) = V_t.

Then, the vector field evolves as:

d/dt V_t = d/dt ∇Φ_t(X_t)
(1)= (∂_t + ∇Φ_t(X_t)·∇)∇Φ_t(X_t)
(2)= -γ_t ∇Φ_t(X_t) - 1/2 ∇‖∇Φ_t(X_t)‖^2 - ∇(δℱ(μ_t)/δμ)(X_t) + (∇Φ_t(X_t)·∇)∇Φ_t(X_t)
(3)= -γ_t ∇Φ_t(X_t) - ∇(δℱ(μ_t)/δμ)(X_t)
(4)= -γ_t V_t - ∇(δℱ(μ_t)/δμ)(X_t),

where equality (1) follows from the material derivative in fluid dynamics <cit.>, equality (2) comes from the PDE of Φ_t in (<ref>), equality (3) comes from cancelling terms via (<ref>), and equality (4) comes from the definition of V_t. Now we are ready to give the proof of Proposition <ref>.
μ^M weakly converges to μ) a.s.,and δℱ(μ^M)/δμ⇀δℱ(μ)/δμ a.s., we can deduce thatV_μ^M⇀∇Φ in light of Lemma <ref>. Those allow to conclude thatℒ^Wei_M Ψ[μ_M] →ℒ^WeiΨ[μ] and ℒ^Pos_M Ψ[μ_M] →ℒ^PosΨ[μ],thus ℒ_M Ψ[μ_M] →ℒΨ[μ]. Since ∂_t Ψ(μ_t^M) = ℒ_M Ψ[μ^M] and ∂_t Ψ(μ_t) = ℒ_M Ψ[μ_t], we have lim_n→∞Ψ(μ_t^M) = Ψ (μ_t), which indicates that μ_t^M ⇀μ_t if μ_0^M ⇀μ_0. Since μ_t satisfying ∂_t Ψ(μ_t) = ℒΨ[μ_t]solves the partial differential equation (<ref>), we conclude that the pathof (<ref>) starting from ^M_0 weakly converges to a solution of the partial differential equation (<ref>)starting from μ_0 as M→∞. §.§ A.4 The Extra Decrease of Functional Dissipation of the SHIFR Flow (<ref>)Here, we investigate the extra decrease property in terms of functional dissipation of the SHIFR gradient flow (<ref>)comparing to the Hamiltonian flow (<ref>) in vanilla information space in the following proposition. Here we only illustrate the Wasserstein case for readers can easily check for general information space case in the same routine. For arbitrary μ∈𝒫(^n) and Φ∈ C(^n), The local dissipation of functional dℱ(μ_t)/dt following the SHIFR gradient flow (<ref>) starting from (μ,Φ) has an additional functional dissipation term comparing to the ones following the Hamiltonian flow in non-augmented space (<ref>).Take the Wasserstein case as an example. Denote the probability path starting from (μ,Φ)following W-SHIFR flow as (μ^ SHIFR_t,Φ^ SHIFR_t), following Hamiltonian flow in vanilla space as (μ^H_t,Φ^H_t). We have:.dℱ(μ^ SHIFR_t)/dt|_t=0.≤dℱ(μ^H_t)/dt|_t=0 For W-SHIFR case, according to the result in (<ref>), the following eqaulity holds for any functional Ψ:𝒫_2(ℝ^d)→ℝ on the probability space 𝒫_2(ℝ^d), where, ∂_t Ψ[μ^ SHIFR_t]=∫⟨∇Φ(),∇_δΨ(μ^ SHIFR_t)/δμ()⟩μ^ SHIFR_t()d-∫δΨ(μ^ SHIFR_t)/δμ() ( δℱ(μ^ SHIFR_t)/δμ()- 𝔼_μ [δℱ(μ^ SHIFR_t)/δμ()])μ^ SHIFR_t() ,in which .μ_t|_t=0=μ and Φ_t abides{ ∂_tΦ_t+γ_tΦ_t +1/2‖∇Φ_t‖ ^2 +δℱ(μ_t)/δμ=0.Φ_t|_t=0=Φ. By substituting Ψ(μ^ SHIFR_t) = (μ^ SHIFR_t), U_μ^ SHIFR_t = δ(μ^ SHIFR_t)/δμ and t=0 in the above equality, we have: .d(μ^ SHIFR_t)/dt|_t=0 = - ∫⟨∇_δ(μ)/δμ(),∇Φ()⟩μ()d- ∫δ(μ)/δμ() (U_μ() - 𝔼_μ [U_μ()])μ() =- ∫⟨∇_δ(μ)/δμ(),∇Φ()⟩μ()d-(∫(δℱ(μ)/δμ())^2μ()d - (∫δℱ(μ)/δμ()μ()d)^2). Similarly, we can get following result for the Hamiltonian flow in the non-augmented Wasserstein space case:.d(μ^H_t)/dt|_t=0 =- ∫⟨∇_δ(μ)/δμ(),∇Φ()⟩μ()d . Since the second term of (<ref>) is always less or equal to zero and the fisrt term of (<ref>) is the same as the first term of (<ref>), we can reach to the conclusion that the local dissipation of SHIFR flow has an additional functional dissipation term compared to the Hamiltonian flow in the non-augmented space:.dℱ(μ^ SHIFR_t)/dt|_t=0.≤dℱ(μ^H_t)/dt|_t=0For general information space, readers can follow the same routine to get the extra functional dissipation property.§.§ A.5 The stationary analysis of the SHIFR Flow (<ref>)Following proposition establish the stationary property of theSHIFR Flow (<ref>) with dissimilarity functional 𝒟(·|π) which vanishes at μ=π. The target distribution and zero-velocity (μ_∞=π,Φ_∞=0) (0 means a funtion defined on ^n that always map to zero) is the stationary distribution of theSHIFR flow (<ref>) with dissimilarity functional𝒟(·|π) which satisfy 𝒟(π|π)=0 with any Information metric tensor G^I(μ)[·].TheSHIFR flow under an Information metric writes: { ∂_tμ_t=G^I(μ_t)^-1[ Φ_t]-(δℱ(μ_t)/δμ- ∫δℱ(μ_t)/δμdμ_t)μ_t ,∂_tΦ_t+γ_tΦ_t+1/2δ/δμ(∫Φ_tG^I(μ_t)^-1[Φ_t] dx)+δℱ(μ_t) /δμ=0. 
.Because the functional ℱ(·) is specified as some dissimilarity functional 𝒟(·|π), we have:δℱ(μ_∞) /δμ =δℱ(π) /δμ =δ𝒟(π,π) /δμ =0. From the element of gradient flow, we also have:G^I(μ)^-1[ Φ_∞] =G^I(μ)^-1[ 0] =0.Substituting into (<ref>) that (μ_∞=π,Φ_∞=0), we can get :.∂_tΦ_t|_t=∞= -γ_∞Φ_∞-1/2δ/δμ(∫Φ_∞ G^I(μ_t)^-1[Φ_∞] dx) -δℱ(μ_∞) /δμ=0,.∂_tμ_t|_t=∞=G^I(μ_∞)^-1[ Φ_∞]-(δℱ(μ_∞)/δμ- ∫δℱ(μ_∞)/δμdμ_∞)μ_∞=0. These suffice for proof. § APPENDIX B §.§ B.1 Kalman-Wasserstein-SHIFR Flow and KWGAD-PVI AlgorithmsCombining the Kalman filter to estimate the probability distributions of a dynamic system over time and the Wasserstein metric to measure the difference between these estimated distributions, the Kalman-Wasserstein metric is proposed in ensemble Kalman sampling literature <cit.>. The inverse of Kalman-Wasserstein metric tensor write: G^KW(μ)^-1[Φ] = -∇· (μ C^λ(μ)∇Φ),Φ∈ T_μ^* 𝒫(^n),where Φ∈ T^*_μ𝒫(^n), substituting into (<ref>), the gradient flow of Kalman-Wasserstein metric writes:∂_t μ_t = G^KW(μ_t)^-1[δℱ(μ_t) /δμ] =-∇· (μ_t C^λ(μ_t) ∇δℱ(μ_t) /δμ) .where λ≥ 0 is the regularization constant and C^λ(μ) is the linear transformation follows:C^λ(μ) = ∫ (x-m(μ))(x-m(μ))^Tμ dx +λ I,m(μ) = ∫ xμ dx.Substituting the Kalman-Wasserstein metric into theSHIFR flow (<ref>) gives the Kalman-Wasserstein-SHIFR flow:{ ∂_t μ_t =-∇·(μ_t C^λ(μ_t) ∇Φ_t)-(δℱ(μ_t)/δμ- ∫δℱ(μ_t)/δμdμ_t)μ_t,∂_tΦ_t + γ_tΦ_t +1/2((x-m(μ_t))^T ∫∇Φ_t∇Φ_t^Tdμ_t (x-m(μ_t)) +∇Φ_t^TC^λ(μ_t)∇Φ_t ) +δℱ(μ_t)/δμ=0 . .We claim that the finite-particles formulation of the Kalman-Wasserstein-SHIFR flow (<ref>) evolves the positions x^i's, theweights w^i's of M particles and velocity fieldfollowing:{d^i_t=C^λ(μ̃_t) ^i_tdt,d^i_t= (-γ^i_t-[_t_t^T](^i-[] ) - ∇δℱ(μ̃_t)/δμ(^i_t) )dt,d w^i_t= -(δℱ(μ̃_t)/δμ(^i_t)- ∑_i=1^M w^i_t δℱ(μ̃_t)/δμ(^i_t))w^i_tdt, μ̃_t= ∑_i=1^M w^i_tδ_^i_t. .Here the expectation is taken over the empirical distribution of particles.Then, the proposition below show the mean-field limit of the finite-particles formulation (<ref>) is exactly the Kalman-Wasserstein-SHIFR flow (<ref>).Suppose the empirical distribution ^M_0 of M weighted particles weakly converges to adistribution μ_0 when M →∞.Then, the path of (<ref>) starting from ^M_0 and Φ_0 with initial velocity 0 weakly converges to a solution of the Kalman-Wasserstein-SHIFR gradient flow (<ref>) starting from μ_t|_t=0 = μ_0 = and Φ_t|_t=0 = 0 as M→∞:Similar to the proof scheme of proposition <ref>, we start from proofing a technical lemma first:The following fluid dynamic formulation and particle dynamic formulation is equivalent:(Suppose that X_t∼μ_t and V_t=∇Φ_t(X_t), expectation is taken over particles) { ∂_t μ_t +∇·(μ_t C^λ(μ_t) ∇Φ_t) = 0 ,∂_tΦ_t + γ_tΦ_t +1/2((x-m(μ_t))^T ∫∇Φ_t∇Φ_t^Tdμ_t (x-m(μ_t)) +∇Φ_t^TC^λ(μ_t)∇Φ_t ) +δℱ(μ_t)/δμ=0 . .{ d/dtX_t=C^λ(μ_t) V_t,d/dtV_t=-γ_tV_t -[V_tV_t^T](X_t-[X_t]) -∇(δℱ(μ_t)/δμ)(X_t) . .First we establish two equations, For i=1… n, we have:(C^λ(μ_t)∇Φ_t·∇)∇_iΦ_t(X_t)=∑_j = 1^n(C^λ(μ_t)∇Φ_t)_j∇_j∇_iΦ_t(X_t)=∑_j = 1^n∇_ijΦ_t(X_t)(C^λ(μ_t)∇Φ_t)_j=(∇^2Φ_tC^λ(μ_t)∇Φ_t)_i.Them, according to chain rule, we have:∇(∇Φ_t(x)^TC^λ(μ_t)∇Φ_t(x)) =2∇^2Φ_t(x)C^λ(μ_t)∇Φ_t(x).Since the first equation of (<ref>) is actually the continuity equation with velocityfield C^λ(μ_t) ∇Φ_t, it is obvious to have d/dtX_t=C^λ(μ_t) V_t. 
Then we deduce the second equation of (<ref>):d/dtV_t =d/dt∇Φ_t(X_t)(1)=(∂_t+C^λ(μ_t)∇Φ_t·∇)∇Φ_t(X_t)(2)=∂_t∇Φ_t+ ∇^2Φ_tC^λ(μ_t)∇Φ_t (3)=-γ_t∇Φ_t -∫∇Φ_t∇Φ_t^Tdμ_t (x-m(μ_t)) -1/2∇(∇Φ_t(x)^TC^λ(μ_t)∇Φ_t(x)) -∇(δℱ(μ_t)/δμ)(X_t) +∇^2Φ_tC^λ(μ_t)∇Φ_t (4)=-γ_t∇Φ_t -∫∇Φ_t∇Φ_t^Tdμ_t (x-m(μ_t)) -∇(δℱ(μ_t)/δμ)(X_t) (5)=-γ_tV_t -[V_tV_t^T](X_t-[X_t]) -∇(δℱ(μ_t)/δμ)(X_t).where equation (1) becomes valid from material derivative in fluid dynamic <cit.>, equation (2) comes from the equation (<ref>), equation (3) comes from the PDE (<ref>), equation (4) comes from cancelling terms on each side of (<ref>), equation (5) comes from the definition of V_t and X_t. (Proof of Proposition <ref>) Substituting Lemma <ref> by Lemma <ref>, the proof scheme of proposition <ref> is the same with the proof scheme of proposition <ref>. By discretizing (<ref>), we derive the KWGAD-PVI algorithms which update the positions of particles according to the following rule: ^i_k+1 = ^i_k + η_pos[C^λ_k_k]^i,and adjusts the velocity field as following:_k+1^i=(1-γη_vel)_k^i -η_vel/M[∑_j=1^Nw_k^j(V_k^j)(V_k^j)^T](_k^i-m_k) -η_vel∇ U_μ̃_k(_k).Here, C^λ_k and m_k are calculated at each round by:m_k= 1/N∑_i=1^Mw_k^i x^i_k, C^λ_k=1/N-1∑_i=1^M w_k^i(_k^i-m_k) (_k^i-m_k)^T+λ I.§.§ B.2 Stein-SHIFR Flow and SGAD-PVI AlgorithmsInvolving reproducing kernel Hilbert space norm into probability space, the Stein metric is proposed for geometrical analysis <cit.>. The gradient flow of Stein metric writes:∂_t μ_t = G^S(μ_t)^-1δℱ(μ_t) /δμ =-∇· (μ_t∫ k(·,y)μ_t(y)∇_yδ(μ_t)/δμ(y)dy ).Substituting the Stein metric into theSHIFR flow (<ref>) gives the Stein-SHIFR flow:{ ∂_t μ_t =-∇· (μ_t∫ k(·,y)μ_t(y)∇_yΦ_t(y)dy ) -(δℱ(μ_t)/δμ- ∫δℱ(μ_t)/δμdμ_t)μ_t,∂_tΦ_t + γ_tΦ_t +∫∇Φ_t(·)^T∇Φ_t(y)k(·,y)μ_t(y)dy +δℱ(μ_t)/δμ=0 . .We claim that the finite-particles formulation of the Stein-SHIFR flow (<ref>) evolves the positions x^i's, theweights w^i's of M particles and velocity fieldfollowing:{d^i_t=[∫ k(_t,y)∇Φ_t(y)μ̃_t(y) dy]^idt,d^i_t= (-γ^i_t-[∫_t^T∇Φ_t(y)∇_xk(_t,y)μ̃_t(y)dy]^i - ∇δℱ(μ̃_t)/δμ(^i_t) )dt,d w^i_t= -(δℱ(μ̃_t)/δμ(^i_t)- ∑_i=1^M w^i_t δℱ(μ̃_t)/δμ(^i_t))w^i_tdt, μ̃_t= ∑_i=1^M w^i_tδ_^i_t. . Then, the proposition below show the mean-field limit of the finite-particles formulation (<ref>) is exactly the Stein-SHIFR flow (<ref>).Suppose the empirical distribution ^M_0 of M weighted particles weakly converges to adistribution μ_0 when M →∞.Then, the path of (<ref>) starting from ^M_0 and Φ_0 with initial velocity 0 weakly converges to a solution of the Stein-SHIFR gradient flow (<ref>) starting from μ_t|_t=0 = μ_0 = and Φ_t|_t=0 = 0 as M→∞: Similarly, fist proof a technical lemma:The following fluid dynamic formulation and particle dynamic formulation is equivalent:(Suppose that X_t∼μ_t and V_t=∇Φ_t(X_t)) { ∂_t μ_t +∇· (μ_t∫ k(·,y)μ_t(y)∇_yΦ_t(y)dy )=0, ∂_tΦ_t + γ_tΦ_t +∫∇Φ_t(·)^T∇Φ_t(y)k(·,y)μ_t(y)dy +δℱ(μ_t)/δμ=0 . .{ d/dtX_t = ∫ k(X_t,y)∇Φ_t(y)μ_t(y)dy, d/dtV_t=-γ_tV_t -∫ V_t^T∇Φ_t(y)∇_xk(X_t,y)μ_t(y)dy -∇(δℱ(μ_t)/δμ)(X_t) . .First we notice the following equation:∇ (∫∇Φ(x)^T∇Φ(y)k(x,y)μ_t(y)dy)=∇^2Φ(x)∫∇Φ(y)k(x,y)μ(y)dy +∫∇Φ (x)^T∇Φ (y)∇_xk(x,y)μ(y)dy.Since the first equation of (<ref>) is actually the continuity equation with velocityfield ∫ k(·,y)μ_t(y)∇_yΦ_t(y)dy, it is obvious to have d/dtX_t = ∫ k(X_t,y)∇Φ_t(y)μ_t(y)dy. 
Then we deduce the second equation of (<ref>):d/dtV_t =d/dt∇Φ_t(X_t)(1)=∂_t∇Φ_t(X_t) +∇^2Φ_t(X_t)(∫ k(X_t,y)∇Φ_t(y)μ_t(y)dy) (2)= -γ_tV_t -∇ (∫∇Φ(x)^T∇Φ(y)k(x,y)μ_t(y)dy) -∇(δℱ(μ_t)/δμ)(X_t) +∇^2Φ_t(X_t)(∫ k(X_t,y)∇Φ_t(y)μ_t(y)dy) (3)= -γ_tV_t -∫∇Φ (x)^T∇Φ (y)∇_xk(x,y)μ(y)dy -∇(δℱ(μ_t)/δμ)(X_t) (4)=-γ_tV_t -∫ V_t^T∇Φ_t(y)∇_xk(X_t,y)μ_t(y)dy -∇(δℱ(μ_t)/δμ)(X_t) .where equation (1) becomes valid from material derivative in fluid dynamic <cit.>, equation (2) comes from the PDE (<ref>), equation (3) comes from leveraging (<ref>), equation (4) comes from the definition of V_t.(Proof of Proposition <ref>) Substituting Lemma <ref> by Lemma <ref>, the proof scheme of proposition <ref> is the same with the proof scheme of proposition <ref>. By discretizing (<ref>) Stein-GAD-PVI algorithmupdates the positions of particles according to the following rule: ^i_k+1 = ^i_k +η_pos/M∑_j=1^MK(^i_k,^j_k) _k^i,and adjusts the velocity field as following:_k+1^i=(1-γη_vel)_k^i -η_vel/M∑_j=1^M(_k^i)^T_k^j∇_1K(^i_k,^j_k) -η_vel∇ U_μ̃_k(_k).§.§ B.3 GAD-PVI Algorithms in details various Dissimilarity Functionals and Smoothing Approaches To develop practical GAD-PVI methods, we must first select a dissimilarity functional .Once a dissimilarity functionalhas been chosen,we need to select a smoothing approach to approximatethe first variation of the empirical approximation,as the value of δ(·)/μ at anempirical distribution μ̃ = ∑_i=1^M w^i δ_^i is generally not well-defined.Smoothing strategies allow us to approximate the first variation value at the discrete empirical distribution. Generally, a smoothed approximation to the first total variation is denoted as U_μ̃(·) δℱ(μ̃)/δμ(·). The commonly BLOB (with KL-divergence as ) <cit.> has been introduced in (<ref>), now we give detailed formulations of GFSD (with KL-divergence as ) <cit.> and KSDD (with Kernel Stein Discrepancy as ) <cit.>, which are all compatible with our GAD-PVI framework.KL-GFSD In order to deal with the intractable logμ() of the first variation of the KL divergence, GFSD directly approximate μ by smoothing the empirical distribution μ̃ with a kernel function K:μ̂ = μ̃*K= ∑_i=1^Mw^iK(·, ^i), which leads to the following approximations: U_μ̃_k ()= - logπ() + log∑_i=1^M w^i_k K(,^i_k), ∇ U_μ̃_k () = -∇logπ() + ∑_i=1^M w^i_k∇_K(, ^i_k)/∑_i=1^M w^i_k K(,^i_k). In the above approximations, we call the terms defined through the interaction with other particles as the repulsive terms. It can be observed that the BLOB-type approximations (<ref>) have an extra repulsive term (the term in the second line) compared to the GFSD-type approximations (<ref>) and (<ref>). Practically, this extra repulsive term would drive the particles away from each other further, and result in a better exploration of particles in the probability space. Actually, the BLOB-type methods usually outperforms the GFSD-type methods empirically.Since GFSD and BLOB (partly) smooth the original empirical distribution μ̃ with a kernel function K, the underlying evolutionarydistribution is actually a smoothed version of μ̃.To update the positions and the weights in the smoothed empirical distribution, one should solve a system of linearequations to obtain the new positions ^i_k+1's and weights w^i_k+1's in the k-th iteration. Nevertheless, with a proper kernel function K, such as the RBF kernel, the density μ(^i) at a given position ^i mainly comes from its corresponding weight w^i. 
Actually, as the RBF kernel e^-hx- x_i^2 approaches 0 when x becomes far from x_i and equals 1 when x = x_i,it can be observed that the density at x_i mainly from w^i. Hence, we can still update the positions and weights in a splitting scheme respectively. This approximation performs well in practice.KSD-KSDD Except for the KL-divergence, KSD is recently adopted as the dissimilarity functional in the non-accelerated fixed-weight ParVI method KSDD <cit.>,whose first variation and the corresponding vector field are defined as[δℱ(μ)/δμ() =𝔼_'∼μ[k_π(',)],; ∇δℱ(μ)/δμ() =𝔼_'∼μ[∇_k_π(',)]. ]Here, k_π denotes the Stein kernel <cit.>, and it is defined by the score of π: s() = ∇logπ() and a positivesemi-definite kernel function K:k_π(,) = s()^Ts()K(,) + s()^T∇_ K(, ) + ∇_ K(,)^T s() +∇_·∇_ K(,). The the first variation and its gradient in (<ref>) can be directly approximated via the empirical distribution μ̃. We construct the following finite-particle approximations: U_μ̃_k () =∑_i=1^M w^i_kk_π(^i_k, ), ∇ U_μ̃_k () = ∑_i=1^M w^i_k ∇_ k_π(^i_k, ).The Detailed GAD-PVI algorithmsAdopting different underlying information metric tensors (W-metric, KW-metic and S-metric), weight adjustment approaches(CA and DK) and dissimilarity functionals/associated smoothing approaches(KL-BLOB, KL-GFSD and KSD-KSDD), we can derive 18 different intances of GAD-PVI, named as WGAD/KWGAD/SGAD-CA/DK-BLOB/GFSD/KSDD. Here we present our General Accelerated Dynamic-weight Particle-based Variational Inference (GAD-PVI) framework, in a more detailed version of Algorithm <ref> as Algorithm <ref>.§ APPENDIX CIn this section we list the details of experiments setting, parameter tuning and additional results of our empirical studies.§.§ C.1 Experiments Settings Density of the Gaussian mixture model. The density of the Gaussian mixture model is defined as follows: π()∝2/3exp(-1/2-^2) + 1/3exp(-1/2 + ^2),where = 1.2 * 1. Density of the Gaussian Process task. We follow the experiment setting in <cit.>, and use the dataset LIDAR (denoted as 𝒟 = {(x_i,y_i)}^N_i=1) which consists of 221 observationsof scalar variable x_i and y_i. Denote = [x_1, x_2,...,x_N]^T and = [y_1, y_2, ..., y_N]^T, the target log-posterior w.r.t. the model parameterϕ = (ϕ_1, ϕ_2) is defined as follows: logp(ϕ|𝒟) = -^T ^-1_y /2-log(_y)/2-log(1+^T).Here, _y is a covariance function _y =+ 0.04𝐈 with _i,j = exp(ϕ_1)exp(-exp(ϕ_2)(x_i - x_j)^2) and 𝐈 represents the identity matrix. Training/Validation/Test dataset in Bayesian neural network. For each dataset in the Bayesian neural network task,we split it into 90% training data and 10% test data randomly,which follows the settings from <cit.>.Besides, we also randomly choose 1/5 of the training set as the validation set for parameter tuning.Initialization of particles' positions. In the Gaussian mixture model, we initialize particles according to the standard Gaussian distribution. In the Gaussian process regression task,we initialize particles with mean vector [0, -10]^T and covariance 0.09*𝐈_2× 2 for all the algorithms. As for the Bayesian neural network task, we follow the initialization convention in <cit.>.Bandwidth of Kernel Function in Different Algorithms For all the experiments,we adopt RBF kernel as the kernel function K: K(, ) = exp(--^2_2/h),where the parameter h is known as the bandwidth <cit.>.We follow the convention in <cit.> and set the parameter h = 1/M∑^M_i=1(min_j≠ i^i - ^j^2_2) forGFSD-type algorithms andBLOB-type algorithms. 
WNES and WAG <cit.> follows the accelerated gradient descend methods in the Wasserstein propability space <cit.> and derives the WNES and WAG methods, which update the particles' positions with an extra momentum. Though their methods have WNES and WAG type, we only conduct empirical studies of WNES as baseline because the authors report WNES algorithms are usually more robust and efficient than WAG type algorithms<cit.>. §.§ C.2 Parameters TuningDetailed Settings for η_pos, η_wei, η_vel and γ Here we present the parameter settings for position adjusting step-size η_pos, weight adjusting step-size η_wei, velocity field adjusting step-size η_vel, velocity damping parameter γ of different algorithms are provided in Table <ref>, <ref>, and <ref>.All the parameters are chosen by grid search. For the position adjusting step-size η_pos, we first find a suitable range by a coarse-grain grid search and then fine tune it. Note that, the position step-size are tuned via grid search for the fixed-weight ParVI algorithms, then used in the corresponding dynamic-weight algorithms. The acceleration parameters and weight adjustment parameters are tuned via grid search for each specific algorithm. As a result, it can be observed that the position adjusting step-size for any specific fixed-weight ParVI algorithm, its corresponding dynamic algorithm and the DK variant are the same in these tables. For ease of understanding, we use the rate of weight adjusting step-size η_wei divided by the position adjusting step-size η_pos to illustrate the tuning. Moreover, inspired by the effective warmup strategy in tuning hyper-parameters, we follow the settings of <cit.> and construct the weight adjusting step-size parameter schedule using the hyperbolic tangent: λtanh(2*(t/T)^5), with t being the current time step and T the total number of steps.§.§ C.3 Additional Experiments Results Results for SG In this section, we give empirical results on approximating a single-mode Gaussian distribution, whose density is defined as: π()∝exp(-1/2^TΣ^-1),where Σ_ii = 1.0 and correlation Σ_ij, i≠ j = 0.8.To investigate the influence ofnumber M in this task, we run all the algorithms with M∈{32, 64, 128, 256, 512}. All the particles are initialized from a Gaussian distribution with zero mean and covariance matrix 0.5*𝐈_10× 10.In Figure <ref> and <ref>, we plot the W_2 distance to the target of the samples generated by each algorithm w.r.t. iteration and time. We generate 5000 samples from the target distribution π as reference to evaluate W_2. As this task is a simple Single Gaussian model, the approximation error difference between the GAD-PVI algorithms and fixed-weight ones is not so obvious. When particles number gets more, the effect of the dynamic weight adjustment scheme is smaller, for such a large number of fix-weight particles is also sufficed for approximating this simple distribution. Meanwhile, the faster convergence effect of the accelerated position update strategies is quite obvious in the figures. Moreover, we can see that the GFSD-type algorithms cannot outperform the BLOB-type algorithms and SVGD, this may due to the lack of the repulsive mechanism in GFSD which lead to particle system collapse in single-mode task. This also coincide with the discussion in Appendix B.3.In Table <ref>, we further report the final W_2 distance between the empirical distribution generated by each algorithm and the target distribution. 
It can be observed that, in the Wasserstein metric case, CA strategy constantly outperform their fixed-weight counterparts with the same number of particles, the DK variants are weakened due to single-modality of this task. However, in the KW or S metric case, it can be observed that the GAD-DK algorithms outperform than others in majority of cases, this is due to the poor transportation ability of KW and S metric, a direct duplicate/kill mechanism greatly enhance the transport speed from low probability region to high probability region. We also find that the KW and Stype algorithms achieve poor approximation result comparing to WGAD algorithms in terms of the Wasserstein distance to reference points, this may because the WGAD algorithms aim to implicitly minimize the Wasserstein distance, and this is not the case for other two type.Additional Results for GMM We provide additional results for Gaussian Mixture Model experiment.In Figure <ref> and Figure <ref>, we plot the W_2 w.r.t iteration of each algorithm for all M={32,63,128,256,512}. From these figures, we can observe that, compared with the baseline ParVI algorithms, our GAD-PVI of either CA strategy or DK variants result in a better performance, i.e., less approximation error and faster convergence. Actually, as we have discussed in the methodology part, while the weight-adjustment step in GAD-PVI greatly enhances the expressiveness of particles' empirical distribution, the accelerated position update strategy also bring in faster convergence. From Figure <ref>, we find that DK variants decrease quite fast at first in the WGAD case, that is because the duplicate/kill scheme will greatly enhance the particle transport ability thus move more particles to high probability region at first comparing to moving particle step by step. In the Figure <ref>, we observe that DK variants also show better result comparing to other algorithms, this may because of the slow transportation property of KW and S type algorithms.In Table <ref>, we further report the final W_2 distance between the empirical distribution generated by each algorithm and the target distribution. It can be observed that GAD-PVI algorithms constantly achieve better approximation result than existing algorithms. Notably, for this complex multi-mode task, DK variants show their advantage as the duplicate/kill operation allows transferring particles from low-probability region to distant high-probability area (e.g. among different local modes) especially in KW/S case. However, the DK variants in Wasserstein case are not so competitive with CA algorithms, this difference could lie in that the KW/S metric space are much more influenced by the potential barrier and need DK scheme to transport particles among different local modes but Wasserstein metric are more robust from multi-modality and CA strategy suffice. Besides, the GAD-PVI algorithms with the CA strategy are usually more stable than their counterpart with DK, which may be ascribed to the fluctuations induced by the discrete weight adjustment (0 or 1/M) in DK.Additional Results for GP Here, we provide additional results of KW/S-type method for Gaussian Process Regression task in Table <ref>. The result is quite similar to the Wasserstein case, i.e., both the accelerated position update and the dynamic weight adjustment result in a decreased W_2 and GAD-PVI algorithms consistently achieve loweset W_2 to the target. 
Note that the difference between the DK-type GAD-PVI algorithms and their fixed-weight counterparts is not as obvious here, due to the fact that the single-mode nature of GP greatly weakens the advantage of DK, i.e., transferring particles from low-probability regions to distant high-probability areas (e.g., among different local modes).

Additional Results for BNN. We provide additional test Negative Log-Likelihood results for the Bayesian Neural Network experiment for all algorithms in Table <ref>, and the test RMSE results under the KW/S-type algorithms in Table <ref>. The results demonstrate that the combination of the accelerated position updating strategy and the dynamic weight adjustment leads to lower NLL and RMSE under each specific IFR space; the WGAD-PVI algorithms with CA usually achieve the best performance in the Wasserstein case, while the KW/SGAD-PVI algorithms with CA or DK are comparable to each other. Note that the position step-sizes of the GAD-PVI algorithms are set to the values tuned for their fixed-weight counterparts. If we retuned the position step-size for all GAD-PVI algorithms, they would be expected to achieve even better performance than the reported results.

Results for GAD-KSDD. The newly derived KSDD method proposed by <cit.> evolves the particle system by directly minimizing the Kernel Stein Discrepancy (KSD) of the particles w.r.t. the target distribution. KSDD is the first ParVI that introduces a dissimilarity functional whose first variation is well-defined at discrete empirical distributions, and thus results in no approximation error in the infinite-particle limit. Though theoretically appealing, the experimental performance of KSDD is not satisfying: it is more computationally expensive and has been widely reported to be less stable. Furthermore, KSDD is also reported to leave particles easily trapped at saddle points, to demand convexity of the task, and to be sensitive to parameters.

As shown in Figure <ref>, we conduct simple Wasserstein experiments with KSDD-type algorithms on the SG task to illustrate that our GAD-PVI framework is compatible with the KSD-KSDD approach. The bandwidth of KSDD is reported to require careful determination <cit.>; we follow the conventions in <cit.> and <cit.> and set the parameter h via grid search. From Figure <ref>, it can be observed that our GAD-PVI algorithms achieve the best results under different particle-number settings. This illustrates that our framework can incorporate this new smoothing approach. However, due to the limitations of KSDD itself, it is not realistic to fine-tune parameters and conduct empirical studies of KSDD ParVI algorithms on complex tasks. Additionally, even on this simple SG task, the final results of the KSDD-type algorithms are not competitive with the other methods. We therefore exclude KSDD-type experiments on GMM, GP and BNN.
Connectomes as Holographic States

Dmitry Melnikov

International Institute of Physics, Federal University of Rio Grande do Norte, Campus Universitário, Lagoa Nova, Natal-RN 59078-970, Brazil

We use the topological quantum field theory description of states in Chern-Simons theory to discuss the relation between spacetime connectivity and entanglement, exploring the paradigm entanglement=topology. We define a special class of states in Chern-Simons with properties similar to those of holographic states. While the holographic states are dual to classical geometries, these connectome states represent classical topologies, which satisfy a discrete analog of the Ryu-Takayanagi formula and characteristic inequalities for the entanglement entropy. Generic states are linear combinations of connectomes, and the theory also has nonperturbative states which are global spacetime defects formed by a large number of quantum fluctuations. The topological presentation of quantum states and the emergence of topology from entanglement may be useful for building a generalization to geometry, that is, quantum gravity. With further quantum gravity comparisons in mind, we discuss replica wormholes and conclude that similar objects exist beyond gravitational theories. The topological theory perspective suggests that the sum over all wormholes is always factorizable, even though the individual ones might not be.

§ INTRODUCTION

Chern-Simons theory is an example of topological quantum field theories (TQFT), which map topological spaces into instances of quantum mechanics <cit.>. In the case of a compact gauge group it is also an example of a fully solvable quantum field theory, in which any given correlation function can be explicitly computed <cit.>. The price paid for the solvability is the fact that the theory does not have dynamical degrees of freedom, which makes it too trivial for addressing important dynamical phenomena. Nevertheless, Chern-Simons theory is not completely trivial, and the purpose of this paper is to revisit some basic facts and techniques in order to offer a complementary perspective on states in quantum gravity and the emergence of spacetime.

The specific aspects that will be considered in this paper are related to holography <cit.>, and in particular to the properties of quantum states described by semiclassical and classical gravity. Loosely speaking, holographic duality conjectures an equivalence between quantum gravity or strings in an anti-de Sitter kind of spacetime and a conformal field theory (CFT) on the boundary of that space. In the regime in which the spacetime curvature is small, gravity becomes classical and probes the strongly coupled regime of the dual CFT. Among the questions commonly asked from the perspective of this regime are how general the quantum states described by classical gravity are, how quantum corrections to classical gravity states are computed, and what happens with the spacetime in the quantum regime, e.g. <cit.>. Here we will touch upon similar questions about states in Chern-Simons theory with a compact group <cit.> and observe properties analogous to those of holographic states. In particular, we will propose a set of states (topologies) in Chern-Simons theory that are analogs of classical gravity states, or classical geometries.
The main motivation to compare Chern-Simons states and gravity states is the fact that classical gravity in three dimensions has a formulation as a Chern-Simons theory with the noncompact group SL(2,R)×SL(2,R) <cit.>. The status of this correspondence remains unclear at the quantum level, since no consensus has been reached yet about the existence of well-defined quantum versions of either of the two theories <cit.>. Compact-group Chern-Simons looks interesting in this respect, since it may share certain properties with the noncompact version and, besides, is fully computable. Moreover, the compact-group Chern-Simons is also holographic, since it has an equivalent description in terms of states of the two-dimensional Wess-Zumino-Witten (WZW) theory on the boundary <cit.>.

The class of Chern-Simons states that will be the main focus of the paper is defined in terms of open Wilson lines connecting two-dimensional boundaries of three-dimensional spaces. In TQFT open spaces correspond to states in Hilbert spaces determined by the topology of the space's boundary. In particular, disconnected boundaries correspond to tensor products of the respective Hilbert spaces <cit.>. Given the distribution of the endpoints of the Wilson lines on the boundaries, we will be concerned with the situation which realizes the simplest topology of the connection of the endpoints in the bulk <cit.>. Equivalently, we would like to think of the states as defined by the adjacency matrix of a graph, whose nodes correspond to the disconnected components of the boundary, and edges to the Wilson lines connecting the boundaries. As in the previous work, we will refer to these states as connectomes, borrowing the terminology from similar graphs in neuroscience.

The definition of connectomes above is ambiguous, since it is not in general obvious which topology is the "simplest". For a bipartite system (with two boundary components) the simplest topology can be expressed by planar graphs. However, the planar graph definition cannot be extended to an arbitrary number of parties. It turns out that there is an unambiguous way of defining the simplest topologies in terms of the classical limit of the Chern-Simons theory. In this limit the simplest topologies appear as equivalence classes of local braiding of the Wilson lines. Consequently, we will call connectomes those quantum states that survive the classical limit. We can thus also call them classical topologies, by analogy with classical geometries.
Similar inequalities for the connectome states were discussed in <cit.>, where a number of examples were checked, all of them satisfying the inequalities (saturating them in some cases). It was then conjectured that connectomes satisfy the same inequalities as the holographic states, which implies that the former constitute a subclass of the latter.

We will then see that connectomes are "perturbative" classical topologies, since there are also nonconnectome topologies that survive the classical limit. These "nonperturbative" topologies are characterized by three-dimensional defects introduced in the bulk. Employing the surgery operation we will see that classical topologies with defects are obtained as a sum over a large number of perturbative topologies.

The findings of this paper are consistent with a picture of emergence of a discrete bulk spacetime from entanglement.[In fact, Chern-Simons theory provides a realization of Penrose's spin network <cit.>, which is a discrete model of emergent spacetime.] In order to have the space connected, one needs to have Wilson lines, which ensure that quantum states are entangled. Nonconnectome topologies describe fluctuations of the spacetime, which can, among other things, forge links between the disconnected pieces. Such quantum topologies are themselves linear combinations of the classical ones.

As one development of the last property we will also apply the Chern-Simons perspective to the problem of replica wormholes in gravity theories. It was previously observed that the inclusion of certain topologies in the calculation of semiclassical path integrals of gravity by the replica method helps solve different paradoxes in black hole physics <cit.>. A notorious case is the Hawking information paradox <cit.>, in which classical saddles corresponding to a nontrivial topology connecting different replicas are shown to ensure the restoration of information at late times of the black hole evolution <cit.>.

Since the replica wormhole story involves summing over topologies, one might expect that TQFT should provide basic examples of the corresponding mechanism. We will arrive at the conclusion that replica wormholes are not specific to gravitational theories. Moreover, the TQFT perspective suggests that the relevant wormholes are always factorizable in the full quantum picture, although their contribution may appear as a sum of non-factorizable classical topologies. Finally, the change of dominance of different replicas in the path integral is a matter of choice of an appropriate evolution Hamiltonian, external to the TQFT. So, as far as the black hole information paradox is concerned, replica wormholes are only consequences of the choice of the Hamiltonian, rather than themselves solutions to the paradox.

The paper is organized as follows. In section <ref> we give a minimal review of the necessary details about states in Chern-Simons theory. In section <ref> we review the basic information necessary to work with quantum mechanics based on Chern-Simons theory. Section <ref> sets the stage for quantum information applications, introduces the class of connectome states and discusses their properties. In section <ref> we revisit the entanglement entropy calculation: for connectome states in section <ref>, where we demonstrate the validity of the RT formula, and for generic states in <ref>, where we give some examples of corrections to the RT formula.
In section <ref> we discuss the classical limit of the Chern-Simons states and show that all perturbative states reduce to connectomes in the classical limit. In section <ref> we discuss non-perturbative states, i.e. states with global defects in the three-dimensional topology. We also discuss the RT formula in the presence of such defects. In section <ref> we discuss replica wormholes in the context of Chern-Simons theory. In section <ref> we give another summary of the observations and results obtained in this paper.

§ CONNECTOME STATES IN CHERN-SIMONS THEORY

§.§ Basics of Chern-Simons TQFT

Let us briefly introduce a specific quantum mechanical system which realizes a TQFT. We will focus on the example of SU(2)_k Chern-Simons, but the discussion can be generalized to other examples of TQFT. More details about the specific construction can be found in <cit.>. For a formal introduction to TQFT see <cit.>.

Quantum states of 3-dimensional Chern-Simons theory are represented by 3-manifolds with 2-dimensional boundaries. An example of such states are the following two vectors:

|0̂⟩ = [diagram: a 3-ball whose boundary 2-sphere carries four punctures, with the first and second pairs of punctures connected inside the bulk],
|1̂⟩ = [diagram: the same four punctures, with the outer pair and the inner pair connected inside the bulk].

In this example the 2-manifold is a 2-sphere and the 3-manifold is a 3-ball filling the sphere. The spheres have punctures, and the punctures are extended into the three-dimensional bulk as Wilson lines. In what follows we will consider Wilson lines in the fundamental representation of SU(2). The above example also illustrates that connecting punctures in inequivalent ways in general produces linearly independent states in the Hilbert space.

That the manifolds of (<ref>) represent states follows from the axioms of TQFT <cit.>. The same can be understood in the path integral definition,

|Ψ(Σ,γ_j,R_j)⟩ = ∫_{A(ℳ)|_Σ = A_Σ} 𝒟A e^{iS_CS[A; ℳ, γ_j, R_j]}.

Here the path integral is calculated over a 3-manifold ℳ with boundary Σ, with prescribed boundary conditions A_Σ for the gauge fields on the boundary. The Chern-Simons action may contain insertions of open and closed Wilson lines γ_j in representations R_j.

The axioms and the path integral realization readily explain how the inner product in the Hilbert space is computed. One must glue two 3-manifolds over a common boundary, so that the result is a (path integral over a) closed 3-manifold, i.e. a ℂ-number. The result is non-trivial only if the Wilson lines ending on the boundaries of the two 3-manifolds match, so that all Wilson lines of the closed 3-manifold are closed. This is generalized to the definitions of linear operators, tensors, traces and contractions in a straightforward way.

Matching the Wilson lines in the inner product means that both factors belong to the same Hilbert space (or rather to a Hilbert space and its dual). This Hilbert space is completely determined by the boundary Σ (including the punctures). For n punctures this Hilbert space is isomorphic to the space of n-point conformal blocks of the SU(2)_k WZW theory.
In particular, the dimension of this Hilbert space is given by the number of singlets in the tensor product of the representations of the n punctures. A simple analogy is the requirement that the total SU(2) spin of the punctures should be zero. For n spin-half punctures this is given by the Catalan number

dim ℋ_n = C_{n/2} = n!/((n/2+1)!(n/2)!), for k > n/2.

There is an important subtlety here. The counting must be applied not to regular SU(2) irreps but to the integrable representations of the SU(2)_k WZW theory. In the latter the available spin-j representations are limited by j ≤ k/2; higher spin representations simply do not exist. This implies that for general k

dim ℋ_n ≤ C_{n/2}.

One of the consequences is that states given by different topologies might be linearly dependent. However, if one works with sufficiently large k, as equation (<ref>) suggests, the above counting is correct. In this paper we will in fact be interested in the limit k→∞.

In examples (<ref>) we used a single S^2 as the boundary Σ. However, Σ can be any two-dimensional surface, as in the case of the knot complement states studied in <cit.>, or any disjoint union of two-dimensional surfaces. Here we will focus on states given by Σ that is a collection of non-intersecting S^2 boundaries. To simplify the diagrams we will not try to draw 3-manifolds, nor their boundary spheres. The location of the boundaries will instead be indicated by grouping the endpoints of the Wilson lines, as in equation (<ref>).

Finally, we will need a technology to compute the overlaps of states, or more generally, correlation functions. As is famously known <cit.>, correlation functions of Wilson loop operators in SU(2) Chern-Simons theory are given by the Jones polynomials <cit.> of the knots (links) following the Wilson loop paths, for which various methods of computation exist. We will review the simplest prescription to compute them, using the variation of the Conway skein relations due to Kauffman <cit.>.

To compute the Jones polynomial of a knot or a link in S^3, that is, to compute an overlap of two states glued together to form a 3-sphere (which is the result of joining a 3-ball and another 3-ball turned inside out), we will use the following three rules (a numerical sketch follows the skein relation below):

* The polynomial of a single unknotted ring is simply a number d: J(unknot) = d;

* The polynomial of a disjoint union of a knot (link) ℒ and an unknotted circle (that is, a circle topologically disconnected from ℒ) is given by J(unknot ∪ ℒ) = d·J(ℒ), that is, the polynomial of unlinked components factorizes;

* Finally, the knot (link) may be nontrivial, but the polynomial can still be computed by recursive application of the skein relation <cit.>. In particular, any crossing of the knot diagram can be "resolved" by replacing the original overlap (path integral) by a linear combination of two overlaps, in which that specific crossing is changed by the two possible ways of connecting the lines without crossing.
Specifically, the following is true for the knot diagrams:

[diagram: a crossing] = A [diagram: the crossing resolved into two sideways arcs] + A^{-1} [diagram: the crossing resolved into two parallel lines].

A simple self-consistency check shows that the parameters d and A must be related:

d = -A^2 - A^{-2}.

Recursive resolution of all the crossings reduces the knot diagram to a linear combination of diagrams containing only unlinked circles, and the remaining two rules apply. The above prescription produces a noncanonical normalization of the Jones polynomials. In particular, the standard normalization of the Jones polynomial of the unknotted circle is unity; polynomials computed above will always have an extra factor of d as compared to the standard normalization. The prescription also distinguishes nonplanar isotopies of the knots, which produce powers of -A^3. This ambiguity results in an ambiguous phase of quantum states. In Chern-Simons theory it is known as the problem of framing <cit.>. To finalize the connection with Chern-Simons we define A in terms of the Chern-Simons data:

A = exp(πi/(2(k+2))).

Now we are all set to discuss specific properties of quantum states in Chern-Simons theory.

§.§ Topological bits and connectomes

What is a qubit in the present Chern-Simons formulation? Note that for n=4 equation (<ref>) gives dim ℋ_4 = 2, so the states in formula (<ref>) are states in a two-dimensional Hilbert space. Moreover, they are linearly independent (with a caveat discussed below). One can use the rules of computing overlaps from the previous section to find

⟨0̂|0̂⟩ = [diagram: two disjoint closed loops] = d^2,
⟨0̂|1̂⟩ = [diagram: a single closed loop] = d,
⟨1̂|1̂⟩ = [diagram: two disjoint closed loops] = d^2.

As stated, we are not drawing the 3-manifold, implying that it is a 3-sphere. So the states |0̂⟩ and |1̂⟩ form a nonorthonormal basis in the Hilbert space.
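These overlaps are easy to tabulate. The following minimal Python sketch evaluates d and the Gram matrix of the pair {|0̂⟩, |1̂⟩} quoted above; the positive sign of d is our convention, chosen to be consistent with d = 1 at k = 1 as stated in the text.

```python
import numpy as np


def loop_value(k):
    # Value of an unknotted circle: up to the framing sign convention,
    # d = -A^2 - A^{-2} with A = exp(i*pi/(2(k+2))) has magnitude
    # 2*cos(pi/(k+2)); we take the positive value, so that d = 1 at k = 1.
    return 2.0 * np.cos(np.pi / (k + 2))


def gram_hat_basis(k):
    # Overlaps quoted above: <0|0> = d^2, <0|1> = d, <1|1> = d^2.
    d = loop_value(k)
    return np.array([[d**2, d], [d, d**2]])


for k in (1, 2, 10, 100):
    G = gram_hat_basis(k)
    # det G = d^2 (d^2 - 1) vanishes at k = 1 (d = 1), where the two
    # states degenerate, as noted below.
    print(k, round(loop_value(k), 4), round(float(np.linalg.det(G)), 4))
```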
Using the skein relation one can show that all other ways of connecting the punctures can be reduced to linear combinations of (<ref>). The orthonormal basis can be constructed using the Gram-Schmidt procedure, e.g.

|0⟩ = (1/d)|0̂⟩, |1⟩ = (1/√(d^2-1)) (|1̂⟩ - (1/d)|0̂⟩).

Note that the orthonormal basis does not have a pure diagrammatic presentation; it is rather given by a linear combination of diagrams, since there is no knot or link whose Jones polynomial vanishes for generic k. Note also that for d=1 (k=1) the norm of the second state is infinite, which means that |0̂⟩ and |1̂⟩ are actually linearly dependent. This kind of degeneracy does not happen for k>1.

Since there is an explicit diagrammatic presentation for the states |0̂⟩ and |1̂⟩, it is convenient to use the nonorthogonal basis to discuss the topological version of quantum mechanics. A generic state in this quantum mechanics is given by a 3-manifold with a specific way of wiring this manifold with Wilson loops and Wilson lines connecting punctures on the boundary spheres. All overlaps in this basis are Jones polynomials.

One question that can be asked in the topological picture is whether entanglement equals topology, or rather whether topology can give a good classification of quantum entanglement. This question was addressed in the literature in different ways, e.g. <cit.>. In <cit.>, in particular, the present author was using the setup introduced above to address the problem of the classification. It was shown that for bipartite entanglement a classification equivalent to the well-known one by Stochastic Local Operations and Classical Communication (SLOCC) <cit.> can be obtained by restricting the consideration to the states of simplest topology. Such states were called connectomes, and one of the goals of this work is to further clarify their nature.

The simplest topologies discussed in <cit.> are those which mostly contain the information about which party is connected to which, and which connect the parties with a minimal amount of tangling, up to local permutations of punctures. In the bipartite or tripartite case such minimal connectivity can be expressed by planar graphs with two or three vertices, respectively.
For example, for the pair of qubits one has the following three diagrams:

[diagram: all four punctures of each sphere paired within the same sphere, no lines between the spheres], [diagram: one pair within each sphere and two lines connecting the spheres], and [diagram: all four lines connecting the two spheres],

where we show all possible connections of the left and right spheres (groups of four points) up to permutations within the individual spheres. The first two diagrams actually represent the same state, since two lines are not enough to support the connectivity of left and right <cit.>. Consequently, there are only two independent diagrams, corresponding to the separable and the Bell class of entanglement. Later we will make a more precise definition of the connectome states, observing that in the limit k→∞ all possible connections reduce to the simplest topologies, similar to (<ref>).

We will also show that the entanglement entropy of the connectome states is given by a simple formula, which in the limit of large k and large n reduces to a simple counting of connections between the parties. This fact was used in <cit.> to derive connectome versions of the inequalities for the entanglement entropy. Let us consider the example of subadditivity.

The subadditivity inequality states that the sum of the entanglement entropies of subsystems A and B is greater than the entanglement entropy of the union A∪B,

S(A) + S(B) ≥ S(A∪B).

A simple connectome derivation can be obtained from the diagram in figure <ref>. Let ℓ_AB be the number of connections between subsystems A and B. Let N_A, N_B and N_{A∪B} be the numbers of connections of subsystems A, B and A∪B, respectively, to the rest of the system. Clearly,

N_A + N_B = N_{A∪B} + 2ℓ_AB ≥ N_{A∪B},

which is equivalent to the statement about the entropies of the connectome states in the limit k→∞, n→∞, modulo a factor of log 2, cf. equation (<ref>). Using the same counting one can derive many similar inequalities satisfied by the entanglement entropies of another special class of quantum states, the holographic states <cit.>. To the author's knowledge the connectome states are a subclass of the holographic states, since they satisfy the same inequalities, although in some cases they saturate them.

In the remaining part of this paper we will further explore the connection of the connectomes with the holographic states.
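Before moving on, we note that the counting argument above can be checked mechanically. The sketch below draws random symmetric link counts among three parties and verifies subadditivity in the large-k, large-n regime, where S(X) ≈ N_X log 2; the graph data are randomly generated purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
log2 = np.log(2.0)


def entropy(L, region):
    """S(X) ~ N_X * log 2, with N_X the number of lines leaving region X.
    L[i, j] is the number of Wilson lines between parties i and j."""
    region = set(region)
    outside = [j for j in range(L.shape[0]) if j not in region]
    return log2 * sum(L[i, j] for i in region for j in outside)


for _ in range(1000):
    M = rng.integers(0, 10, size=(3, 3))   # random link counts, 3 parties
    L = np.triu(M, 1)
    L = L + L.T                            # symmetric adjacency data
    # S(A) + S(B) - S(A u B) = 2 * l_AB * log 2 >= 0.
    assert entropy(L, [0]) + entropy(L, [1]) >= entropy(L, [0, 1]) - 1e-9

print("subadditivity holds on all random connectome samples")
```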
§ MINIMAL AREA FORMULA FOR ENTANGLEMENT ENTROPY

Let us begin with a discussion of the entanglement entropy of Chern-Simons states with a pair of S^2 boundaries.

§.§ Entanglement entropy

Let us assume a bipartite state of the form

[diagram: two boundary spheres; m Wilson lines connect punctures on different spheres and n lines connect punctures within the same spheres],

that is, a connectome state. For simplicity, the state is chosen to belong to a product of two equivalent Hilbert spaces ℋ⊗ℋ. There are m connections between punctures in different subsystems and n connections between punctures of the same subsystem.

The density matrix of state (<ref>) is given by two copies of the above diagram. To construct the reduced density matrix of the left (right) subsystem, one simply connects the corresponding punctures of the copies of the right (left) subsystem, as the following diagram shows,

ρ̂_L = [diagram: two copies of the state glued along the right sphere] = d^{n/2} [diagram: the glued diagram with the resulting closed loops removed].
This is the unnormalized density matrix of the left subsystem. Here we used the rule that every closed line can be substituted by the numerical factor d.

In order to compute the trace of ρ̂_L one should connect the inputs with the outputs. Since we consider these lines in the three-dimensional bulk filling the space between the two S^2, closing produces a copy of S^2×S^1 with m lines winding the S^1 direction and n/2 contractible circles. The contractible circles can again be replaced by d^{n/2}, so the trace is given by m non-trivial loops in S^2×S^1.[For three-dimensional spaces with topology other than S^3 a slight modification of the rules from section <ref> is necessary. We will only state the results.] The corresponding path integral is just D_m, the dimension of the Hilbert space of S^2 with m punctures <cit.>. Hence,

ρ_L = 1/(D_m d^{n/2}) [diagram], and ρ_L^k = 1/(D_m^k d^{n/2}) [diagram].

The only thing that distinguishes ρ_L^k from ρ_L is the power of D_m. Then

Tr ρ_L^k = 1/(D_m^k d^{n/2}) [closed diagram] = 1/D_m^k [closed diagram of m lines] = 1/D_m^{k-1}.

Computing the entropy yields

S = log D_m.

In other words, the entanglement entropy is defined by the connected part of the diagram, and more precisely, by the dimension of the subspace of the Hilbert space associated with the punctures connected to the other subsystem. Note that we would obtain the same result for any state that can be obtained from (<ref>) by local unitary transformations, for example permutations of the lines by the left or right action of unitary braiding operators. This follows from the invariance of the von Neumann entropy under local unitary transformations.

One can view this result in the following way. The diagrams of the reduced density matrix in formula (<ref>) illustrate an example of two spheres connected by lines. One can cut the bulk separating the two spheres by placing another surface of spherical topology between the two (by enclosing one of the spheres). This surface will have a certain number of intersections with the lines. It is possible to choose the surface in such a way that it is crossed by the minimal number of lines. The entanglement entropy is defined by this minimal number. In the classical limit of SU(2)_k Chern-Simons theory, k→∞ and large m≪k, the dimension D_m of the Hilbert space of the sphere with m punctures is given approximately by

D_m ≃ C_{m/2} ≃ 4^{m/2}/((m/2)^{3/2}√π),

so that the entanglement entropy is approximately m log 2.
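The dimension counting behind S = log D_m is easy to illustrate numerically; a minimal sketch using the Catalan approximation valid for even m ≪ k:

```python
from math import comb, log


def catalan(n):
    return comb(2 * n, n) // (n + 1)


def connectome_entropy(m):
    # S = log D_m with D_m ~ C_{m/2} in the regime of even m << k.
    return log(catalan(m // 2))


for m in (2, 4, 8, 16, 32, 64):
    S = connectome_entropy(m)
    # The ratio S / (m log 2) approaches 1 slowly: the subleading
    # -(3/2) log(m/2) - log(sqrt(pi)) corrections are visible at finite m.
    print(m, round(S, 3), round(S / (m * log(2)), 3))
```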
The above result for the entanglement entropy is an analog of the Ryu-Takayanagi formula for the holographic states <cit.>. The formula claims that for theories with a gravity dual the entanglement entropy of a d-dimensional region A is given by the area of a surface γ_A that is homological to A in the (d+1)-dimensional bulk space and that has minimal area,

S = min_{γ_A} Area[γ_A]/(4G),

where G is Newton's constant. The Ryu-Takayanagi formula is valid in the classical limit of the holographic correspondence, that is, when the areas are very large in Planck units, G→0. If one thinks of Wilson lines as flux lines carrying a unit of flux, then the entropy is indeed proportional to the area divided by a typical area per flux, which is of the order of ħ^2.

The discrete version of the Ryu-Takayanagi formula appears in the tensor network constructions of holography <cit.>. Chern-Simons theory is of course a special example of a tensor network. Moreover, the Chern-Simons example is a version of the bit-thread approach to the calculation of the entanglement entropy discussed in <cit.>. The example considered here points to a special class of states for which the Ryu-Takayanagi formula is valid: these are the connectome states. Let us discuss examples of other states and see how the minimal area formula is modified.

§.§ Corrections to the minimal area formula

For general quantum states the entanglement entropy is not calculated by the above minimal area formula. In particular, the formula is not valid for linear combinations of connectome states. As an example we can consider the state

[diagram: four lines connecting two spheres, tangled by a clasp of two crossings in the middle] = (-A^6-A^{-6}) [diagram: connectome with two connecting lines] + (1-A^{-4})(1-A^4) [diagram: connectome with four connecting lines] = (A^4+A^{-4})^2 |00⟩ + (1-A^{-4})(1-A^4) |11⟩.

The entanglement entropy of this state is given by

S = -(|s|^2/(|s|^2+|c|^2)) log(|s|^2/(|s|^2+|c|^2)) - (|c|^2/(|s|^2+|c|^2)) log(|c|^2/(|s|^2+|c|^2)),

where

|c| = 4cos^2(2π/(k+2)), |s| = 4sin^2(π/(k+2)).

The entropy as a function of k is shown in figure <ref> (saturated blue). Except for k=0 and k=4 the entropy is lower than the maximal value log 2. It vanishes in the k→∞ limit. The minimal area law gives an upper and a lower bound on the entropy, since the state is a linear combination of two states with entropies defined by the minimal area law.
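Since everything is in closed form, the entropy curve described above can be evaluated directly; a minimal sketch, which reproduces the special values at k = 0, 4 and the decay at large k quoted in the text:

```python
import numpy as np


def binary_entropy(s, c):
    # S = -p log p - q log q with p = s^2/(s^2 + c^2), q = 1 - p.
    p = s**2 / (s**2 + c**2)
    return -sum(x * np.log(x) for x in (p, 1 - p) if x > 0)


def entropy_tangled_state(k):
    # |c| = 4 cos^2(2 pi/(k+2)), |s| = 4 sin^2(pi/(k+2)), as above.
    c = 4 * np.cos(2 * np.pi / (k + 2))**2
    s = 4 * np.sin(np.pi / (k + 2))**2
    return binary_entropy(s, c)


for k in (0, 4, 10, 100, 1000):
    print(k, round(entropy_tangled_state(k), 6))  # log 2 at k = 0, 4; -> 0 at large k
```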
We note that it is not even clear how to apply the minimal area law, due to the presence of crossings in the diagram. Another useful and connected example is the state

[diagram: four lines connecting two spheres, with a closed loop encircling the upper pair of lines] = (1-A^{-4})(1-A^4) [diagram: connectome with two connecting lines] + (-A^6-A^{-6}) [diagram: connectome with four connecting lines] = (-A^2-A^{-2}) |00⟩ + (-A^6-A^{-6}) |11⟩.

Note that the expansion in terms of the connectome diagrams in the first line gives the swapped coefficients as compared to (<ref>). This is a consequence of the fact that the original diagrams are related by a non-local permutation of the upper pairs of points, and it serves as an illustration of the convenient features of the topological presentation. For (<ref>) the entanglement entropy is given by equation (<ref>) with

|s| = 2cos(π/(k+2)), |c| = 2cos(3π/(k+2)).

This entropy is shown in figure <ref> (saturated orange).
It is zero for k=4 and takes the maximal value log 2 for k=2 and k→∞.

It is straightforward to generalize the above two examples to the case of states with multiple loops, obtained by concatenation of the diagrams (<ref>) and (<ref>), that is, by repeating the tangle of (<ref>) or the encircling loop of (<ref>) n times along the connecting lines. With the increase of the number of loops, or equivalently, with the increase of the complexity of the diagram, the entanglement between the parties decreases, as evidenced by the shaded curves in figure <ref>. The presence of many loops decreases the connectivity of the space, eventually making it disconnected, and therefore unentangled.

A qualitatively different class of states that appear in Chern-Simons theory features distinct three-dimensional topologies. Even if we fix the boundaries to be S^2, the bulk is not necessarily globally S^3. Let us consider an example with lines in S^2×S^1 with a pair of S^2 boundaries.
[diagram: two boundary spheres with four connecting lines, the middle pair of lines passing through a solid torus marking the surgery region] = |00⟩ + 1/(d^2-1) |11⟩ = (d^2-2)/(d(d^2-1)) [diagram: connectome with two connecting lines] + 1/(d^2-1) [diagram: connectome with four connecting lines].

Here the torus represents the defect region, where the space differs from S^3. For the space to be indeed S^2×S^1, the interior of the torus should be contractible along what appears as a non-contractible cycle, and non-contractible along the contractible cycle. In other words, one should fill the interior with a solid torus whose fundamental cycles are exchanged. This is how S^2×S^1 is obtained from S^3 via the surgery operation <cit.>. The entropy of state (<ref>) is shown in figure <ref> (saturated blue). It is only maximal for k=0 and k=2. For k→∞ the entropy asymptotes to

S = log(10/3^{9/5}) ≃ 0.325.

Let us consider one more example,

[diagram: two boundary spheres with four connecting lines, the upper pair of lines threaded through the surgery torus] = 1/(d^2-1) [diagram: connectome with two connecting lines] + (d^2-2)/(d(d^2-1)) [diagram: connectome with four connecting lines] = (2/d) |00⟩ + (d^2-2)/(d(d^2-1)) |11⟩,

where we applied the same trick with reshuffling of the lines in order to use the available coefficients of equation (<ref>). The entropy of the last state is shown in figure <ref> (saturated orange). It vanishes at k=1,2 and attains the maximal value for k=0 and for non-integer k≃1.3. For k→∞ it approaches the value given by equation (<ref>). As for states (<ref>), the presence of multiple three-dimensional defects decreases the space connectivity and, consequently, the entropy, as illustrated by the family of shaded curves in figure <ref>.

§ CLASSICAL LIMIT

The limit k→∞ is the classical limit of Chern-Simons theory. In this limit A→1 (q→1) and braiding becomes a simple permutation,

[diagram: a crossing] = [diagram: sideways arcs] + [diagram: parallel lines], hence [diagram: overcrossing] = [diagram: undercrossing].

(To prove this, just rotate all the diagrams in the skein relation by 90 degrees.) The latter property means that lines can pass through each other, and knots become trivial in the classical limit. Consequently, the states represented by diagrams like (<ref>) and (<ref>) must all reduce to states with trivial topological linking, that is, connectome states.
In the above examples,

[diagram: the tangled state (<ref>)] → [diagram: connectome with two connecting lines], and [diagram: the encircled state (<ref>)] → [diagram: connectome with four connecting lines],

which explains the asymptotic behavior of the entanglement entropy in figure <ref>. Consequently, the classical limit of Chern-Simons gives a precise definition of the connectome states, as the classical states in the global S^3 topology up to local permutations of the endpoints on the boundaries. In the quantum theory, connectomes label equivalence classes of states that have the same classical limit.

Since in the classical limit all the states become connectomes, the entanglement entropy is given by the minimal area law for all of them, as long as we consider the situation of bipartition by two boundary spheres.[More general situations will be discussed in the next section.] Away from the classical regime the minimal area law must be corrected. Since all the observables in Chern-Simons theory are computable, one can study the corrections at any order. For state (<ref>) the leading corrections to the entropy are given by

S = (π^4/k^4)(1 - log(π^4/k^4)) + O(k^{-5}).

As we have already seen, the entropy decays in the classical limit, since the asymptotic state is separable. Similarly, for state (<ref>) the asymptotic expansion of the entropy is

S = log 2 - 8π^4/k^4 + O(k^{-5}),

and the state is asymptotically maximally entangled.
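Both expansions can be checked against the exact closed forms quoted earlier; a minimal sketch comparing the exact entropies with the leading k^{-4} behavior (the ratios approach 1 slowly because of the finite-k shift k→k+2 in the exact expressions):

```python
import numpy as np


def S_exact(s, c):
    p = s**2 / (s**2 + c**2)
    return -p * np.log(p) - (1 - p) * np.log(1 - p)


def S_tangled(k):    # |s| = 4 sin^2(pi/(k+2)), |c| = 4 cos^2(2 pi/(k+2))
    return S_exact(4 * np.sin(np.pi / (k + 2))**2,
                   4 * np.cos(2 * np.pi / (k + 2))**2)


def S_encircled(k):  # |s| = 2 cos(pi/(k+2)), |c| = 2 cos(3 pi/(k+2))
    return S_exact(2 * np.cos(np.pi / (k + 2)),
                   2 * np.cos(3 * np.pi / (k + 2)))


for k in (100, 1000, 10000):
    x = np.pi**4 / k**4
    print(k,
          round(S_tangled(k) / (x * (1 - np.log(x))), 4),    # -> 1
          round((np.log(2) - S_encircled(k)) / (8 * x), 4))  # -> 1
```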
Comparing these findings with the AdS/CFT correspondence, we conclude that the connectome states are analogous to the classical gravity states. In the classical limit all Chern-Simons states that have a diagrammatic presentation reduce to connectome states. Similarly, the holographic CFT states are those that have a classical geometry interpretation in the classical limit. In the same spirit we can say that connectomes correspond to classical topologies. In both cases the entanglement entropy satisfies a minimal area law formula.

Unlike classical states, quantum states are generally linear combinations of classical states. In particular, generic quantum states do not have a classical geometric, or topological, interpretation. However, there are instances of quantum states that do have a classical limit different from that of connectome states. These are the states with nontrivial global three-dimensional topology, such as states (<ref>) and (<ref>).

In the classical limit states (<ref>) and (<ref>) both reduce to the same state

|00⟩ + (1/3)|11⟩,

which is not a connectome state and has a non-minimal-area entropy (<ref>). Yet these states belong to the same Hilbert space as states (<ref>) and (<ref>) and can be expressed as linear combinations of two connectomes. In other words, three-dimensional defects in the topology introduce different classes of classical solutions.

§ WORMHOLES

In this section we will give a further interpretation of the states with nontrivial three-dimensional topology. As we have seen, it is not very clear whether such states respect some form of the minimal area law (RT formula) for the entropy. Nor do they reduce to the connectome states in the classical limit. We will explain how such states can be viewed as "nonperturbative", compared to the "perturbative" connectome states.

The violation of the minimal area law is not very surprising – a similar problem occurs in the tensor network constructions <cit.> – but it is problematic, because we would like to use this law also for states with more than two S^2 boundaries. For instance, if we consider the case of three S^2, each with four punctures, and consider the following state,

[diagram: three boundary spheres, each pair of spheres connected by two Wilson lines] = |000⟩ + (1/√(d^2-1)) |111⟩,

the reduced density matrix of any of the three spheres will be equivalent to the state (<ref>). In other words, it will contain a defect, so that the entanglement entropy will not be given by the minimal area law. We will show how the minimal area law is recovered if we "patch" the holes.

We first note that three-dimensional defects can be constructed via a surgery operation on a trivial topology (e.g. <cit.>). For example, S^2×S^1 can be obtained by cutting a solid torus out of S^3, twisting the torus by exchanging its two fundamental cycles (the S modular transformation) and gluing the twisted torus back to the remaining part of S^3, which in this case is also a solid torus. After such an operation, a noncontractible S^1 cycle appears in the resulting space in the place of the contractible cycle of the original torus.
This is what the diagrams in equations (<ref>) and (<ref>) are intended to show.

Note that from the TQFT point of view, gluing two tori along a common boundary produces an overlap ⟨v_1|v_2⟩ of two vectors |v_1⟩ and |v_2⟩, each represented by a solid torus. If S is an operator that exchanges the two fundamental cycles, then the result of the surgery is ⟨v_1|S|v_2⟩. SU(2) Chern-Simons states with a T^2 boundary have a special basis, labeled by the integrable representations of the su(2)_k Kac-Moody algebra, which can be counted by Dynkin labels R=0,…,k <cit.>. Such basis states are solid tori with a closed Wilson loop around the noncontractible cycle, colored with a given representation R. The matrix elements S_{R_1R_2} = ⟨R_1|S|R_2⟩ in this basis are computed by the S^3 invariants of Hopf links with components colored by representations R_1 and R_2,[Here, again, we use a slightly different normalization from the standard one, to be consistent with the conventions chosen for the invariants. In the present normalization, the components S_{0R} are the quantum dimensions of the representations, up to a sign, but the matrix S is not idempotent.]

[diagram: Hopf link with components colored R_1 and R_2] = (-1)^{R_1+R_2} (A^{2(R_1+1)(R_2+1)} - A^{-2(R_1+1)(R_2+1)})/(A^2 - A^{-2}) = (-1)^{R_1+R_2} sin(π(R_1+1)(R_2+1)/(k+2))/sin(π/(k+2)).

Now, for state (<ref>), for example, we can write

[diagram: the S^2×S^1 state with the surgery torus] = Σ_{R=0}^{k} S_{0R}^{-1} [diagram: the same lines in S^3 with a closed Wilson loop colored R encircling the middle pair of lines].

In the right-hand side, the nontrivial 3D topology has been replaced by a sum of simpler 3D topologies, featuring 1D defects (loops) colored by representations R.
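The Hopf-link matrix elements admit direct evaluation, which also gives the coefficients S_{0R}^{-1} entering the surgery sum; a minimal sketch:

```python
import numpy as np


def S_element(R1, R2, k):
    # S_{R1 R2} = (-1)^{R1+R2} sin(pi (R1+1)(R2+1)/(k+2)) / sin(pi/(k+2)),
    # in the (non-idempotent) normalization adopted in the text.
    num = np.sin(np.pi * (R1 + 1) * (R2 + 1) / (k + 2))
    den = np.sin(np.pi / (k + 2))
    return (-1.0) ** (R1 + R2) * num / den


k = 10
S = np.array([[S_element(a, b, k) for b in range(k + 1)]
              for a in range(k + 1)])
# |S_{0R}| are the quantum dimensions of the integrable representations;
# the inverses S_{0R}^{-1} weight the terms of the surgery sum over R.
print(np.round(S[0], 3))
print(np.round(1.0 / S[0], 3))
```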
Diagrammatically, the projection of n copies of R=1 onto the representation R=n is achieved via the Jones-Wenzl projector (symmetrizer) <cit.>, which is a linear combination of elements of the Temperley-Lieb algebra of order n. To summarize the result of this operation without technical details, we say that state (<ref>) is a linear combination of many diagrams in the trivial three-dimensional topology. In the classical limit k→∞, the number of diagrams creating the defect is very large. If this limit is taken naively, one could expect that (<ref>) becomes

∑_R=0^k S_0R^-1 [diagram: the connecting lines encircled by a loop colored by R] → (∑_R=0^∞ S_0R^-1 [diagram: a standalone loop colored by R]) [diagram: the connecting lines alone] ∼ |00⟩ + |11⟩ .

In other words, one could expect that in the classical limit the loops can be removed and one obtains a maximally entangled state. This is not quite the case, however. This operation does not simply commute with the summation in the formula. It is easy to check that for any finite k the state is the one given by (<ref>), that is, it is not maximally entangled.

From the point of view of the classical limit, Wilson loops are quantum fluctuations of the topology. A linear combination of an infinite number of diagrams, with many loops, creates a nonperturbative effect that persists in the classical limit in the form of a defect in the global topology. Let us discuss how the minimal area law works for these wormhole-like topologies.
Let us consider a pair of two-spheres in S^2× S^1 connected by n+m lines, such that n and m lines follow complementary segments of the S^1, as the following diagram illustrates:

[diagram: two spheres in S^2× S^1, with n lines running along one segment of the S^1 and m lines along the complementary segment]

Let us calculate the entanglement entropy corresponding to one of the spheres. There are two candidates for the minimal surface: one encircles the sphere and is crossed by n+m lines; the other is a surface consisting of two disconnected S^2, one pierced by m and the other by n lines. The first surface (sphere) corresponds to a Hilbert space of dimension C_(n+m)/2, while the other to a product of two Hilbert spaces with total dimension C_n/2· C_m/2. The dimension of the smaller Hilbert space defines the rank of the density matrix and, consequently, the entanglement entropy. For finite m and n one has

C_(n+m)/2 ≥ C_n/2· C_m/2 ,

so the entanglement entropy of one of the spheres in state (<ref>) is log(C_n/2· C_m/2). Indeed, this state is nothing but a projector of the Hilbert space of a sphere with n+m punctures onto a subspace isomorphic to a product of two Hilbert spaces of spheres with n punctures and m punctures, respectively. In the limit of very large n and m, the leading order asymptotics of the Catalan numbers gives log C_n/2 ∼ n log 2, so that (<ref>) gets saturated and the entropy calculation is consistent with counting the number of lines crossing the minimal surface, which is m+n for both the connected and the disconnected surfaces. Essentially, for a large number of degrees of freedom n and m, the presence of the defect becomes unimportant, and this limit can be compared with the limit of low curvature of the holographic states.

Let us now review how the minimal area formula is recovered in the example of states similar to (<ref>). In comparison to state (<ref>) the analysis is complicated by the presence of additional Wilson loops winding the defect. For an arbitrary number ℓ of such Wilson loops the states would be

C_ℓ/2|00⟩ + C_ℓ/2^(1)/√(d^2-1)|11⟩ ,

where C_ℓ/2^(j) counts the multiplicity of the spin j representation of SU(2) in the tensor product of ℓ fundamental irreps,

C_ℓ/2^(j) = (2j+1)ℓ!/((ℓ/2-j)!(ℓ/2+j+1)!) .

Again, this counting is valid in the regime of large k. Note that when ℓ is also large, one has

C_ℓ/2^(j) ∼ (2j+1)C_ℓ/2 , ℓ ≫ 1 .

Consequently, state (<ref>) becomes maximally entangled in the regime k≫ℓ≫1,

C_ℓ/2|00⟩ + C_ℓ/2^(1)/√(d^2-1)|11⟩ → C_ℓ/2(|00⟩ + |11⟩).
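These counting statements can be checked numerically. The following minimal Python sketch (illustrative only; the even values of n, m, ℓ and the spin j are chosen arbitrarily) verifies the inequality above, the asymptotics log C_n/2 ∼ n log 2, and the large-ℓ limit C_ℓ/2^(j) → (2j+1) C_ℓ/2:

from math import comb, factorial, log

def catalan(p):
    # C_p = (2p choose p) / (p+1)
    return comb(2 * p, p) // (p + 1)

def mult(ell, j):
    # C^(j)_{ell/2} = (2j+1) ell! / ((ell/2 - j)! (ell/2 + j + 1)!)
    return (2 * j + 1) * factorial(ell) // (factorial(ell // 2 - j) * factorial(ell // 2 + j + 1))

n, m = 200, 140                               # both even, so the punctures can be paired
assert catalan((n + m) // 2) >= catalan(n // 2) * catalan(m // 2)
print(log(catalan(n // 2)) / (n * log(2)))    # -> 1 for large n

ell, j = 200, 1
print(mult(ell, j) / catalan(ell // 2))       # -> 2j+1 = 3 for ell >> 1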
The presence of the large number of Wilson loops winding the defect restores the minimal area law in the sense discussed in this work, since in this case the minimal number of Wilson lines a surface separating the two parts of the system can cross is four. This result can be generalized to any number of Wilson lines connecting the two parts. In particular, it solves the problem mentioned at the beginning of this section: as long as the number of Wilson lines is large, the naive minimal area law formula counting the number of Wilson lines (connections between the parties) is valid. In particular, this implies that connectome states for large k and a large number of Wilson lines satisfy the inequalities for the entanglement entropy satisfied by the holographic states, as discussed in <cit.>.

§ REPLICA WORMHOLES AND ORDINARY REPLICAS

In this section we use the analogy between the connectome and holographic states to revisit the problem of the replica wormholes appearing in the discussion of the black hole information paradox. We will try to construct analogs of replica wormholes in terms of the Chern-Simons theory, or a more general TQFT, and observe that such objects are not unique to gravity theories.

In the gravitational context, replica wormholes appear after one chooses the following prescription for the calculation of the path integral: fix boundary conditions and sum over all geometries and topologies <cit.>. When applied to a replicated manifold in the replica method, this prescription might be understood as a requirement of including all possible topologies in consideration, even those that are not factorizable. One way the problem may be stated heuristically is that the average of ρ^n differs from the n-th power of the average of ρ, where ρ is the density matrix, or path integral, being replicated.

Since the problem appears in the part of the calculation that deals with the sum over topologies, one might expect to see an analog of this problem in a Chern-Simons calculation. We will not, however, observe a necessity to sum over all possible topologies in the Chern-Simons path integral. Since all the calculations can be performed exactly, the correct answers are obtained without including all replica wormholes, but rather some specific ones. We will conclude that, somewhat tautologically, topologies which appear as alternative saddles in the replica path integral must transform into each other through an appropriate time evolution.

Hawking's original calculation <cit.> of the entropy of the black hole radiation predicts a monotonic growth of this entropy until the moment the black hole evaporates completely, producing a mixed final state from a pure original one, contradicting unitarity. Page <cit.> assumed instead that the evolution is unitary and the state of the full system remains pure through the process, which implies that the entropy of the radiation must vanish at the end of the evaporation. In other words, some late time effect should modify Hawking's prediction, leading to zero entropy in the final state. Recently it was found <cit.> that the generalized entropy involving quantum extremal surfaces <cit.> reproduces the behavior of the entropy predicted by Page. It was then argued <cit.> that the decaying behavior of the entropy at late times is due to the dominance of nontrivial topologies in the path integral – replica wormholes.

Hawking's calculation deals with quantum fields in a fixed gravitational background.
Its result can be compared with the analysis in section <ref>, where a power of the density matrix, which can be thought of as the density matrix of the radiation, is computed by a simple concatenation, cf. (<ref>). Let us schematically denote the state of the entangled Hawking pairs as

|Ψ⟩ = [diagram: a line connecting an exterior (radiation) endpoint and an interior endpoint] ,

where the black dot represents quanta in the interior. Then, the traces of the reduced density matrix of the radiation and its powers are given by

ρ ∼ [diagram: the pair contracted into a single loop] , ρ^2 ∼ [diagram: two replicas concatenated] , ρ^3 ∼ [diagram: three replicas concatenated] …

In accordance with the discussion in the earlier parts of this work, such replicas would count the quanta entangled across the horizon in the classical limit. Consequently, the entropy will increase monotonically with the number of emitted quanta.

Replica wormholes are topologies different from those appearing in (<ref>). They are said to correspond to topologies connecting different replicas. Naively, one can imagine that a generic replica wormhole topology is something like the one schematically illustrated by the following diagrams (here we consider examples of n=2 and n=4):

[diagrams: replica topologies in which the interior segments reconnect different replicas, shown for n=2 and n=4] .

In a generic case, like the one shown above, the beginning (end) of each replica is connected by some number of links with the ends (beginnings) of all the remaining replicas.
Now we are going to argue that generic replica wormholes should not appear in the calculation. Chern-Simons theory per se does not have a dynamical Hamiltonian that could evolve one entangled state into another. So the situation is more similar to Hawking's fixed background scenario. On the other hand, recently proposed models, which consider replica wormholes, do have a dynamical evolution, in which the black hole degrees of freedom dissipate energy into radiation <cit.>. After all, this dissipation is a transfer of quanta from the interior to the exterior. In terms of the discussion in this work this process can be illustrated as follows:

[diagrams: initial state, early time, and late time configurations, with lines reconnecting from interior (black) to exterior (gray) endpoints]

This is equivalent to moving the interior-exterior partition, which is what happens in the generalized entropy prescription, which moves the quantum extremal surfaces. The prescription itself does not provide the Hamiltonian, but certain models, at least in low dimensions, determine the position of the extremal surface dynamically <cit.>. It is concluded that the nontrivial extremal surface corresponds to a classical geometry whose topology is a replica wormhole. The diagrams in formula (<ref>) show, however, that what looks like a wormhole is in fact an ordinary replicated time-evolved density matrix.
Indeed, if one compares the reduced density matrices of the radiation (the right hand side degrees of freedom in (<ref>)) of the early and late time states, one will find something of the following kind:

[diagram: early time reduced density matrix] and [diagram: late time reduced density matrix] .

One should focus on the structure of the open ends of these density matrices. At early times, what one would see is a classical topology corresponding to a (maximally) entangled configuration with entropy growing with time, as loops in the middle open and reconnect with the boundaries. At late times one would see more open ends, some of which, when concatenated, appear like replica wormholes connecting the replicas in a cyclic fashion preserving the replica symmetry. Specifically, for n=2, one would find the same picture as illustrated by the first diagram in formula (<ref>).

In the picture just described the transition between early and late times occurs continuously, via the same dynamics moving endpoints of the lines from one side to the other. The precise dynamics may be complicated. In the context of topological theories one can imagine the dynamics introducing nonlocal braiding of the lines, which also affects the entropy <cit.>. So in general, the quantities computing the entropy will be calculated by sums over different topologies. As an example, let us think of state (<ref>).
Any power of the reduced density matrix of this state can be expanded as

([diagram: reduced density matrix of state (<ref>)])^n = α [diagram: connected pairing of the lines] + β [diagram: four straight lines] ,

with some easily computable coefficients α and β. The result is a sum of connected and disconnected topologies. Moreover, the fact that the disconnected topology is n-factorizable is not very obvious without some additional information about the nature of these quantum states. A purifying evolution must be some nonlocal operation, which guarantees that the weight of the disconnected topology dominates at late times.

§ CONCLUSIONS

In this work we revisited the construction and properties of quantum states in Chern-Simons theory with SU(2) gauge group. We used the axiomatic TQFT approach to represent quantum states by topological spaces and to develop intuition about their properties from the topological properties of the spaces, such as connectivity.

As a paradigmatic example of quantum states we considered a class characterized by 3-manifolds bounded by a pair of S^2 boundaries. The space connectivity was controlled by the wiring of Wilson lines connecting the punctures on the boundary spheres, closed Wilson loops in the bulk, and global three-dimensional defects in the manifold topology. We studied different examples of quantum states in this family, illustrating how quantum entanglement depends on the way the space is connected. A general lesson learned from all the examples is that a lack of connectivity of space results in a lack of quantum entanglement between the subsystems, where by lack of connectivity we mean not only truly disconnected topologies, but also topologies with a complex connectivity. In other words, more holes – less entanglement.

A central role in our discussion was played by the connectome states, which were introduced as states, or equivalence classes of states, with the simplest possible topology in a given finite-dimensional Hilbert space.
In a bipartite system, such as the one characterized by two disconnected spacetime boundaries, the connectome states correspond to planar graphs with two vertices, match the SLOCC classes of bipartite entanglement, and represent states with the maximal entanglement in each class. They also provide a basis for all other Chern-Simons states. Our primary interest in the connectomes was due to their similarity with the holographic states. Their first relation is through the discrete analog of the RT formula for the entanglement entropy satisfied by the connectome states. In the present context the RT formula states that the entanglement entropy of a subsystem (one of the boundaries) is given by the logarithm of the dimension of the Hilbert space associated with a closed bulk surface encircling the subsystem in such a way that it is crossed by the minimum number of Wilson lines. In the special case when the number of Wilson lines is very large, the entanglement entropy is approximately equal to the number of lines times log 2.

The second relation with holography is the fact that in the classical limit, all quantum states of Chern-Simons theory with global S^3 topology reduce to the connectome states. So connectomes are the classical topologies of Chern-Simons theory, as holographic states are those dual to classical geometries. In fact, the classical limit can be used as a definition of the connectome states. A more fundamental relation of the connectomes and the holographic states is the conjecture discussed in <cit.> that the connectomes satisfy the same inequalities for the entanglement entropy as the holographic states. We do not discuss the details of the inequalities here, but we clarify that the discussion of the inequalities is valid in the double limit, with large Chern-Simons coupling k and a large number of Wilson lines.

We also observed that states with defects in the global topology, or equivalently, states with global topology different from S^3, do not reduce to connectome states in the classical limit. We advocated that such states can be viewed as nonperturbative topologies, compared to the perturbative connectomes. Using surgery we showed that states with defects are ordinary connectome states with a very large number of quantum fluctuations that "condense", producing defects in the classical limit. We discussed how the RT formula works in the presence of defects, since the definition of the formula given above does not always produce the expected result. We showed that the RT formula works if the defects also have a large number of Wilson lines wound around them. This is also reminiscent of the situation in holography, where for the classical formulas to work the space needs to have small curvature (or the dual theory to have a very large number of degrees of freedom).

We expect that the ideas and results of this work will be useful in the discussion of the emergence of spacetime from entanglement. Here both entanglement and connectivity are supported by the Wilson lines linking different parts of the system. Wilson loops and nontrivial tangling of the Wilson lines appear as quantum fluctuations of the spacetime, and when such fluctuations are many, they can collapse and form an object modifying the spacetime topology, like a black hole or a wormhole.

We also hope that the TQFT approach or intuition can help in the study of quantum gravity effects. Here we attempted to understand whether such objects as replica wormholes exist beyond gravitational theories.
Our conclusion was that replica wormholes are quite general, and we find analogs of them in Chern-Simons theory. Our analysis suggests that, contrary to some statements that the path integral must include the contribution of all possible topologies, only special topologies contribute. In particular, replica symmetry must be present in the full path integral. Which replica wormholes contribute at different stages of the evolution depends on the specific Hamiltonian. In the context of the black hole information paradox this means that replica wormholes by themselves cannot be a solution without the knowledge of the Hamiltonian.

Acknowledgments

The author would like to thank the ICTP-SAIFR for support and hospitality during the workshop Holography@25 and the Isaac Newton Institute for Mathematical Sciences, Cambridge, for support and hospitality during the programme "Black holes: bridges between number theory and holographic quantum information", where work on this paper was undertaken. This work was supported in part by Simons Foundation award number 1023171-RC, grants of the Brazilian National Council for Scientific and Technological Development (CNPq) number 308580/2022-2 and 404274/2023-4, grant of the Serrapilheira Institute number Serra R-2012-38185 and the EPSRC grant number EP/R014604/1.

| http://arxiv.org/abs/2312.16683v1 | {
"authors": [
"Dmitry Melnikov"
],
"categories": [
"hep-th"
],
"primary_category": "hep-th",
"published": "20231227185600",
"title": "Connectomes as Holographic States"
} |
The MINERvA Collaboration

Accelerator based neutrino oscillation experiments seek to measure the relative number of electron and muon neutrinos at different L/E values. However, high statistics studies of neutrino interactions are almost exclusively performed using muon neutrinos, since the dominant flavor of neutrinos produced by accelerator based beams is of the muon type. This work reports new measurements of electron neutrino interactions in hydrocarbon, obtained by strongly suppressing backgrounds initiated by muon flavor neutrinos. Double differential cross sections as a function of visible energy transfer, E_avail, and transverse momentum transfer, p_T, or three momentum transfer, q_3, are presented.

Measurement of Electron Neutrino and Antineutrino Cross Sections at Low Momentum Transfer

L. Zazueta

January 14, 2024

§ INTRODUCTION

Predictions of interactions of GeV energy neutrinos with nuclear targets present challenges for experiments seeking to precisely measure neutrino flavor oscillations. Both the DUNE <cit.> and Hyper-Kamiokande <cit.> experiments are designed to measure muon to electron neutrino flavor transitions with uncertainties on the order of one percent. Consequently, accurate predictions of detector efficiencies, backgrounds, energy reconstruction, and cross sections, which make use of the measurements presented here, are needed. Compounding the problem for DUNE and Hyper-Kamiokande is that their near detectors, which are used for studying neutrino interactions, see primarily a flux of muon neutrinos and only a small fraction of electron neutrinos. Therefore constraints that are solely derived from measurements of muon neutrino interactions will need to be theoretically corrected for electron neutrinos.

Electron-neutrino and muon-neutrino charged current cross sections differ for two reasons. First, the tensor structure of the hadronic current and its contraction with the lepton current yields terms that depend explicitly on the square of the lepton mass compared to combinations of the target mass, neutrino energy, and energy transfer <cit.>. These terms are largely negligible for electron neutrinos at accelerator energies; however, they may provide non-negligible sub-leading corrections for muon neutrinos at low neutrino energies or energy transfers. Secondly, momentum and energy transfer limits for electron and muon neutrino interactions differ because the kinematic limits in momentum and energy transfer are a function of the lepton mass. If the energy and three-momentum transfer from the incoming neutrino to the final state lepton are denoted q_0 and q_3 respectively, conservation of energy and momentum require that

E_ν - √(E_ν^2 - 2E_ν q_0 - m_l^2 + q_0^2) < q_3 < E_ν + √(E_ν^2 - 2E_ν q_0 - m_l^2 + q_0^2),

where E_ν is the energy of the incoming neutrino and m_l is the final state lepton mass. A second constraint comes from the relationship between the initial and final state target invariant masses. If we denote the initial invariant mass M and the final mass M+Δ, then this implies a maximum three momentum transfer

q_3 ≤ (2E_ν^2 M - Δ^2 E_ν - 2Δ E_ν M + E_ν m_l^2 + (E_ν+M)√(η)) / (2M(2E_ν+M)),

where

η ≡ 4E_ν^2 M^2 - 4E_ν M (Δ(Δ+2M) + m_l^2) + (m_l^2 - Δ^2)(m_l^2 - (Δ+2M)^2).

Expanding in m_l, we see

q_3 ≤ (E_ν - Δ(Δ+2M)/(2M)) - m_l^2 ((E_ν+M)/(2E_ν M - Δ(Δ+2M)) - 1/(2M)) + O(m_l^4).

A simple case in (<ref>) is where Δ=0, which is approximately correct for elastic and quasielastic scattering from nucleons, giving a limit of q_3 < E_ν - m_l^2/(2E_ν).
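To make the size of these mass effects concrete, the following minimal Python sketch (illustrative only; the values of E_ν, q_0 and Δ are chosen arbitrarily and are not from this analysis) evaluates the two limits above for electron and muon final states:

import math

M_E, M_MU, M_P = 0.000511, 0.10566, 0.938    # lepton and nucleon masses in GeV

def q3_window(E_nu, q0, m_l):
    # allowed q3 range from energy-momentum conservation (first limit above)
    disc = E_nu**2 - 2.0 * E_nu * q0 - m_l**2 + q0**2
    if disc < 0.0:
        return None                          # kinematically forbidden
    r = math.sqrt(disc)
    return (E_nu - r, E_nu + r)

def q3_max(E_nu, m_l, M=M_P, Delta=0.0):
    # maximum q3 from the invariant mass constraint (second limit above)
    eta = (4 * E_nu**2 * M**2
           - 4 * E_nu * M * (Delta * (Delta + 2 * M) + m_l**2)
           + (m_l**2 - Delta**2) * (m_l**2 - (Delta + 2 * M)**2))
    num = (2 * E_nu**2 * M - Delta**2 * E_nu - 2 * Delta * E_nu * M
           + E_nu * m_l**2 + (E_nu + M) * math.sqrt(eta))
    return num / (2 * M * (2 * E_nu + M))

E_nu, q0 = 3.0, 0.4                          # GeV
print(q3_window(E_nu, q0, M_E), q3_window(E_nu, q0, M_MU))   # nu_mu window is narrower
print(q3_max(E_nu, M_E), q3_max(E_nu, M_MU))  # ~ E_nu - m_l^2/(2 E_nu) for Delta = 0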
Both (<ref>) and (<ref>) show that increasing m_l eliminates regions of allowed energy and momentum transfer. Since it is the reactions of muon neutrinos that are studied with high statistics in near detectors, the extrapolation into certain regions of energy and momentum transfer is not well explored experimentally.

Another effect recently discussed in the literature is that of radiative corrections, which have a strong dependence on the mass of the final state lepton <cit.>. This cited work concludes that the effect of radiative corrections can be precisely predicted, although those predictions are not currently in the neutrino interaction models used by experiments.

The effects of electron and muon neutrino interaction differences have been studied within specific models of neutrino interaction cross sections on nucleons and nuclei <cit.>. Such studies illuminate possible differences between electron and muon neutrino interactions but are not exhaustive. With sufficient statistics, and with strong rejection and control of backgrounds, it is possible to directly measure the interactions of electron neutrinos and antineutrinos. This paper describes such a measurement with the MINERvA detector <cit.> using the broadband NuMI <cit.> beam located at Fermi National Accelerator Laboratory. To keep backgrounds low and well-controlled, the measurement is performed at energy and momentum transfers less than ∼1 GeV, much less than the incoming neutrino energy.

A complication in these measurements is that energy transfer cannot be directly measured in a broadband neutrino beam, but instead must be inferred from the visible recoil products in the detector. To approximate what is measured calorimetrically, we employ a proxy used in previous MINERvA measurements <cit.>, i.e., we replace q_0 in the measurement by the quantity E_avail, defined as

E_avail ≡ ∑_protons T_p + ∑_π^± T_π^± + ∑_π^0 E_π^0,

where the sums are over final state particles, and T_X indicates the kinetic, rather than total, energy E_X of a final state particle X. The weak decay products of strange, or heavier quark, baryons are included by adding their total energies to the sum, and by subtracting (or adding) a nucleon mass in the case of baryons (anti-baryons). For scattering from nuclei, this quantity differs from q_0 in that it does not include the kinetic energy of final state neutrons or the rest mass of charged pions, and it ignores any additional excitation energy or mass differences in the final state nuclear system.

§.§ Past Results on Electron Neutrino Interactions at GeV Energies

Because of the relatively small number of electron neutrinos in GeV energy accelerator beams and the high backgrounds, previous measurements are few and are often statistics and background limited. Qualitatively, previous measurements fall into several categories. Some have measured only flux-integrated total cross sections, or total cross sections as a function of derived neutrino energy, with relatively low statistics, from tens to a few hundreds of events <cit.>.
These low statistics measurements are unlikely to constrain electron neutrino interaction models to the level needed by future appearance oscillation experiments. The T2K and MicroBooNE experiments have produced measurements of flux-integrated lepton kinematics for samples of order a hundred or a few hundred events <cit.> that can be compared to models and could be sensitive to large deviations, >10%. Several measurements have additional capabilities to test models. The NOvA experiment has made a high statistics measurement of cross sections as a function of lepton kinematics <cit.>, with nearly 10^4 signal events with good purity, but makes its measurements in relatively wide bins of lepton energy and angle. MINERvA <cit.> and MicroBooNE <cit.> have measured events without final state pions, a sample presumably dominated by single and multi-nucleon knockout, with good purity, and with samples of order 10^3 and 10^2 events, respectively. These samples complement the measurements reported here because of their sensitivity to this exclusive and important reaction channel.

The measurement reported in this article, by contrast to these previous results, is high statistics, with tens of thousands of signal events in each of the ν_e and ν̅_e samples. In addition, it measures correlations between lepton kinematics (as measured primarily by p_T and the derived three momentum transfer, q_3) and a variable measuring the visible energy transfer to the final state. It is inclusive, although restricted to the lower visible energy and momentum transfer regions. It is dominated by single and multi-nucleon knockout events, and reports over a wide range of momentum transfer, up to 1.6 GeV in transverse momentum transfer, p_T, or 1.2 GeV in inferred q_3.

§ MINERVA EXPERIMENT AND NUMI BEAM LINE

The MINERvA detector is a fine-grained tracking calorimeter with a fully active solid-scintillator tracker forming the bulk of the Inner Detector (ID). Upstream of the tracker is an area of nuclear targets – carbon, water, iron, and lead – interleaved with tracking planes. The downstream part of the ID contains the Electromagnetic CALorimeter (ECAL) and Hadronic CALorimeter (HCAL). The ID is surrounded by the side ECAL and side HCAL. Downstream, the MINOS Near Detector served as a muon spectrometer for MINERvA, providing muon charge and momentum measurements for muons with momenta above ∼1.5 GeV/c.

The active detector elements are solid-scintillator strips of triangular cross section, with a 3.3 cm base and 1.7 cm height, arranged in planes where neighboring strips alternate orientation with respect to the beam. Charge sharing between neighboring strips provides a spatial resolution of ∼3 mm. Scintillation light due to a charged particle traversing the scintillator is collected by a wavelength shifting fiber located at the center of each strip and routed through clear optical fibers to M64 Hamamatsu photo-multiplier tubes (PMT). The electrical signals from Front End Boards mounted on top of each PMT box are read out via the data acquisition system.

The detector consists of hexagonal modules containing one or two active planes mounted on a steel frame. The orientation of strips in the planes can be vertical (X), +60^∘ (U), or -60^∘ (V).
Four types of modules were built: (i) tracker modules with strip orientations X+U, X+V; (ii) ECAL modules with 0.2 cm lead sheets, plus planes with strip orientations X+U or X+V; (iii) HCAL modules with a 2.54 cm thick iron plate and a plane with strip orientation X, U, or V; and (iv) target modules with passive carbon, water, iron, or lead targets.

MINERvA utilizes the intense broadband NuMI (Neutrinos at the Main Injector) beam at FNAL. FNAL's Main Injector accelerates protons up to 120 GeV, which are directed onto a carbon target. The pions produced by the proton interactions on the carbon target are focused by two horns and allowed to decay in a 675 meter decay pipe. Undecayed pions are absorbed in the hadron absorber just downstream of the decay region, and the decay muons are absorbed in the following 240 meters of rock before reaching the detector hall. Different energy tunes are available by varying the locations of the target and horns. The two main configurations are known as the Low Energy (LE) and Medium Energy (ME) beam tunes. The results presented here are based on the ME configuration.

NuMI also allows for neutrino or anti-neutrino beam running by sign selecting pions and kaons with the magnetic horn current direction. The forward horn current (FHC) polarity produces predominantly muon neutrinos. The reverse horn current (RHC) polarity produces predominantly muon antineutrinos. However, both FHC and RHC contain anti-neutrino and neutrino contamination, respectively, on the order of a few percent. This cross-contamination results from kaon decays producing neutrinos at the higher end of the energy spectrum and from muon decays that create neutrinos at the lower end. The neutrino flux prediction (see Fig. <ref>) used by MINERvA is derived from a Geant4 simulation of the NuMI beamline, which is constrained by measurements of neutrino-electron elastic scattering <cit.>.

§ NEUTRINO INTERACTION SIMULATION

Neutrino-nucleus interactions are simulated using GENIE v2.12.6. GENIE <cit.> is a Monte Carlo (MC) neutrino interaction event generator which simulates multiple neutrino-nucleus interaction channels exclusively, including the three primary channels – charged current quasielastic (CCQE), resonant pion production (RES), and deep inelastic scattering (DIS) – as well as subdominant channels such as charm and coherent pion production.

The quasielastic interactions are simulated using the Llewellyn-Smith formalism <cit.> with BBBA05 vector form factor modeling <cit.>, where the axial form factor uses the dipole form with an axial mass of M_A = 0.99 GeV. The Rein-Sehgal model <cit.> with an axial mass of M_A^RES = 1.12 GeV is employed to simulate resonance production. DIS interactions are simulated using the leading order model with the Bodek-Yang prescription <cit.>. In addition, "two particle two hole" (2p2h) interactions are simulated using the Valencia model <cit.>, and coherent pion production is simulated by a second Rein-Sehgal model <cit.>. The nucleon initial states are simulated using the relativistic Fermi gas model <cit.> with the additional Bodek-Ritchie tail <cit.>, while FSI is simulated using the INTRANUKE-hA package <cit.>, which is a hadronic cascade model. To better describe MINERvA data, tunes are applied to the predictions of CCQE, RES, and 2p2h interactions, collectively referred to as MINERvA tune v1 and described in Ref. <cit.>.

The coherent channel of GENIE does not simulate coherent scattering off hydrogen atoms, i.e., diffractive pion production.
However, MINERvA data from its low energy beam showed that the contribution of the neutral current (NC) diffractive process is sizable <cit.>. In order to simulate this process, the charged current (CC) diffractive model in GENIE, which is an implementation of the work by Rein <cit.>, is used with two modifications to turn the CC model into an NC model: producing a neutrino in the final state, and reducing the cross section by a factor of two for the expected CC/NC ratio.

§ ELECTRON RECONSTRUCTION AND IDENTIFICATION

The reconstruction method employed in this analysis is a combination of the MINERvA electron neutrino CCQE measurement performed with the Low Energy (LE) dataset <cit.> and the muon neutrino low-recoil analyses <cit.>. The electron candidates are reconstructed using the same method as in the LE analysis, but updated with re-tuned algorithms to cluster hits from neutrino interactions in time and with tracking improvements. An inclusive electron neutrino charged-current interaction sample is selected using the following four cuts. First, events with any MINOS matched tracks are rejected. Second, an electron candidate is constructed for each track that originates from the most upstream vertex and is contained in the MINERvA detector. Hits are considered part of the electron candidate if they are inside a 7.5-degree cone region with its apex at the event vertex and axis along the track direction, or inside a cylindrical region of 50 mm radius extending from the event vertex along the track direction. If there is more than a three radiation length separation between a hit and the next downstream hit, the upstream hit is tagged as the most downstream hit considered part of the electron candidate. Third, the collection of hits comprising the electron candidate is tested by a k-nearest-neighbor (kNN) classifier using three variables: the mean dE/dx, the fraction of energy deposited at the downstream end, and the median shower width <cit.>. The classifier is trained to distinguish electromagnetic showers from track-like particles using simulated single particle samples including electrons, muons, photons, charged pions, and protons. Events are selected if there is at least one electron candidate with a kNN score greater than 0.7. Lastly, the energy of the electron candidate is measured by employing a calorimetric sum of hit energies, corrected for passive materials. If multiple candidates pass the threshold, we choose the most energetic electron candidate as the primary candidate.

§.§ Electron and Photon Separation

Additional selections are necessary because the kNN classifier is not optimized to distinguish between electrons and photons. We use the minimal energy deposition in a 100 mm sliding window from 25 mm to 500 mm downstream of the event vertex (measured along the track direction) of the electron candidate as the discriminator, and require the dE/dx in the minimal window to be less than 2.4 MeV/cm (see Fig. <ref>) for the candidate to be considered an electron. In addition, we reject events with multiple vertices since they are more likely to be neutral-current interactions.

§.§ Reconstruction of Visible Calorimetric Energy

The visible calorimetric energy is calculated as described in the MINERvA muon neutrino low-recoil measurement <cit.>. We assume that hits not included in the electron candidate are the result of energy deposited by the hadronic system. These hits are summed using the calibrated visible energy in each sub-detector and corrected for passive material.
We construct two hadronic energy estimators using the visible energy in each sub-detector, to estimate q_0 and E_avail separately. The energy transfer q_0 is estimated by summing the visible energies in all sub-detectors and applying a spline correction to offset the bias observed in an MC study. E_avail is estimated (see Eq. <ref>) by summing the visible energies in the tracker and ECAL and applying a flat scale factor. We construct these variables such that the reconstructed E_avail is more precise but less accurate, while the reconstructed q_0 is more accurate but less precise. In addition, a simulation study estimates that, on average, 0.8% of the EM shower energy leaks out of the electron candidate, and we therefore correct the reconstructed E_avail and q_0 for this EM shower energy leakage. Finally, we reconstruct q_3 using the lepton kinematics and the reconstructed q_0:

q_3 = √(Q^2 + q_0^2)

Q^2 = 2(E_l + q_0)(E_l - |p⃗_⃗l⃗| cos(θ_l)) - m_l^2

§ BACKGROUNDS AND CONTROL SAMPLES

Figure <ref> shows the distribution of the mean dE/dx quantity described above for both data and simulation. There is a large excess of data events in the background dominated region with mean dE/dx > 2.4 MeV/cm. This excess is similar to what was reported in MINERvA's LE data <cit.>, with the conclusion that it may be explained through diffractive pion production. NC diffractive π^0 production is similar to NC coherent π^0 production in that both are inelastic processes where a lepton and a pion are produced in the forward direction while leaving the struck nucleus in the ground state. The square of the four-momentum transfer, |t|, must be small to preserve the initial state of the nucleus. Since MINERvA's tracker material is a CH-based hydrocarbon, a neutrino interaction can occur on a free proton; this is referred to as diffractive pion production. Since the proton is much less massive than a carbon nucleus, the momentum imparted to the target proton makes it recoil visibly in the MINERvA detector. The recoiling proton deposits its energy upstream of the "vertex" where the π^0 is identified as an electron candidate. There exists a model for an NC diffractive scattering process in GENIE, from the work of Rein <cit.>, that is valid for W > 2.0 GeV. GENIE's implementation of the Rein model is used to predict this background contribution.

We divide the background processes into two cases. The first case is when a π^0 is the only particle produced in the neutrino interaction, which includes coherent NC pion production and NC diffractive pion production. The second is when photons are produced from a π^0 and additional particles are also produced simultaneously, including NC incoherent pion production and non-electron-neutrino CC pion production. Given the 2.5 GeV energy requirement on the electron, the NC background typically consists of events with high hadronic invariant mass, defined by

W^2 = M^2 + 2Mν - Q^2,

where M is the nucleon mass and ν is the energy transfer. Neutrino-electron elastic scattering also contributes to the signal sample at values of E_avail near zero.

§.§ Background Constraints

The backgrounds are constrained by examining sidebands in the high dE/dx region, which is further subdivided into three separate regions used to separate π^0 production channels. We use two variables to define the sidebands: the upstream inline energy E_UIE and the extra energy Ψ E_EM.
The upstream inline energy is defined as the energy deposited inside a reversed 7.5^∘ cone region, as shown in Fig. <ref>. E_UIE is the best discriminator between NC coherent π^0 and NC diffractive π^0 events, since it captures the recoiling proton energy upstream of the event vertex. Ψ is defined as the ratio of the visible energy outside of the electron cone to the energy inside:

Ψ = (E_extra + E_UIE)/E_EM

The division of the sideband regions is shown in Fig. <ref> and summarized as follows: the incoherent π^0 region, defined by Ψ E_EM > 0.5 GeV and dE/dx > 2.4 MeV/cm; the coherent region, defined by Ψ E_EM < 0.5 GeV, E_UIE < 10 MeV, and dE/dx > 2.4 MeV/cm; and the diffractive region, defined by Ψ E_EM < 0.5 GeV, E_UIE > 10 MeV, and dE/dx > 2.4 MeV/cm.

The normalizations of the π^0 backgrounds are each fitted using distributions in both bins of E_avail vs q_3 and E_avail vs p_T to obtain scale factors that represent the best estimate of the normalization of data compared to the GENIE prediction. The signal contribution is also tuned during this global fitting due to its non-negligible contribution in the sideband regions, although that tune is not used when the simulation is compared to the background-subtracted signal. The fitting process, which minimizes the negative log-likelihood assuming Poisson-distributed counts, is done in two steps. The first global fit is done with RHC data in Ψ E_EM vs p_T in bins of E_EM, which optimizes the NC coherent and diffractive processes. The background predictions of the coherent and diffractive π^0 processes are updated by applying scale factors on an event-by-event basis. The second global fit is done in E_avail vs p_T in bins of p_T, which optimizes the non-coherent π^0 and signal processes separately for the respective FHC and RHC samples. The applied scale factors for the FHC and RHC analyses are found in Tables <ref> and <ref>, respectively. Pre- and post-tune distributions in the sideband regions can be found in Figs. <ref>–<ref> for RHC and FHC, respectively.

§.§.§ Tensions in the FHC Constraints

As noted above, the NC coherent and diffractive backgrounds are estimated using the RHC samples because those processes are a larger fraction of the low UIE and high UIE sidebands, respectively, in the RHC beam. However, in the high UIE and incoherent π^0 FHC sidebands, this fit is unable to reproduce the shape of the data as a function of E_avail, as shown in Fig. <ref>. Additional tunes and a systematic uncertainty on those tunes were developed to address this disagreement. We considered two alternate hypotheses, neither of which describes the data well across all of the sidebands. In the first, the NC coherent and diffractive processes are allowed to have an additional normalization in the second global FHC fit. The rationale is that the high energy neutrino components in the two beams, above the focusing peak, are different.
This could affect the relative event rates in the FHC and RHC samples if the cross section has a poorly modeled rate as a function of neutrino energy. In the second hypothesis, a subset of the non-coherent π^0 processes that dominates the region with the observed high UIE disagreement (0.2 GeV < E_avail < 0.5 GeV and p_T < 1 GeV) is enhanced independently from the other non-coherent π^0 processes with a separate scale factor. We use the average of these two fits as our base background prediction, and take the difference between the two as an assessment of the systematic uncertainty in this procedure. Note that the background comes from all of these contributions together, so the systematic underprediction of the FHC high UIE sideband coupled with the systematic underprediction of the incoherent π^0 sideband does not indicate that the background is poorly estimated.

§.§ Interpretation of the Coherent and Diffractive Contributions

As shown in Tables <ref> and <ref>, the scale factors for the NC diffractive π^0 background process as well as the NC coherent π^0 process are large. We believe the explanation for these large scale factors is that the diffractive and coherent π^0 processes are not well modeled by our reference GENIE model. We will discuss in turn the evidence that these events are in fact single-π^0 in nature, the relative strength of the coherent and diffractive processes, and the reliability of the GENIE model prediction as a function of E_π^0.

Measurements of events from the sidebands targeting coherent and diffractive π^0 production, the low UIE and high UIE sidebands respectively, do support the hypothesis that these events contain a high energy π^0. Since electromagnetic cascades spread out transversely to the direction of propagation, there is a range of energies where single-photon showers can be distinguished from multi-photon showers based on transverse size. The median shower width, or median transverse width, measures the extent to which an electromagnetic cascade spreads transversely to its direction of propagation.

Figure <ref> shows the post-background-tune energy outside the electron candidate cone and the vertex region, referred to as the extra energy (E_extra). Most events from the high and low UIE sidebands populate the first few bins and are well described. From these distributions, it is apparent that the events have little non-shower activity. Additionally, the post-background-tune upstream inline energy distribution, shown in Fig. <ref>, indicates that the shape of the diffractive and coherent processes agrees with what we would expect from energy upstream of the event vertex.

We note that the relative rate of the diffractive reaction, with high upstream inline energy, and the coherent events, with low upstream inline energy, is consistent with naive scaling arguments, which would suggest a dependence on the atomic number, A, somewhere between A^1/3 and A^2/3 between carbon and hydrogen. This suggests that the large difference between the two in the GENIE model, approximately a factor of 30, is incorrect.

On the subject of the π^0 energy dependence of the scale factors, a separate MINERvA analysis studying neutrino-induced coherent π^+ production on different targets <cit.> concluded that the Rein-Sehgal and PCAC-based Belkov-Kopeliovich (B-K) models do not accurately describe the angular dependence on θ_π, the energy dependence on E_π, or the A dependence.
The fact that this is also seen in the charged current analog reaction makes the energy-dependent scale factors needed in this analysis more plausible.

§.§ Background Subtracted Signal Distributions

Figure <ref> shows the lepton p_T distribution in the signal region for both the FHC and RHC samples. The FHC sample has approximately 46,700 selected events with a total estimated background of 24,600 events. The RHC sample has approximately 28,300 selected events and a total estimated background of 8,000 events.

§ SEPARATION OF ν_e AND ν̅_e EVENTS

While ν̅_e and ν_e events are indistinguishable in data, the MC simulation provides a prediction for the ν̅_e contribution to the FHC sample and the ν_e contribution to the RHC sample. To correct for the contamination from these events in the respective samples, we form an estimator based on FHC data and the MC simulation that gives a prediction of the ν_e background found in the RHC sample, and vice versa. The procedure is identical for the two measurements, so we will describe the procedure to correct the RHC measurement.

In this procedure, a corrected FHC sample is used to replace the MC prediction for the ν_e event rate in the RHC sample. First, a ratio of RHC to FHC ν_e events, distributed in true neutrino energy, is formed, as seen in Fig. <ref>. This ratio must be applied to the FHC ν_e events to make a prediction for them in RHC. To apply the correction to the data, a neutrino energy estimator is constructed from the reconstructed electron energy and the reconstructed available energy,

E_est = E_e + E_avail

The accuracy with which this energy estimator predicts the true neutrino energy for the samples in this analysis is shown in Fig. <ref>. The estimator value is then used to weight events on an event-by-event basis by the ratio of the fluxes in the two beams, shown in Fig. <ref>.
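The event-by-event correction can be summarized by the following minimal Python sketch (illustrative only: the flux shapes and the event list are hypothetical placeholders for the actual RHC and FHC ν_e flux predictions and the FHC data):

import numpy as np

e_grid   = np.linspace(1.0, 20.0, 39)        # true E_nu grid, GeV
flux_fhc = np.exp(-0.25 * e_grid)            # placeholder nu_e flux shapes
flux_rhc = 0.4 * np.exp(-0.30 * e_grid)

def rhc_weight(E_e, E_avail_reco):
    # energy estimator E_est = E_e + E_avail, then the RHC/FHC flux ratio at E_est
    E_est = E_e + E_avail_reco
    return np.interp(E_est, e_grid, flux_rhc / flux_fhc)

# weighting FHC nu_e candidates turns them into a prediction for the nu_e
# rate in the RHC beam
fhc_events = [(4.2, 0.3), (6.8, 0.7), (3.1, 0.1)]    # (E_e, E_avail) in GeV
weights = [rhc_weight(E_e, Ea) for E_e, Ea in fhc_events]
print(weights)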
A complication is that the FHC data also contain background contributions, as well as contributions from ν̅_e events. The simulation is used to predict the initial ν̅_e background in the FHC sample, and the other backgrounds are predicted as described above with the tunes to the control samples. Each of these contributions is weighted on an event-by-event basis using the energy estimator from the reconstruction, whether the source is data or simulation. After this weighting, the RHC ν_e prediction is formed by taking the corrected FHC data and subtracting the corrected ν̅_e background prediction and the corrected other sources of background. Because the flux correction is made event-by-event, these samples can be used to predict the background in the measured reconstructed variables. This procedure is iterated once, replacing the initial MC prediction of ν̅_e events with the data-corrected version from this procedure. This background is less than a few percent in most bins, and is largest in high p_T bins with high E_avail in RHC or low E_avail in FHC.

§ MEASUREMENT OF DIFFERENTIAL CROSS SECTIONS

The flux-integrated differential cross section per nucleon for kinematic variable x, in bins i, is calculated as

(dσ/dx)_i = (∑_j U_ij (N_j^data - N_j^bkg)) / (ϵ_i T Φ (Δx)_i)

where (dσ/dx)_i is the differential cross section as a function of x in bin i, U_ij is the unfolding matrix, N_j^data is the measured number of events in bin j of the reconstructed variable x, N_j^bkg is the predicted number of background events in bin j, ϵ_i is the estimated acceptance in bin i, T is the number of nucleon targets, Φ is the integrated neutrino (or anti-neutrino) flux, and (Δx)_i is the bin width normalization of bin i.

The double differential cross sections d^2σ/dE_avail dq_3 and d^2σ/dE_avail dp_T are calculated using the selected number of events after subtracting the number of background events predicted by the simulation. An iterative unfolding approach using the D'Agostini <cit.> method, as implemented in RooUnfold <cit.>, was used to correct the background-subtracted distributions for resolution effects. Several unfolding studies were carried out using randomly thrown pseudodata samples generated from alternate physics models. The number of unfolding iterations is determined through χ^2 values calculated by comparing the unfolded pseudodata with the truth pseudodata. The alternate physics models used in the studies include a modification of the 2p2h enhancement to include a reweight giving nn/pp or np pairs additional strength, and a modification of the RPA suppression affecting the low Q^2 or high Q^2 regions. The results of the unfolding studies for both the RHC and FHC analyses indicate that different numbers of unfolding iterations are required for the two distributions, based on the different minimum χ^2 values: E_avail vs p_T is unfolded with 10 iterations and E_avail vs q_3 with 15 iterations.

The number of events after unfolding is then divided by the efficiency. In this analysis, a few bins on the edge of the phase space contain low statistics, leading to large statistical uncertainty in the efficiency estimation. To account for this, the efficiency in the low statistics bins is estimated by the average of adjacent bins. The RHC efficiency is found in Figs. <ref> and <ref> for the E_avail vs q_3 and E_avail vs p_T distributions, respectively. In both cases, the efficiency decreases at higher E_avail values.
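Putting the pieces of the equation above together, the final normalization step can be sketched minimally in Python (illustrative only: the three-bin inputs are invented, and only T and the anti-neutrino flux integral, quoted below in the text, correspond to the analysis):

import numpy as np

def diff_xsec(N_data, N_bkg, U, eff, T, flux, widths):
    # (dsigma/dx)_i = sum_j U_ij (N_j^data - N_j^bkg) / (eff_i T Phi (dx)_i)
    unfolded = U @ (N_data - N_bkg)
    return unfolded / (eff * T * flux * widths)

N_data = np.array([1200.0, 900.0, 400.0])    # selected events per bin
N_bkg  = np.array([300.0, 250.0, 150.0])     # predicted background per bin
U      = np.identity(3)                      # stand-in for the D'Agostini unfolding matrix
eff    = np.array([0.25, 0.30, 0.20])        # acceptance per bin
T      = 3.234e30                            # nucleon targets
flux   = 2.34e12                             # integrated anti-neutrino flux, 1/cm^2
widths = np.array([0.1, 0.1, 0.2])           # bin widths, GeV
print(diff_xsec(N_data, N_bkg, U, eff, T, flux, widths))   # cm^2 / GeV / nucleon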
This is most likely because it is more difficult for the tracking algorithm to reconstruct a proper electron candidate track at higher E_avail due to the greater amount of hadronic activity overlapping with EM showers. The equivalent FHC efficiencies, which show the same inefficiency for high-E_avail events, are found in Figs. <ref> and <ref>. The normalization factors include 3.234× 10^30 nucleon targets and the flux integral from 0 to 100 GeV, for a total integrated flux value of Φ=2.34 × 10^12 ν̅/cm^2 for the anti-neutrino analysis and Φ=6.7±0.2 × 10^11 ν/cm^2 for the neutrino analysis. The double differential cross sections d^2 σ/dE_availdq_3 and d^2 σ/dE_availdp_T are found in Figs. <ref> and <ref> for the RHC analysis and Figs. <ref> and <ref> for the FHC analysis. §.§ Systematic Uncertainties Statistical uncertainties dominate systematic uncertainties in nearly every bin of these measurements. Uncertainties on the measured cross sections can be categorized into four major groups: flux, detector model, interaction model, and MINERvA tunes. The breakdown of the RHC fractional systematic uncertainty for d^2 σ/dE_availdq_3 is shown in Fig. <ref> and for d^2 σ/dE_availdp_T in Fig. <ref>. The equivalent plots for FHC are shown in Figs. <ref> and <ref> respectively. The uncertainties related to the flux can be broken down into two major categories: focusing uncertainties associated with all components related to the NuMI beam, and hadron production uncertainties related to the uncertainty of hadron production from the proton beam incident on the graphite target. The flux uncertainty is fairly constant with E_avail and q_3/p_T, around 4.7%. The detector model uncertainties consist of the uncertainties pertaining to the simulation of particle propagation through the detector, particle and kinematic reconstruction, and the particle response of the detector. The detector model uncertainty can be broken into two groups: hadronic energy and electron reconstruction. A systematic uncertainty is assessed on the correction for leakage of electron energy outside of the electron cone. The energy leakage outside the cone leads to an overestimation of the available energy and was estimated to be 0.8% of the electron energy. We estimate the energy leakage by simulating electron-initiated showers with various energies and angles. By comparing this simulation to our sample of neutrino-electron elastic scattering (ν + e →ν + e) events, we conclude that the simulation underestimates the energy leakage by 5± 2 MeV, and the 2 MeV uncertainty from this study is the assigned systematic uncertainty. The leading uncertainty in q_3 bins with the highest E_avail is the leakage uncertainty. The highest p_T bin shows large systematic error values, similar to the GENIE error summary, due to the low number of events in that bin. The leakage uncertainty is the leading systematic uncertainty for the lower p_T bins. The interaction model uncertainties encompass GENIE interaction model uncertainties as well as GENIE final state interaction uncertainties. For the RHC analysis, the leading systematic uncertainty for most bins is the axial mass M_A resonance production uncertainty (MaRES), which adjusts M_A in the Rein-Sehgal cross section, affecting the shape and normalization.
The next leading systematic is the M_V resonance production uncertainty (MvRES), which adjusts the vector mass M_V in the Rein-Sehgal cross section, and the charged current resonance normalization (NormCCRES), which changes the normalization of the CC Rein-Sehgal cross section. As shown in Figs. <ref> and <ref>, CC resonant pion production has a large contribution to the cross section measurement. Since the GENIE MaRES and MvRES parameters control resonant pion production, it is not surprising that they are among the leading contributors to the uncertainty. For the RHC analysis, the probability for elastic scattering of nucleons while conserving the total rescattering probability (FrInelas_N) contributes to the first bin of q_3 and the highest E_avail bin, with less contribution for higher q_3 bins. Most likely this would include a neutron losing a large amount of energy in a collision, resulting in a proton in the final state. The systematic error breakdown for the MnvTunes shows that the low Q^2 tune affects the highest bin of E_avail for a given q_3 and p_T. This is expected because these are the regions in which the process is most dominant. The low recoil 2p2h tune has a larger systematic uncertainty for values of q_3 compared to p_T. The shifting of the 2p2h model impacts the E_avail distribution, and the effects are seen more easily in q_3 due to the model dependency. §.§ Discussion and Interpretation of Anti-neutrino and Neutrino Results The measurement shows more rate than predicted for the anti-neutrino analysis in the first bin of E_avail in the cross section, both for E_avail vs q_3 in Fig. <ref> and for E_avail vs p_T as seen in Fig. <ref>. The events predicted to populate the first bin of available energy tend to be events where the final state is neutral, typically final state neutrons, and in the first bin of E_avail ∼ 60-90% of the model prediction is charged current quasielastic events. The quasielastic events are expected to be the dominant contributor to the first E_avail bin because, in the absence of final state interactions, there is only a lepton and a neutron in the final state. Looking more closely at the q_3 bin of 0.4-0.6 GeV in Fig. <ref>, there is a population of inelastic events that leak into the first bin of E_avail. It is possible that some type of inelastic event with mostly neutrons in the final state is not being correctly simulated. It could also involve events where the final state pion does not have much energy and is absorbed within the nucleus, resulting in only final state neutrons. The last proposal to explain the high cross section in the first E_avail bin is that the MC simulation predicts too many quasielastic events at higher values of E_avail. Increasing the population of quasielastic MC events near zero E_avail would improve this prediction. In contrast to the anti-neutrino results, there is a deficit of data events relative to the simulated prediction for the neutrino analysis in the first bin of E_avail, seen in the cross section plots for both E_avail vs q_3 (Fig. <ref>) and E_avail vs p_T (Fig. <ref>) in the lowest respective bins.
§.§ Comparison to Muon Neutrino and Antineutrino Measurements These results can be compared with MINERvA's measurement of the analogous samples from muon neutrinos and antineutrinos. However, there are differences in the measurements that make a direct comparison challenging. In particular, all of the measurements of muon neutrino and antineutrino processes were made with neutrino spectra which are substantially different from the ones measured in these results. The ν̅_e cross section result would be most appropriately compared with MINERvA's low recoil LE ν̅_μ result <cit.>. In addition to the flux differences, there are also selection differences between the two analyses, so the signal definitions are not identical. The ν̅_μ result requires a lepton momentum of greater than 1.5 GeV, while the ν̅_e analysis requires a lepton energy greater than 2.5 GeV to eliminate a large π^0 background at low electron energy. In addition, the ν̅_e analysis has no scattering angle requirements, while the ν̅_μ analysis requires the lepton scattering angle to be less than 20 degrees due to the difficulty of reconstructing high angle muons. The analyses are also reported using different binning. Table <ref> shows a binning comparison between the ME and LE results for E_avail, and Table <ref> shows the comparison for q_3. Lastly, the two analyses took different approaches in unfolding. The ν̅_e analysis unfolds using coarse binning and a large number of iterations, and the ν̅_μ analysis unfolds using fine binning and a small number of iterations. This is due in large part to the difference in observables. The ME ν̅_e analysis has to account for the energy leakage outside the electron cone and into the available energy. Overall, the LE ν̅_μ analysis has a much better energy resolution compared to the ν̅_e analysis. With the consideration of the differences between the two analyses, the cross section result for the LE ν̅_μ is shown in Fig. <ref> and the relevant cross section bins for the ν̅_e are shown in Fig. <ref>. There are similar features between the two results. As expected from the cross section model prediction, both cross section results have quasielastic events as the dominant contributor in the first bin of E_avail. There is a population of 2p2h events for values of low E_avail. The delta resonance becomes the dominant process at the higher values of E_avail in both predicted cross sections. There is a noticeable difference between the data results for values of ∼ E_avail > 0.2 GeV. The reference MC prediction for the ν̅_μ cross section consistently exceeds the data, while the ν̅_e prediction falls below the data. Both cross section results contain many events that have no available energy. This creates a sharp peak at zero E_avail followed by a cross section that falls slowly compared to the size of the peak in the first E_avail bin. Therefore, to compare the first two E_avail bins for both results, we assume that the peak at zero E_avail is a Kronecker delta function-like peak and the remaining cross section distribution is flat. To determine the magnitude of the delta function, we subtract the second E_avail bin from the first, or the flat distribution from the peak, leaving us with a dσ/dE_avail value. We multiply this dσ/dE_avail by the bin width so that we end up with a cross section that is differential only in q_3. This process is repeated for each result's data and MC values.
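The peak extraction is simple arithmetic; a minimal sketch (our own illustration, with hypothetical names and purely illustrative numbers) for a single q_3 bin reads:

import numpy as np

def delta_peak_cross_section(dsigma_dE, first_bin_width):
    """Magnitude of the zero-E_avail delta-like peak in one q_3 bin.

    dsigma_dE       : dsigma/dE_avail values of the first two E_avail bins
    first_bin_width : width of the first E_avail bin (GeV)
    Assumes the second bin measures a flat continuum underneath the peak.
    """
    peak_height = dsigma_dE[0] - dsigma_dE[1]  # peak minus flat distribution
    return peak_height * first_bin_width       # cross section per q_3 bin

# Illustrative numbers only:
# delta_peak_cross_section(np.array([5.0, 1.2]), 0.08)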
The resultant bin combinations for the two samples are 0.0 < E_avail(GeV) < 0.08 for the ν̅_e cross section result and 0.0 < E_avail(GeV) < 0.07 for the ν̅_μ cross section result. Tables <ref> and <ref> summarize the results. Tables <ref> and <ref> are the correlation matrices for the reported bins, with the correlation matrix ordering defined in Table <ref>. The conclusion drawn from the comparison of the data/MC peak at zero is that the ME ν̅_e result is consistent with the LE ν̅_μ result in all q_3 bins except the first, where the ME ν̅_e result has a significant enhancement over the simulation. Similarly, we can compare the ν_e cross section result with MINERvA's low recoil ME ν_μ result <cit.>. As with the comparison above for the ν̅_e, there are significant differences between the ν_e and ν_μ analyses, including the flux and features of the reconstruction. The cross section results (ME ν_μ and ν_e) are compared in Fig. <ref>. We conclude the ν_e result is qualitatively consistent with the ν_μ results, except in the lowest q_3 and E_avail bin, where there is some indication of a difference. In this bin the measured ν_e cross section is smaller than the ν_μ cross section, albeit with large uncertainties. This document was prepared by members of the MINERvA Collaboration using the resources of the Fermi National Accelerator Laboratory (Fermilab), a U.S. Department of Energy, Office of Science, HEP User Facility. Fermilab is managed by Fermi Research Alliance, LLC (FRA), acting under Contract No. DE-AC02-07CH11359. These resources included support for the MINERvA construction project, and support for construction also was granted by the United States National Science Foundation under Award No. PHY-0619727 and by the University of Rochester. Support for participating scientists was provided by NSF and DOE (USA); by CAPES and CNPq (Brazil); by CoNaCyT (Mexico); by ANID PIA / APOYO AFB180002, CONICYT PIA ACT1413, and Fondecyt 3170845 and 11130133 (Chile); by CONCYTEC (Consejo Nacional de Ciencia, Tecnología e Innovación Tecnológica), DGI-PUCP (Dirección de Gestión de la Investigación- Pontificia Universidad Católica del Peru), and VRI-UNI (Vice-Rectorate for Research of National University of Engineering) (Peru); NCN Opus Grant No. 2016/21/B/ST2/01092 (Poland); by Science and Technology Facilities Council (UK); by EU Horizon 2020 Marie Skłodowska-Curie Action; by a Cottrell Postdoctoral Fellowship from the Research Corporation for Scientific Advancement; by an Imperial College London President's PhD Scholarship. D. Ruterbories gratefully acknowledges support from a Cottrell Postdoctoral Fellowship, Research Corporation for Scientific Advancement award number 27467 and National Science Foundation Award CHE2039044. S.M. Henry's work was supported in part by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1419118. We thank the MINOS Collaboration for use of its near detector data. Finally, we thank the staff of Fermilab for support of the beam line, the detector, and computing infrastructure. | http://arxiv.org/abs/2312.16631v1 | {
"authors": [
"S. Henry",
"H. Su",
"S. Akhter",
"Z. Ahmad Dar",
"V. Ansari",
"M. V. Ascencio",
"M. Sajjad Athar",
"A. Bashyal",
"M. Betancourt",
"J. L. Bonilla",
"A. Bravar",
"G. Caceres",
"G. A. Díaz",
"J. Felix",
"L. Fields",
"R. Fine",
"P. K. Gaur",
"S. M. Gilligan",
"R. Gran",
"E. Granados",
"D. A. Harris",
"A. L. Hart",
"J. Kleykamp",
"A. Klustová",
"M. Kordosky",
"D. Last",
"A. Lozano",
"X. -G. Lu",
"S. Manly",
"W. A. Mann",
"C. Mauger",
"K. S. McFarland",
"M. Mehmood",
"B. Messerly",
"J. G. Morfín",
"D. Naples",
"J. K. Nelson",
"C. Nguyen",
"A. Olivier",
"V. Paolone",
"G. N. Perdue",
"C. Pernas",
"K. -J. Plows",
"M. A. Ramírez",
"R. D. Ransome",
"N. Roy",
"D. Ruterbories",
"H. Schellman",
"C. J. Solano Salinas",
"V. S. Syrotenko",
"E. Valencia",
"N. H. Vaughan",
"A. V. Waldron",
"C. Wret",
"B. Yaeggy",
"L. Zazueta"
],
"categories": [
"hep-ex"
],
"primary_category": "hep-ex",
"published": "20231227163638",
"title": "Measurement of Electron Neutrino and Antineutrino Cross Sections at Low Momentum Transfer"
} |
http://arxiv.org/abs/2312.15940v1 | {
"authors": [
"Yann Sakref",
"Maitane Muñoz-Basagoiti",
"Zorana Zeravcic",
"Olivier Rivoire"
],
"categories": [
"physics.chem-ph"
],
"primary_category": "physics.chem-ph",
"published": "20231226080829",
"title": "On Kinetic Constraints That Catalysis Imposes on Elementary Processes"
} |
|
| http://arxiv.org/abs/2312.16314v1 | {
"authors": [
"Kathryn Haymaker",
"Hiram H. López",
"Beth Malmskog",
"Gretchen L. Matthews",
"Fernando Piñero"
],
"categories": [
"cs.IT",
"math.AC",
"math.AG",
"math.IT",
"94B05, 11T71, 14G50"
],
"primary_category": "cs.IT",
"published": "20231226194836",
"title": "Mathematical LoRE: Local Recovery of Erasures: Local recovery using polynomials, curves, surfaces, and liftings"
} |
| http://arxiv.org/abs/2312.16082v1 | {
"authors": [
"Guofeng Zhang",
"Jinghao Li",
"Zhiyuan Dong",
"Ian R. Petersen"
],
"categories": [
"quant-ph",
"cs.SY",
"eess.SY"
],
"primary_category": "quant-ph",
"published": "20231226151000",
"title": "The Quantum Kalman Decomposition: A Gramian Matrix Approach"
} |
FALCON: Feature-Label Constrained Graph Net Collapse for Memory Efficient GNNs Christopher Adnel and Islem Rekik, Member, IEEE C. Adnel and I. Rekik are affiliated with BASIRA Lab, Imperial-X and Department of Computing, Imperial College London, London, UK. Corresponding author: Islem Rekik, Email: [email protected], <https://basira-lab.com> and <https://ix.imperial.ac.uk/>.=================================================================================================================================================================================================================================================================================================================== Graph Neural Network (GNN) ushered in a new era of machine learning with interconnected datasets. While traditional neural networks can only be trained on independent samples, GNN allows for the inclusion of inter-sample interactions in the training process. This gain, however, incurs additional memory cost, rendering most GNNs unscalable for real-world applications involving vast and complicated networks with tens of millions of nodes (e.g., social circles, web graphs, and brain graphs). This means that storing the graph in the main memory can be difficult, let alone training the GNN model with significantly less GPU memory. While much of the recent literature has focused on either mini-batching GNN methods or quantization, graph reduction methods remain largely scarce. Furthermore, present graph reduction approaches have several drawbacks. First, most graph reduction methods focus only on the inference stage (e.g., condensation, pruning, and distillation) and require full-graph GNN training, which does not reduce the training memory footprint. Second, many methods focus solely on the graph's structural aspect, ignoring the original feature-label distribution, resulting in a skewed post-reduction label distribution. Here, we propose a Feature-Label COnstrained graph Net collapse, FALCON, to address these limitations. Our three core contributions lie in (i) designing FALCON, a topology-aware graph reduction technique that preserves the feature-label distribution by introducing K-Means clustering with a novel dimension-normalized Euclidean distance; (ii) implementation of FALCON with other state-of-the-art (SOTA) memory reduction methods (i.e., mini-batched GNN and quantization) for further memory reduction; (iii) extensive benchmarking and ablation studies against SOTA methods to evaluate FALCON memory reduction. Our comprehensive results show that FALCON can significantly collapse various public datasets (e.g., PPI and Flickr to as low as 34% of the total nodes) while keeping equal prediction quality across GNN models. Our FALCON code is available at <https://github.com/basiralab/FALCON>. Graph reduction, Affordable AI, Graph Neural Networks, Graph Topology, Memory efficiency § INTRODUCTION The advent of Graph Neural Networks (GNNs) has fostered a new neural network learning paradigm where inter-sample dependencies can be taken into account in addition to each sample's individual features. Consequently, this opens up neural networks to many new practical applications that involve graph datasets such as network neuroscience <cit.>, retail recommendations <cit.>, fraud detection <cit.>, and drug discovery <cit.>.
However, scalability remains one of the main barriers in GNNs, as most of these practical applications usually involve large and complex graph datasets. During our experiments, out-of-memory (OOM) errors were also very frequent when training GNN models (e.g., GCN), even on relatively smaller datasets (e.g., Flickr). Furthermore, most GNN models such as Graph Convolutional Network (GCN) <cit.>, Graph Attention Network (GAT) <cit.>, and Neural Message Passing Scheme (NPMS) <cit.> couple the feature aggregation with feature transformation on each layer. This coupling fosters data point interconnectivity, making mini-batching complex. Consequently, early GNN training is mostly done with full-batched gradient descent, which does not scale well <cit.>. While mini-batched stochastic gradient descent can still be done by including one-hop neighborhood nodes in the mini-batch for every GNN layer, this results in exponential time complexity relative to the number of layers <cit.>. Related Works. Many recent works aim to make these GNN methods more scalable through clever mini-batching schemes considering node interconnectivity. One of the early mini-batching methods, Cluster-GCN <cit.>, uses graph clustering algorithms such as METIS <cit.> to partition the graph into subgraphs, formed by minimizing the inter-cluster edges while maximizing the within-cluster edges. A mini-batch can be composed of a combination of these clusters, and out-of-batch edges are neglected. GNN AutoScale <cit.> improves on Cluster-GCN by using stale out-of-batch node embeddings (historical embeddings) to compensate for the discarded out-of-batch edges. Similarly, LMC-GNN <cit.> is also based on GNN AutoScale but utilizes a convex combination of historical embeddings and incomplete up-to-date messages to compensate for the out-of-batch edges. Interestingly, some recent works in this stream took a different path in constructing the mini-batch. SGC <cit.> and SIGN <cit.> simply decouple the feature aggregation and feature transformation phases. Under the assumption that the feature aggregation is fixed, SIGN and SGC pre-compute the aggregation and leave only the feature transformation to an MLP with independent data points. Another stream of work in memory-efficient GNNs is quantization methods, which store each layer's data in a low-precision format to reduce memory usage. One of the most recent works along this direction, EXACT <cit.>, applies the activation compression method originally developed for Convolutional Neural Networks, ActNN <cit.>. Lastly, a relatively under-explored stream, and the main focus of the present work, is graph reduction methods. Such methods aim to find a compressed representation of the training graph which serves as a memory-efficient alternative to the original graph. Since graph reduction methods only compact the training graph, they are independent of the type of GNN model being used and are versatile enough to be applied in most GNN training pipelines. Additionally, most graph reduction methods, such as <cit.> and <cit.>, can be considered a pre-processing task which is only performed once at the beginning of the training pipeline. Some works in the graph reduction stream merge neighbouring nodes based on their label similarity (i.e., KL divergence) <cit.>, while another utilizes graph coarsening algorithms <cit.> to reduce the training graph. In our recent work <cit.>, we experimented with various graph centrality measures to quantify individual node importance and drive the contraction.
However, most of these existing graph reduction methods only consider the graph structural information and do not consider node features and labels, which implies that the original label and feature distribution might not be preserved. There are other methods relating to graph reduction, such as condensation <cit.> and pruning <cit.>. However, these methods are intended for the inference stage and do not reduce training memory usage, as the reduced graph is learned through full-graph GNN training. In this paper, we propose a novel graph-collapsing technique based on both the graph topological characteristics and the feature and label distribution across nodes. We name our method FALCON: FeAture-Label COnstrained graph Net collapse, and further improve over <cit.>, our original work where we utilize graph centrality measures to rank each node based on its topological importance. However, improving on a purely topological approach, we also consider the nodes' feature and label diversity to minimize any biases or imbalance introduced by the graph collapse. These feature embeddings and labels are used to guide the graph collapse to preserve the original feature and label diversity. Additionally, we also apply FALCON to the four-stage memory-efficient GNN framework introduced by our seed paper <cit.> along with various other state-of-the-art (SOTA) methods for mini-batching and quantization, as visualized in Figure <ref>. However, since we are focusing on GNN training, we only work with the first three stages (i.e., graph reduction, mini-batch construction, and layer compression). The compelling aspects of our work are summarized as follows:* FALCON presents the first GNN training-focused graph reduction method that utilizes both graph topological information and node feature and label data. FALCON achieves this through diversity-constrained graph centrality collapse, which preserves the feature-label distribution.* FALCON has been extensively benchmarked on four public datasets from various fields, and we found all benchmark datasets to be highly collapsible, indicating its potential on a wide range of datasets.* Being part of a pre-processing stage, FALCON is highly compatible with many existing memory-efficient GNN frameworks. Furthermore, we propose a memory-efficient GNN framework incorporating FALCON and show the holistic impact of combining FALCON with other SOTA methods. § BACKGROUND Problem statement. Let 𝒢 = {𝒱,ℰ,X,Y} denote a graph with 𝒱 as the set of nodes or samples, ℰ as the set of edges, X ∈ℝ^|𝒱| × F as the feature matrix, and Y ∈ℝ^|𝒱| × L as the label matrix. The objective of FALCON is to find a collapsed version of 𝒢, namely 𝒢' = {𝒱',ℰ',X',Y'}, such that 𝒢' fits within the defined memory budget Ψ and a GNN model f_θ trained on 𝒢' will be approximately similar to one trained on 𝒢. Formally, this is written as Equation <ref>. Here, we mainly work with node classification tasks on unweighted and undirected graphs. 𝐅𝐀𝐋𝐂𝐎𝐍(𝒢) = 𝒢' s.t. f_θ(𝒢) ≈ f_θ(𝒢'), |f_θ(𝒢')| < Ψ < |f_θ(𝒢)| Where |f_θ(𝒢)| denotes the required memory budget to learn the mapping function f_θ on 𝒢. Graph Convolution Network Model. One of the earliest and least complex GNN models, GCN <cit.> has layers composed of two phases, namely feature aggregation and feature transformation. While GCN feature aggregation uses a normalized adjacency matrix à and does not involve any learnable parameters, it is coupled with the feature transformation phase due to the non-linear activation (e.g., ReLU) between each layer.
Equation <ref> shows a two-layer GCN network as follows: Ŷ = σ(Ãσ(ÃX^0θ^1)θ^2), Ã = D^-1/2 (A+I) D^-1/2, D_ii = 1 + ∑_j A_ij, where A denotes the adjacency matrix, θ^l denotes the weights of the lth layer, and D denotes the diagonal matrix composed of node degrees (with self-loops). This means that mini-batched training using GCN is not straightforward. We must include an additional one-hop neighborhood for each GCN layer, leading to an exponential time complexity w.r.t. the number of layers. Hence, GPU memory footprint is a substantial obstacle in GCN training. § PROPOSED FALCON This section details FALCON[<https://github.com/basiralab/FALCON>]. As shown in Figure <ref>, FALCON computation consists of two paths: centrality computation to encode topological awareness, and preservation of the feature-label distribution by normalized clustering. Centrality-Based Graph Collapse. Centrality measures are classical yet powerful tools to quantify node importance in the graph structure. Nodes deemed important usually serve as hubs for others to communicate. Hence, centrality is usually defined based on factors relating to this "hubness" (e.g., the number of connections and the shortest distance to every other node). Based on this, <cit.> contracts the graph starting from the least important node by merging it with its adjacent hub (the most important node within its neighborhood). In other words, they merge the weakest nodes iteratively with their strongest neighbors until the defined node budget Ψ is fulfilled. Edge collapse is done for each merging to maintain the graph connectivity and avoid fragmenting the graph, as depicted in Figure <ref>. Although pure centrality-based collapse does not guarantee the preservation of the original feature-label distribution, it remains a promising metric to quantify node importance. Hence, we experimented with the following measures:* Degree Centrality (DC). DC is the simplest centrality, based only on the node's degree (a higher degree incurs higher centrality). DC can be computed using DC(v_i)=∑_i ≠ j A_ij with A as the graph adjacency matrix. Computation of DC is relatively fast as we only need to go through the set of edges ℰ in linear time O(|ℰ|). * Betweenness Centrality (BC). BC counts the shortest paths between any pair of nodes that cross the node (more crossings incur higher centrality). The computation of exact BC can be quite expensive, with O(|𝒱|.|ℰ|) time complexity <cit.>. Fortunately, BC can be approximated by randomly sampling a small set of nodes 𝒱_k and computing BC based on 𝒱_k. This leads to an upper bound time complexity of O(|𝒱_k|.|ℰ|) <cit.>. Formally, BC is defined as BC(v_i)=∑_i ≠ j ≠ kσ_jk(v_i)/σ_jk, where σ_jk(v_i) is the number of shortest paths between v_j and v_k passing through v_i and σ_jk is the total number of shortest paths between v_j and v_k. * Closeness Centrality (CC). CC measures how close a node is to every other node in the graph. This is naturally computationally costly as it involves computing the shortest distance between all pairs of nodes. For sparse graphs, CC has O(|𝒱|.|ℰ|) time complexity <cit.>, making it one of the slowest centralities to compute. Formally, CC is defined as CC(v_i) = |𝒱|-1/∑_i ≠ j l_ij, where l_ij is the shortest distance between v_i and v_j.* PageRank Centrality (PR).
Originally developed by Google <cit.> to rank websites for their search engine, PR measures a node's centrality by taking into account the quality of the neighboring nodes and the probability of a random walker from that node visiting a node of interest. PR centrality is formally defined as PR(v_i)=(I-α A D^-1)^-1 1_i, with D as the diagonal matrix of node degrees and α as a damping factor usually set to 0.85. PR can be computed with O(|𝒱|+|ℰ|) time complexity using the power iteration method. * Eigenvector Centrality (EC). An improvement over the simple degree centrality, EC takes into account both the degree of a node and the quality of the neighboring nodes (a higher quality connection incurs higher centrality) <cit.>. As the name suggests, computation of eigenvector centrality involves eigenvalue computations, and the centrality is formally defined as EC(v_i) = 1/λ_1∑_j A_ijx_j, with λ_1 as the highest eigenvalue and x_j as the jth component of the corresponding eigenvector. EC can be computed using the power iteration method with O(|𝒱|+|ℰ|) <cit.> time complexity. Feature-Label Constraint. While collapsing a graph purely based on centrality can theoretically maintain the graph's most important nodes, it remains agnostic to the distribution of node features and labels. Hence, there is no guarantee that the original feature and label distributions of the uncollapsed graph are preserved. In the worst case, we can have a certain set of training labels (or features) completely obliterated by the collapse if all of the nodes with the corresponding labels have weak centrality measures. The collapse depicted in annotation A of Figure <ref> visualizes an example scenario where we have a set of labels entirely removed from the collapsed graph due to consisting only of weaker centrality nodes. Hence, to preserve the original distribution, we propose a clustering-based scheme that utilizes K-Means clustering and a modified Euclidean distance metric to cluster the nodes based on their similar features and labels. The main idea is to cluster the nodes such that each cluster represents a distinct group of nodes based on their features and labels. Afterward, we can distribute the "collapse" among these groups (e.g., suppose we are collapsing 50% of the nodes and we have three feature-label clusters; we can collapse each cluster proportionally by ∼50%). Provided we have enough clusters to capture the original feature-label distribution, the collapsed nodes' relative distribution will be similar to that of the original graph. The collapse depicted in annotation B of Figure <ref> visualizes this feature-label distribution preserving collapse. Dimension Normalized Euclidean Distance. In the case of multi-label classification, the most straightforward way to cluster the nodes based on both similar features X ∈ℝ^|𝒱| × F and labels Y ∈ℝ^|𝒱| × L is by concatenating both and performing the clustering based on [X, Y]. Similarly, with multi-class classification, we can perform one-hot encoding on the labels and concatenate it with the features.
However, the concern with such an approach is that the prioritization between features and labels is going to be affected by the dimensions of the features and labels (i.e., a higher feature dimension will skew the Euclidean distance metric to prioritize features over labels, and vice versa, as shown in Equation <ref>). D(v_i,v_j)^2 = ∑_f^F(x_if - x_jf)^2 + ∑_l^L(y_il - y_jl)^2 Where v_i denotes node i, x_if denotes the fth feature of node i, and y_il denotes the lth label of node i. Therefore, we further introduce a normalization scheme on the Euclidean distance with an added hyperparameter to denote the prioritization, γ∈ [0,1], as shown in Equations <ref> and <ref>: D(v_i,v_j)^2 = α∑_f^F(x_if-x_jf)^2 + β∑_l^L(y_il-y_jl)^2, α = γmax(F,L)/F, β = (1-γ)max(F,L)/L. Here, γ = 0 means that we are only considering the labels, while γ = 1 means only features are being considered. We can use γ = 0.5 to consider both features and labels equally. Note that both features and labels need to be scale-normalized to fall between 0 and 1 inclusive. We can rewrite Equation <ref> by including the α and β terms as part of the features x and labels y, as shown in Equation <ref>. This means we can apply element-wise multiplication to the features and labels with √(α) and √(β), respectively. Then we can use any clustering algorithm such as K-Means based on our proposed node-to-node distance. Formally, we define M ∈ℝ^|𝒱| × (F + L) as the normalized feature-label concatenation, which is computed using Equation <ref>. D(v_i,v_j)^2 = ∑_f^F(√(α)x_if-√(α)x_jf)^2 +∑_l^L(√(β)y_il-√(β)y_jl)^2, M = (√(α)⊙ X)⊕(√(β)⊙ Y). Next, we cluster the nodes based on M to get the relatively distinct node groups based on their features and labels, which we can use to balance the cluster proportions. We then collapse each cluster using our topology-aware collapse method as illustrated in Figure <ref>. The resulting collapsed graph naturally maintains the original feature and label distribution. The FALCON algorithm is given in appendix A. FALCON Algorithm. Finally, we present the pseudocode for the proposed FALCON (Algorithm <ref>). The parameters Ψ, η, γ, and ζ denote the training node budget, number of feature-label clusters, feature-label prioritization constant, and centrality metric used, respectively. § MEMORY-EFFICIENT GNN FRAMEWORK This section details the choice of the mini-batch construction and layer compression stages in the 4-stage memory-efficient GNN framework (Figure <ref>). This framework is used to further minimize the GPU memory footprint as much as possible in addition to FALCON's graph reduction, and to demonstrate the holistic potential of combining FALCON with other SOTA memory-efficient GNN methods. As shown in Figure <ref>, the memory-efficient framework comprises FALCON for the graph reduction, the SIGN model for the mini-batch construction, and activation quantization for the layer-wise compression. However, the application of FALCON is not limited to the SIGN model, as shown in our experiments later on, where we apply FALCON to other SOTA GNN models. Mini-batch construction stage. Here, many recent SOTA works focus on mini-batching in coupled feature aggregation and transformation models (e.g., GCN). However, most of these methods have limitations, such as the removal of out-of-batch edges (e.g., Cluster-GCN <cit.>) or expensive CPU-to-GPU data transfer for out-of-batch edges (e.g., GNN AutoScale <cit.> and LMC-GNN <cit.>).
The roots of these limitations lie in the coupling of the feature aggregation and transformation phases. Taking a different direction, SIGN <cit.> and SGC <cit.> decouple the feature aggregation from the feature transformation phase under the assumption that non-linearity between layers is not essential in GCNs. SGC pioneers this idea, and SIGN extends it by providing a general decoupled GNN model through the concatenation of various aggregated features and the use of a general MLP to replace the single-layer feature transformation phase in SGC. Since the feature transformation phase uses independent data points, mini-batching can be done trivially, which circumvents the aforementioned limitations. The generalization to multiple aggregations and the use of an MLP for the transformation phase also put it ahead of SGC. On that account, we further incorporate SIGN in our framework for the mini-batch construction. A simplified, yet more expressive version of SIGN is introduced by <cit.> and is shown in Equations <ref> and <ref> (the original SIGN implementation is given in appendix B): Ŷ = 𝐌𝐋𝐏(Z, θ), Z = [IX^0, ÃX^0, ..., Ã^nX^0], where Z is a concatenation of various k-hop aggregated features (multiplications of the power matrices Ã^k with X^0). Layer Compression Stage. Moving to the layer compression stage, we quantize the data stored in each layer, further minimizing the GPU memory footprint. Most layers store both weights and activations, usually in full 32-bit precision. Activations, however, tend to be the major memory usage contributor, as they depend on the number of samples. Consequently, ActNN <cit.> pioneered the use of activation quantization on convolutional neural networks (CNNs), which was then introduced to several GNN methods (e.g., GCN, GAT, and Cluster-GCN) by EXACT <cit.>. Here, we extend our framework using the 2-bit activation quantization by ActNN on the feature transformation MLP layers, and we use the term QSIGN to denote the activation-quantized SIGN model. The activation quantization by <cit.> is defined as quant(𝐡_ig)=(𝐡_ig-Z_ig)/R_ig· B, where 𝐡_ig denotes the embeddings of node v_i belonging to group g (if quantization is done on multiple groups of embeddings), Z_ig=min(𝐡_ig), R_ig=max(𝐡_ig)-min(𝐡_ig), and B=2^b-1, with b denoting the number of compression bits, which in our case is 2. Dequantization is the inverse of the quantization and is defined as dequant(𝐡_ig)=Z_ig+quant(𝐡_ig)· R_ig/B. § EXPERIMENTS Evaluation Datasets. We evaluate FALCON on four sufficiently large public datasets (PPI <cit.>, Flickr <cit.>, MedMNIST Organ-S <cit.>, and MedMNIST Organ-C <cit.>) to highlight the GPU memory impact. The Organ-S and Organ-C datasets were originally collected for an image classification task, with each sample having 28 by 28 single-channel pixels. To represent them as graphs, we use the vectorized pixel intensities as the node features and construct the edges based on the features' cosine similarity (to ensure the sparsity of the graph, we set the similarity threshold at ∼1.2 million edges). The dataset properties for the benchmark are shown in Table <ref> along with the hyperparameters. Here, γ and the number of hops (the number of aggregation hops for SIGN) are determined using grid search based on validation accuracy with ranges of 0.4 to 0.6 and 1 to 6, respectively. On the other hand, we set the number of layers and hidden channels sufficiently large to mimic complex model training and to highlight the GPU memory impact of the training.
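Before turning to the results, it may help to make the FALCON pre-processing concrete. The following minimal Python sketch (our own illustration using NumPy and scikit-learn; function and variable names are ours, not those of the released implementation) builds the dimension-normalized feature-label concatenation M defined earlier and clusters it with K-Means:

import numpy as np
from sklearn.cluster import KMeans

def falcon_clusters(X, Y, gamma=0.5, n_clusters=100, seed=0):
    """Cluster nodes on the dimension-normalized feature-label concatenation M.

    X     : (|V|, F) node features, scale-normalized to [0, 1]
    Y     : (|V|, L) node labels (one-hot or multi-label), in [0, 1]
    gamma : feature-label prioritization constant in [0, 1]
    """
    F, L = X.shape[1], Y.shape[1]
    alpha = gamma * max(F, L) / F          # feature weight
    beta = (1.0 - gamma) * max(F, L) / L   # label weight
    M = np.hstack([np.sqrt(alpha) * X, np.sqrt(beta) * Y])
    return KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(M)

Each resulting cluster is then collapsed proportionally by the centrality-based procedure, so the post-collapse label distribution tracks the original one.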
Note that all datasets are trained inductively, and all results are based on a 95% confidence interval over 5 runs. All experiments are done on a Linux machine with a Ryzen 5 5600H CPU, 16 GB DDR4 memory, and a 4GB NVIDIA RTX 3050ti GPU. Benchmark Methods. We train various SOTA models (e.g., GCN <cit.>, Cluster-GCN <cit.>, GNN AutoScale <cit.>, SIGN <cit.>) along with their FALCON-collapsed variants on several public datasets. For each FALCON variant, we also trained multiple models with different centrality types (e.g., degree, eigenvector, betweenness, closeness, PageRank). Although we are mainly interested in the GPU memory saving contributed by the FALCON collapse, we also measure the epoch time and various performance metrics (i.e., for multi-label tasks, we use accuracy, micro F1, micro sensitivity, and micro specificity, while for multi-class tasks, we only use accuracy and micro specificity, as the remaining two yield the same result as accuracy). Additionally, we also measure the impact of feature-label distribution preservation by comparing the distribution error and performance of FALCON against pure centrality-based collapse <cit.>. Node Budget Convergence. First, we evaluate the impact of varying training node budgets on the validation accuracy of FALCON. We train FALCON-QSIGN (EC) models with varying training node budgets and plot the validation accuracy as depicted in Figure <ref>. Our experiment shows that convergence is achieved with approximately 15000 training nodes for PPI and Flickr and 8000 training nodes for Organ-C and Organ-S. Being collapsible to as low as ∼34% (PPI and Flickr) and ∼58% (Organ-C and Organ-S) of the original graph while maintaining predictive performance implies that these datasets are highly collapsible. Prediction Quality Comparison. Next, we benchmark both the training resource usage and the prediction results on the test set of FALCON applied to various GNN models. Table <ref> shows the prediction quality of our benchmark and suggests that, across all benchmark models, FALCON can achieve very similar performance to the uncollapsed variants while using only a fraction of the complete training graph. The results also suggest that the choice of centrality measure can significantly impact prediction quality on some datasets (e.g., PPI). This implies that the choice of centrality needs to be treated as a hyperparameter. Nonetheless, our benchmark suggests that EC performs the best overall, as it performs very close to the complete graph and is also scalable to larger graphs due to its linear time complexity <cit.>. Computational Resource Comparison. The main benefit of our proposed FALCON and FALCON-QSIGN framework is the training computational resource efficiency evidenced in Table <ref>. This result shows that FALCON can significantly reduce GPU memory usage and epoch time during training across all models. Furthermore, the FALCON-QSIGN framework has the lowest GPU memory usage and the 2nd lowest epoch time (excluding GCN) across all benchmarks. Being outperformed by SIGN in epoch time reflects the quantization overhead required to encode and decode the activations. Feature-Label Constraint Impact. Here, we show the importance of preserving the feature and label distribution in the post-collapsed graph instead of doing pure centrality-based collapse as in <cit.>.
The post-collapse macro-average label distribution error can be computed using L_err = 1/L∑_l^L(|n_l/n-N_l/N|), where n_l/n denotes the ratio of nodes with label l in the collapsed graph, while N_l/N denotes the same label ratio for the original graph. Table <ref> shows this label distribution error, which suggests that FALCON preserves the label distribution best across all benchmarks. Furthermore, while label distribution error is not the main evaluation metric we use, this result is also reflected in the graph prediction results, as suggested by Table <ref>, which shows that FALCON has better overall prediction performance compared to its feature-label agnostic counterparts. Graph Coarsening Comparison. Finally, we benchmark FALCON against the existing SOTA graph reduction method using a coarsening algorithm <cit.>. However, as we encountered memory errors when coarsening our primary datasets using the available source code, we primarily use the publicly available Cora and PubMed datasets <cit.> as done by <cit.>. Here, the hyperparameters are set in a similar way to the main benchmark, and we use a 3-layer network with 1536 hidden channels, γ=0.5, 100 FALCON clusters, dropouts of 0.5, and a learning rate of 0.0005 for both the Cora and PubMed datasets. Table <ref> depicts the prediction results benchmark, which shows that FALCON with the optimal centrality measure (i.e., PR in this case) is able to outperform the graph coarsening method. Moreover, we also compare the label distribution error of the post-collapsed graph in Table <ref>, which shows that our method significantly outperforms <cit.> in preserving the original feature and label distribution. Initially, we also intended to benchmark against the graph reduction method by <cit.>. However, their source code is not publicly available at the time of writing. § CONCLUSION We present FALCON, a topology-aware graph-collapsing scheme that preserves the original graph feature and label distributions for memory-efficient GNN training. Our benchmark shows that FALCON significantly reduces the training GPU memory footprint (up to 66% reduction on PPI) and epoch time (up to 72% reduction on Flickr) across different GNN models and four public datasets while maintaining similar performance to the original graph. Additionally, we introduce a memory-efficient GNN framework that combines various SOTA methods (FALCON-QSIGN). Experimental results show that FALCON-QSIGN has the lowest GPU memory footprint compared to other benchmarked methods, with a phenomenal 97% reduction compared to GCN on the PPI dataset, and a lowest reduction of 43% compared to SIGN on the MedMNIST Organ-C dataset. § ORIGINAL SIGN IMPLEMENTATION The original implementation of SIGN adds a feature transformation layer for each of the aggregations, as shown in Equations <ref> and <ref>: Ŷ = 𝐌𝐋𝐏(Z, θ), Z = σ([IX^0θ_0, ÃX^0θ_1, ..., Ã^nX^0θ_n]), where X denotes the original node features, Z denotes the concatenated aggregated features, Ã denotes the normalized adjacency matrix, θ denotes learnable weights, and Ŷ denotes the prediction. § IMPACT OF NODE BUDGETS ON GPU MEMORY USAGE AND EPOCH TIME In this section, we show the impact of the FALCON node budget on the GPU memory footprint during training. Figure <ref> shows the GPU memory usage of the FALCON-QSIGN (EC) framework. Similarly, for epoch time, we plot the epoch time against the training node budget in Figure <ref>. § COMPUTATIONAL TIME OF FALCON Here, we compare the computational time required to collapse the training graphs using FALCON.
This computational time is shown in Table <ref> and consists of the whole collapse procedure (centrality measure computation, K-Means clustering, and the graph collapsing process). Additionally, graph coarsening <cit.> is compared with FALCON, which suggests that most FALCON methods are faster, excluding the CC variant. § MORE COMPARISON ON FEATURE-LABEL CONSTRAINT IMPACT This section includes more comparisons between FALCON and its feature-label agnostic counterparts (Table <ref>). Our experiments suggest that the preservation of the original feature and label distributions generally tends to be beneficial in terms of prediction quality across various GNN models as well as various datasets. | http://arxiv.org/abs/2312.16542v1 | {
"authors": [
"Christopher Adnel",
"Islem Rekik"
],
"categories": [
"cs.LG",
"cs.AI"
],
"primary_category": "cs.LG",
"published": "20231227115317",
"title": "FALCON: Feature-Label Constrained Graph Net Collapse for Memory Efficient GNNs"
} |
January 14, 2024 ==================== Serrations are commonly employed to mitigate turbulent boundary layer trailing-edge noise. However, significant discrepancies persist between model predictions and experimental observations. In this paper, we show that this results from the frozen turbulence assumption. A fully-developed turbulent boundary layer over a flat plate is first simulated using the large eddy simulation (LES) method, with the turbulence at the inlet generated using the digital filter method (DFM). The space-time correlations and spectral characteristics of wall pressure fluctuations are examined. The simulation results demonstrate that the coherence function decays in the streamwise direction, deviating from the constant value of unity assumed in the frozen turbulence assumption. By considering an exponential decay function, we relax the frozen turbulence assumption and develop a prediction model that accounts for the intrinsic non-frozen nature of turbulent boundary layers. To facilitate a direct comparison with frozen models, a correction coefficient is introduced to account for the influence of non-frozen turbulence. The comparison between the new and original models demonstrates that the new model predicts lower noise reductions, aligning more closely with the experimental observations. The physical mechanism underlying the overprediction of the noise model assuming frozen turbulence is discussed. The overprediction is due to the decoherence of the phase variation along the serrated trailing edge. Consequently, the ratio of the serration amplitude to the streamwise frequency-dependent correlation length is identified as a crucial parameter in determining the correct prediction of far-field noise. § INTRODUCTION Trailing-edge (TE) noise is a major concern in various industrial applications, including wind turbines, cooling fans, and turbo-machinery. As a turbulent boundary layer convects past a trailing edge, pressure fluctuations beneath the boundary layer scatter into sound, leading to noise emissions. Inspired by the silent flight of owls <cit.>, serrated trailing edges have emerged as a promising approach to reduce this noise. Extensive research has been conducted to investigate the effectiveness of serrated trailing edges in reducing noise. <cit.> first developed an analytical model to predict the scattered noise from serrated trailing edges and found that sharp sawtooth serrations are effective in suppressing TE noise. However, the experimental studies conducted by <cit.> showed that Howe's model significantly overpredicted the noise reduction due to TE serrations. Based on a Fourier expansion technique, <cit.> extended Amiet's theory <cit.> to sawtooth trailing edges. Lyu's model yielded a more realistic noise reduction prediction compared to Howe's model. It was shown that the mechanism underlying the noise reduction was the destructive interference effects of the scattered pressure due to the presence of serrations. Two parameters were identified as important in effectively reducing TE noise. Recently, <cit.> presented an analytical model based on the Wiener-Hopf method and applied this model to five test-case TE geometries. Furthermore, <cit.> simplified Ayton's model by approximating the infinite interval involving two infinite sums, and the resulting model consumes much less time when evaluated.
However, as a semi-infinite flat plate was assumed, the solution was strictly two-dimensional (2D) (the predicted sound pressure decays as 1/√(r), where r is the cylindrical radial distance of the observer), rendering it inapplicable to applications involving rotating blades <cit.>. In most practical applications, however, blades are in a state of rotation during their operation, such as the propellers of drones. <cit.> investigated the broadband noise from a small remotely piloted aircraft (RPA) propeller with sawtooth serrations using the first-order approximation of Lyu's model. <cit.> conducted a theoretical investigation of the noise emitted from three kinds of rotating serrated blades using the second-order approximation. The three-dimensional (3D) directivity patterns of an isolated flat plate were found to be important for the far-field noise characteristics of a rotating blade. Despite the valuable insights provided by theoretical models, considerable discrepancies still exist between the latest analytical predictions and experimental results <cit.>. In a recent study, <cit.> conducted anechoic wind tunnel experiments to investigate the effect of serration shape and flexibility on TE noise. Their findings demonstrated that the new analytical model proposed by <cit.> still notably overpredicted the noise reduction capacity achieved by serrations. Given that all these analytical prediction models rely on the statistics of the wall pressure fluctuations as inputs, an accurate characterization of these fluctuations is crucial to an accurate noise prediction. Comprehensive reviews on the features of the wall pressure fluctuations can be found in the works of <cit.> and <cit.>. In general, the temporal and spatial characteristics of the wall pressure fluctuations on a flat plate can be described in terms of the wavenumber-frequency spectrum, which exhibits two distinct regions. The first region is called the acoustic domain <cit.>, where the phase speed is equal to or greater than the speed of sound, enabling efficient radiation to the far field. The second region, referred to as the convected domain, comprises wave components that travel at speeds slower than the speed of sound. Pressure fluctuations in the convected domain exhibit significantly higher magnitudes compared to those observed in the acoustic domain and are related to the scattering process of the TE noise. In practice, semi-empirical wall pressure spectrum models are commonly used in analytical noise predictions, such as the Corcos model <cit.>, the Chase model <cit.>, and the Goody model <cit.>. Semi-empirical models are usually formulated empirically according to certain scaling laws. <cit.> conducted a comparison of the frequency spectra calculated using nine semi-empirical models and found that the Goody model could provide the best overall estimation. Recently, the TNO model has shown promise in obtaining the surface pressure wavenumber-frequency spectrum <cit.> and could potentially improve noise prediction performance compared to other empirical models <cit.>. One of the most important assumptions made in modelling the turbulent flow is Taylor's hypothesis of frozen turbulence <cit.>. Taylor hypothesized that the spatial patterns of turbulent motions are carried past a fixed point at the convection velocity without changing significantly. The frozen turbulence assumption depicts a simple scenario that provides significant convenience in developing analytical models.
However, the applicability of this assumption has been open to some debate. <cit.> has shown that this hypothesis is not applicable in cases of high shear flows, such as turbulent boundary layers and the mixing region of a jet. The large-scale shear flows induce distortions of small eddies as they are carried downstream <cit.>. Subsequently, numerous studies have focused on assessing the validity of, and improving, Taylor's hypothesis <cit.>. <cit.> pointed out that when the turbulence intensity is high, different turbulent spectral components appear to travel at different speeds. Furthermore, under the frozen turbulence assumption, the energy spectrum obtained in a frame of reference moving with the convection velocity contains only components of zero frequency. However, the experimental results of <cit.> showed that the energy was spread over a considerable band of frequencies for the shear flows. In the region of high shear stress within a turbulent channel flow, <cit.> showed that the phase velocity of the modes with long wavelengths was higher than the local mean velocity. They also proposed a method to determine the convection velocity that relies solely on the spectral information in the temporal or spatial direction. Considering that the frozen turbulence assumption implies a first-order approximation, <cit.> developed an elliptic model based on a second-order approximation. Two characteristic velocities were utilized in this model, i.e., a convection velocity and a velocity that characterizes the distortion of flow patterns. The elliptic model can be used to reconstruct space-time correlations from temporal correlations and has been validated in turbulent channel flows <cit.>, turbulent boundary layers <cit.>, and turbulent Rayleigh-Bénard convection (RBC) <cit.>. For the prediction of TE noise, most previous analytical models adopted the frozen turbulence assumption to facilitate a quick estimation, which may be a potential contributor to the discrepancies between models and experiments. Recently, several experimental and numerical studies <cit.> have called into question the use of the frozen turbulence assumption. Therefore, it is necessary to explore to what extent the frozen turbulence assumption approximates the real turbulent statistics and to develop methods for incorporating the non-frozen effect in noise prediction models. This paper is structured as follows. Section <ref> presents a statistical description of wall pressure fluctuations. Section <ref> describes the numerical setup employed to simulate a fully-developed turbulent boundary layer. The correlation and spectral features of wall pressure fluctuations are examined. Subsequently, in <ref>, a new model that accounts for the non-frozen effect is proposed, and the corresponding prediction results are presented. Section <ref> elucidates the physical mechanism behind the noise reduction when non-frozen turbulence is taken into consideration. The final section concludes the present paper and lists our future work. § THE STATISTICAL DESCRIPTION OF THE WALL PRESSURE FLUCTUATIONS In this paper, we shall consider a turbulent boundary layer that develops on a flat plate under a zero mean pressure gradient. In TE noise modelling, the statistical spectrum of the wall pressure fluctuations beneath a turbulent boundary layer is often used as an input. We define some of the key quantities in this section.
The space-time correlation of the wall pressure fluctuations p^'(x,t) at position x=(x,z) at time t is defined by Q_pp(x,t;ξ,τ) = ⟨ p^'(x,t)p^'(x+ξ,t+τ)⟩, where ξ=(ξ,η), ξ and η are the spatial separations in the streamwise and spanwise directions respectively, and τ is the time delay. As the turbulent boundary layer develops slowly in the streamwise direction, the flow field may be regarded as homogeneous in the directions parallel to the wall and stationary in time within the scales of interest. Thus, we have Q_pp(x,t;ξ,τ)≈ Q_pp(ξ,τ). The correlation coefficient is then defined by R_pp(ξ,η,τ) = Q_pp(ξ,η,τ)/Q_pp(0,0,0). The streamwise and spanwise integral lengths can be defined as Λ_x = ∫_0^∞ R_pp(ξ,0,0)dξ, Λ_z = ∫_0^∞ R_pp(0,η,0)dη.

The spectral density of wall pressure fluctuations can be obtained by performing the Fourier transform of the space-time correlation. In the frequency domain, the single-point spectrum ϕ(ω) is expressed as ϕ(ω)=1/(2π)∫_-∞^∞ Q_pp(0,0,τ)e^-iωτdτ, where ω=2π f is the angular frequency and f is the frequency. Similarly, the cross-spectral density is defined by G_pp(ξ,η,ω)=1/(2π)∫_-∞^∞ Q_pp(ξ,η,τ)e^-iωτdτ. Making use of the single-point spectrum and the cross-spectral density, the coherence function can be defined as γ^2(ξ,η,ω)=|G_pp(ξ,η,ω)|^2/ϕ(ω)^2. In (<ref>) and (<ref>), we have defined the streamwise and spanwise integral lengths based on the correlation coefficients, which are independent of frequency. However, it is known that the spatial correlation of the pressure fluctuations varies with frequency. Therefore, we introduce the frequency-dependent correlation lengths defined as l_x(ω)= ∫_0^∞γ(ξ,0,ω)dξ, l_z(ω)= ∫_0^∞γ(0,η,ω)dη. As will be seen, the characteristics of l_x,z differ significantly from those of Λ_x,z, and they have significant implications in noise predictions.

To obtain the wavenumber-frequency spectrum of the wall pressure fluctuations, we perform spatial Fourier transforms on the cross-spectral density, resulting in the definition of the wavenumber-frequency spectrum, Π(k_1,k_2,ω)=1/(2π)^2∫_-∞^∞∫_-∞^∞ G_pp(ξ,η,ω)e^i(k_1ξ+k_2η)dξdη, where k_1 and k_2 are wavenumbers in the streamwise and spanwise directions, respectively. The wavenumber-frequency spectrum describes the spectral distribution of energy in wall pressure fluctuations and serves as a key input in TE noise prediction models. The highest levels of pressure fluctuations typically occur within a specific region centered around k_1=ω/U_c, k_2=0, where U_c is the convection velocity. This region is the so-called convective ridge. The idea that slowly distorting eddies are convected downstream by the mean flow at a fixed velocity is useful in the study of turbulent shear flows, and is particularly important in the research of aerodynamic noise <cit.>. Various approaches exist for defining the convection velocity <cit.>, and a comprehensive review of the convection velocity datasets in turbulent shear flows was conducted by <cit.>. In general, the convection velocity should not be treated as a constant value. This is because eddies of different sizes can convect at different velocities, as can eddies with different time scales. Therefore, in general, the convection velocity can be expressed as a function of time delay τ and streamwise separation ξ, or as a function of frequency ω and streamwise wavenumber k_1 in the spectral domain.
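Given discretized wall-pressure data, the quantities defined above can be estimated with standard spectral tools. The following sketch (not from the paper; the probe array p, the separations xi with xi[0]=0, and the sampling rate fs are hypothetical inputs) approximates l_x(ω) by quadrature of the coherence over a finite set of streamwise probe separations:

```python
# Sketch (not from the paper): estimating the frequency-dependent streamwise
# correlation length l_x(omega) from an array of wall-pressure probes.
import numpy as np
from scipy.signal import coherence, welch
from scipy.integrate import trapezoid

def correlation_length_x(p, xi, fs, nperseg=1024):
    """p has shape (n_probes, n_samples); xi are the streamwise separations."""
    f, _ = welch(p[0], fs=fs, nperseg=nperseg)       # frequency grid
    gamma = np.empty((len(xi), len(f)))
    for j in range(len(xi)):
        _, g2 = coherence(p[0], p[j], fs=fs, nperseg=nperseg)
        gamma[j] = np.sqrt(g2)                       # coherence() returns gamma^2
    return f, trapezoid(gamma, xi, axis=0)           # quadrature over separations
```

The spanwise length l_z(ω) follows analogously from probes separated in z; in practice the finite probe extent truncates the integrals in the definitions above.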
Obtaining a well-resolved space-time flow field database is often challenging or impractical in real-world scenarios. As a result, it becomes necessary to reconstruct the wavenumber-frequency spectrum from either the space or time datasets based on the statistical characteristics of the turbulent boundary layer. In 1938, <cit.> proposed the well-known hypothesis that turbulent eddies convect uniformly and unchangingly past a fixed point, as if the spatial patterns of the flow field are "frozen". Under this frozen turbulence assumption, the correlation function satisfies <cit.> Q_pp(ξ,η,τ)=Q_pp(ξ-U_cτ,η,0). The streamwise coherence function can be readily found as a constant from (<ref>), i.e. γ(ξ,0,ω)=1. This implies that the eddy patterns exhibit perfect coherence in the streamwise direction at all frequencies as they convect downstream. The frozen turbulence assumption has been widely employed due to its simplicity in modelling the wavenumber-frequency spectrum. However, in a real turbulent boundary layer, the eddies undergo distortions caused by the mean shear <cit.>. This implies that the streamwise coherence would decay as ξ increases. In such cases, assuming a constant streamwise coherence function may introduce large errors when used in noise prediction models. Therefore, gaining a more comprehensive understanding of the spatial coherence of wall pressure fluctuations, especially the frequency-dependent correlation lengths, is crucial. In the subsequent section, a numerical investigation will be conducted to examine in detail the characteristics of wall pressure fluctuations.

§ NUMERICAL SIMULATION OF A TURBULENT BOUNDARY LAYER

§.§ Numerical set-up

To investigate the space-time correlations and spectral characteristics of wall pressure fluctuations, we use the wall-resolved LES method to simulate a fully-developed turbulent boundary layer over a flat plate. Considering that in many applications where TE noise is important the Mach number is relatively low, we choose to perform an incompressible simulation. Compared to the direct numerical simulation (DNS) method, LES requires fewer grid points for wall-resolved simulations <cit.>, leading to lower computational resource demands. The LES method solves the spatially-filtered Navier-Stokes equations using a subgrid-scale (SGS) model. For incompressible flows, the filtered momentum and continuity equations can be expressed as ∂u̅_i/∂ t+∂(u̅_iu̅_j)/∂ x_j=-(1/ρ_0)∂p̅/∂ x_i+ν∂^2u̅_i/(∂ x_j∂ x_j)-∂τ_ij/∂ x_j, i=1,2,3, and ∂u̅_j/∂ x_j=0, where the overbar denotes filtered variables with a filter width Δ, t is the time, u_i is the velocity component in the x_i-direction (also denoted as u, v, or w), ρ_0 is the density, p is the pressure, and ν is the kinematic viscosity. The contributions from sub-grid scales are represented through the SGS stresses τ_ij=u_iu_j-u̅_iu̅_j, which need to be modeled. Following the eddy-viscosity assumption, the SGS stress can be modelled as τ_ij-(1/3)δ_ijτ_kk=-2ν_TS̅_ij, where δ_ij is Kronecker's delta, and S̅_ij = (∂u̅_i/∂ x_j+∂u̅_j/∂ x_i)/2 is the large-scale strain-rate tensor. The SGS viscosity ν_T can be calculated using various models. In this study, the wall-adapting local eddy-viscosity (WALE) model <cit.> is used due to its ability to account for the wall effect on the turbulent structure. The value of ν_T is obtained as ν_T =C_w^2Δ^2(S_ij^dS_ij^d)^3/2/[(S̅_ijS̅_ij)^5/2+(S_ij^dS_ij^d)^5/4], where C_w is a constant coefficient and S_ij^d is the traceless symmetric part of the square of the velocity gradient tensor, S_ij^d=(1/2)(g̅_ij^2+g̅_ji^2)-(1/3)δ_ijg̅_kk^2. Here, g̅_ij=∂u̅_i/∂ x_j and g̅_ij^2=g̅_ikg̅_kj.
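As an illustration of the WALE closure above, the following sketch evaluates ν_T from a resolved velocity-gradient tensor at a single point; the gradient values, the filter width, and the model constant C_w are illustrative assumptions rather than the settings of the present simulation:

```python
# Sketch of the WALE eddy-viscosity evaluation at a single point, given the
# resolved velocity-gradient tensor g[i, j] = d(u_i)/d(x_j).
import numpy as np

def wale_nu_t(g, delta, C_w=0.325):            # C_w = 0.325 is a commonly used value
    S = 0.5 * (g + g.T)                        # resolved strain-rate tensor
    g2 = g @ g                                 # g_ik g_kj
    Sd = 0.5 * (g2 + g2.T) - np.trace(g2) * np.eye(3) / 3.0
    SS, SdSd = np.sum(S * S), np.sum(Sd * Sd)  # double contractions
    return (C_w * delta) ** 2 * SdSd ** 1.5 / (SS ** 2.5 + SdSd ** 1.25)

g = np.array([[0.0, 1.0, 0.0],                 # pure shear du/dy: WALE returns
              [0.0, 0.0, 0.0],                 # nu_T = 0, the desired near-wall
              [0.0, 0.0, 0.0]])                # behaviour of this model
print(wale_nu_t(g, delta=1e-3))
```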
For the turbulent inlet boundary condition, we use a synthetic turbulence inflow generator based on the digital filter method (DFM). The generator employs a 2D filter to produce spatially correlated 2D slices of data. The instantaneous velocity on the slice is computed as u_i=u̅_i+a_ijΨ_j, where Ψ_j denotes the filtered fluctuating velocity field and a_ij is the amplitude tensor, which is related to the Reynolds-stress tensor R_ij by a_ij = [ √(R_11), 0, 0; R_21/a_11, √(R_22-a_21^2), 0; R_31/a_11, (R_32-a_21a_31)/a_22, √(R_33-a_31^2-a_32^2) ].

By applying spatial and temporal filters to the random array sequences, we can introduce the desired temporal and spatial correlations in the instantaneous velocity fluctuations. The spatial turbulent length scale is defined by the two-point correlation, which is given by L_i^j(x)=∫_0^∞⟨u_i'(x)u_i'(x+e_jr)⟩/⟨u_i'(x)u_i'(x)⟩ dr, where r is the spatial separation in the j-direction and u_i'(x) denotes the velocity fluctuations. The parameters required to generate a turbulent inlet condition using the DFM include the profiles of the mean velocity, turbulent Reynolds stresses, and turbulent length scales. These parameters can be obtained through various approaches, such as a precursor DNS simulation, modelling from a Reynolds-averaged Navier-Stokes (RANS) computation, or measurements from experiments. In this work, the mean velocity and turbulent Reynolds stresses are obtained from the DNS data provided by <cit.>. The Reynolds number at the inlet, based on the momentum thickness θ and the free-stream velocity U_0, is set to 1410. Regarding the turbulent length scale L_u^x, a constant value may be prescribed. Following <cit.>, L_u^x is set to the boundary layer thickness scaled by a factor of 0.6, and the other turbulent length scales can be prescribed based on L_u^x. Table <ref> shows the nine turbulent length scales used at the inlet with the DFM, where δ_in denotes the boundary layer thickness at the inlet.

The numerical simulation in this study is conducted using OpenFOAM-v2206. The computational domain, as illustrated in figure <ref>, has dimensions of 50δ_in× 3.3δ_in× 3δ_in in the streamwise (x), wall-normal (y), and spanwise (z) directions, respectively. The mesh cells are exponentially distributed along the y-axis and uniformly placed along the streamwise and spanwise directions. Table <ref> lists the detailed parameters employed in this study. To ensure grid independence, both fine and coarse meshes are tested, and the results of the grid independence test can be found in Appendix <ref>. For the velocity boundary condition, a slip condition is used on the top wall, while a no-slip condition is imposed on the bottom wall. In the lateral direction, a periodic boundary condition is employed to simulate an infinite domain. At the outlet, the inletOutlet boundary condition is imposed. Regarding the pressure boundary conditions, all boundaries are set to zero-gradient except for the top boundary, where a fixed pressure is prescribed.
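The amplitude tensor used by the digital filter method is the Cholesky factor of the Reynolds-stress tensor, so the closed-form entries above can be cross-checked against a library factorization. A minimal sketch with assumed Reynolds-stress values:

```python
# Sketch: a_ij as the Cholesky factor of an (illustrative) Reynolds-stress
# tensor R_ij; a a^T reproduces R.
import numpy as np

R = np.array([[6.2, -0.6, 0.0],
              [-0.6, 1.0, 0.0],
              [0.0,  0.0, 2.1]])               # assumed Reynolds stresses

a = np.zeros((3, 3))
a[0, 0] = np.sqrt(R[0, 0])
a[1, 0] = R[1, 0] / a[0, 0]
a[1, 1] = np.sqrt(R[1, 1] - a[1, 0] ** 2)
a[2, 0] = R[2, 0] / a[0, 0]
a[2, 1] = (R[2, 1] - a[1, 0] * a[2, 0]) / a[1, 1]
a[2, 2] = np.sqrt(R[2, 2] - a[2, 0] ** 2 - a[2, 1] ** 2)

assert np.allclose(a, np.linalg.cholesky(R))   # cross-check
```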
§.§ Numerical results

In this section, we present the simulation results of a spatially-developing turbulent boundary layer using the LES method. The turbulent statistics are obtained after the flow field reaches a statistically-stationary state. The LES data are used to show the spatial evolution of the flow structures, validate the flow statistics, and examine the statistical characteristics of wall pressure fluctuations.

§.§.§ Flow field

An instantaneous flow field is visualized using an isosurface of the Q-criterion, as shown in figure <ref>. The computational domain is sufficiently long in the streamwise direction, and the flow region from the inlet to 19δ_in downstream is selected for visualization. It can be seen that the unsteady flow structures generated at the inlet are not physical, but these unphysical structures quickly decay and the flow becomes more physical after around 10δ_in. Similar phenomena can also be seen in figure <ref>, where instantaneous snapshots of the flow field are displayed. Figure <ref>(a) shows the side view of the instantaneous streamwise velocity field in the x-y plane, while figure <ref>(b) shows the overview in the x-z plane located at y=0.3δ_in. The visualization demonstrates that artificial turbulent structures are instigated at the inlet, preserved for a short distance downstream, and subsequently replaced by more physical structures.

Flow statistics are obtained by averaging over the spanwise direction z and time t. Therefore, the streamwise velocity can be decomposed into u=U+u^', where U and u^' denote the mean velocity and the fluctuating velocity, respectively. The wall shear stress can be calculated as τ_w=μ(dU/dy)|_y=0, where μ is the dynamic viscosity. The friction velocity is defined as u_τ=√(τ_w/ρ_0), and the characteristic length is given by l_⋆=ν/u_τ. Therefore, the mean velocity and distance normal to the wall can be expressed in non-dimensional forms as U^+=U/u_τ and y^+=y/l_⋆, respectively. In the subsequent analysis, the streamwise location x=40δ_in is used as the reference position. At this location, the friction velocity is u_τ=0.041U_0, and the Reynolds number based on the momentum thickness is Re_θ=2053.

Figure <ref> shows the distribution of the mean velocity and turbulent Reynolds stress components. The simulated mean velocity exhibits good agreement with that obtained by <cit.> using the dynamic Smagorinsky model, as shown in figure <ref>(a). In the near-wall region, the simulated mean velocity profile collapses well onto the linear law. However, in the logarithmic region, both the simulation results of the present study and those of <cit.> deviate slightly from the log-law. From figure <ref>(b), we can see that the simulated velocity fluctuations and Reynolds shear stress are in good agreement with the DNS results. Slight deviations from the DNS profile can be seen for the u^' +_rms profile in both the present study and the work of <cit.>, which might be attributed to the artificial inflow boundary condition employed in these two works. Nevertheless, figure <ref> shows that the present LES captures the essential flow physics.
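The conversion to wall units used above follows directly from the definitions; a minimal sketch (with placeholder near-wall profile values, not the present LES data) is:

```python
# Sketch: converting a mean velocity profile to wall units.
import numpy as np

nu, rho0 = 1.5e-5, 1.2                         # kinematic viscosity, density
y = np.array([0.0, 2e-5, 5e-5, 1e-4])          # wall-normal stations (m)
U = np.array([0.0, 0.024, 0.060, 0.118])       # mean velocity (m/s)

dUdy_wall = (U[1] - U[0]) / (y[1] - y[0])      # one-sided difference at the wall
tau_w = rho0 * nu * dUdy_wall                  # tau_w = mu (dU/dy)|_{y=0}
u_tau = np.sqrt(tau_w / rho0)
l_star = nu / u_tau
y_plus, U_plus = y / l_star, U / u_tau
```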
§.§.§ Properties of wall pressure fluctuations

In this section, we examine the correlations, spectral properties, and convection characteristics of wall pressure fluctuations. Figure <ref> shows the spatial correlations in the streamwise and spanwise directions, respectively. It can be seen that both correlations decay rapidly with increasing separation. However, the spanwise correlation remains positive throughout the shown range, while the streamwise correlation changes sign at ξ/δ^∗≈3.9, which aligns with the findings of <cit.>. The captured decay of the correlations might be improved by using a larger computational domain, but the present domain should suffice for this study. Figure <ref> presents the contour plot of the two-point spatial correlation. The overall pattern is similar to the observations reported by <cit.>, where the contours are nearly circular for small separations, indicating near isotropy of the field. However, as the separations increase, the contours elongate in the spanwise direction, indicating increasing anisotropy. This elongation is likely attributed to large-scale flow structures and implies that Λ_z is larger than Λ_x. In the streamwise direction, negative areas can be seen, consistent with the behavior depicted in figure <ref>(a).

An important characteristic of non-frozen turbulence is the presence of elliptic patterns in the contours of the space-time correlation R_pp(ξ,0,τ), as illustrated in figure <ref>. In contrast, under the assumption of frozen turbulence, the contours degrade to parallel straight lines. As shown in figure <ref>, the concentration of contour lines into a narrow band suggests that the development of flow structures downstream includes both convection and decay. Figure <ref> presents the space-time correlations for various fixed streamwise separations as a function of time delay. It can be seen that the correlation peak decreases as the streamwise separation increases. This behavior indicates a decaying correlation between the pressure fluctuations as the separation distance increases, in direct contrast with the non-decaying correlation implied by the frozen turbulence assumption (see (<ref>)).

Figure <ref> shows the power spectral density (PSD) of the pressure fluctuations obtained using Welch's method <cit.> and normalized by the dynamic pressure q_∞=ρ_0U_0^2/2 and the displacement thickness δ^∗. The simulated pressure spectrum exhibits two distinct regimes: a -1 scaling regime and a -5 scaling regime. The -1 scaling is associated with the eddies present in the logarithmic region of the boundary layer. These eddies contribute to the energy distribution in the low-frequency range of the spectrum. On the other hand, the -5 scaling, which appears in the high-frequency range, is related to the presence of smaller-scale eddies within the buffer layer <cit.>.

The mean convection velocity U_c can be estimated by determining the time delay τ corresponding to the correlation peak shown in figure <ref> for a fixed spatial separation ξ, i.e. U_c(ξ)=ξ/τ. On the other hand, to obtain a frequency-dependent convection velocity, we can use the cross-spectral density G_pp(ξ,η,ω), which is a complex function <cit.>. Let θ_p(ξ,ω) denote the phase of G_pp(ξ,0,ω); then the phase velocity in the streamwise direction U_cp(ξ,ω) can be determined by <cit.> U_cp(ξ,ω)=-ωξ/θ_p(ξ,ω). The cross-spectral density can be written as G_pp(ξ,η,ω)=|G_pp(ξ,η,ω)|e^-iξω/U_cp(ξ,ω). The exponential term in (<ref>) represents the convection behavior of turbulent eddies, where the convection velocity is expected to depend on the frequency ω and the separation distance ξ.

Figure <ref>(a) shows the comparison between the simulated mean convection velocity and the experimental measurements by <cit.>. It can be seen that there is good agreement between the two results. As the streamwise separation increases, the mean convection velocity also increases and approaches 0.8U_0. Figure <ref>(b) shows the variations of the phase velocity as a function of frequency for various fixed streamwise separations. The convection velocities calculated using the empirical models proposed by <cit.> and <cit.> (see Appendix <ref>) are also shown for comparison. It is evident that the phase velocity increases with increasing streamwise separation, and as the frequency increases, the phase velocity rises rapidly, reaches a peak, and then decays slowly. This suggests that the frozen turbulence assumption, under which all eddies in the turbulent boundary layer convect at the same velocity, is not strictly valid <cit.>. Both the Smol'yakov model and the Bies model agree with the numerical results at intermediate and high frequencies, but the Bies model fails to capture the characteristics at low frequencies.
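The phase velocity defined above can be extracted directly from two probe signals. A sketch under the stated conventions (the signals, separation, and sampling rate are hypothetical inputs; the sign of the extracted phase must be consistent with the convention G_pp ~ e^-iξω/U_cp used above):

```python
# Sketch: separation- and frequency-dependent phase velocity from two wall
# pressure signals p1, p2 separated by xi in the streamwise direction.
import numpy as np
from scipy.signal import csd

def phase_velocity(p1, p2, xi, fs, nperseg=1024):
    f, G12 = csd(p1, p2, fs=fs, nperseg=nperseg)    # cross-spectral density
    theta = np.unwrap(np.angle(G12))                # phase theta_p(xi, omega)
    omega = 2.0 * np.pi * f
    with np.errstate(divide="ignore", invalid="ignore"):
        U_cp = -omega * xi / theta                  # U_cp = -omega*xi/theta_p
    return f, U_cp
```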
Figure <ref> examines the streamwise and spanwise coherences of pressure fluctuations. We see that the contour shapes of the coherence functions in the two directions are similar. For a fixed non-zero separation, the coherence first increases with frequency and then decays. This behavior indicates that the low-frequency components, associated with large-scale structures, maintain their coherence over longer distances, while the high-frequency components lose their coherence more rapidly with increasing separation. In particular, for the non-dimensional frequency ωδ^∗/U_0>1, the wall pressure fluctuations quickly lose their coherence as the separation increases, indicating that the perfect coherence assumed by frozen turbulence might lead to significant errors. The coherence contours provide valuable information for determining the frequency-dependent correlation lengths.

Equations (<ref>) and (<ref>) provide the definitions of the streamwise and spanwise frequency-dependent correlation lengths. However, in practical applications, curve-fitting approaches are often employed. For each discrete frequency, the frequency-dependent correlation lengths can be assumed to take exponential forms, i.e. γ(ξ,0,ω)=e^-|ξ|/l_x(ω), γ(0,η,ω)=e^-|η|/l_z(ω). In figure <ref>, the frequency-dependent correlation lengths in the streamwise and spanwise directions are plotted as functions of frequency. Three empirical models, i.e. the Corcos model <cit.>, the Smol'yakov model <cit.>, and the Hu model <cit.>, are also shown for comparison; their formulations can be found in Appendix <ref>. It can be seen that both correlation lengths increase slightly as frequency increases in the low-frequency range. When the frequency further increases, l_x(ω) and l_z(ω) decay rapidly. All three empirical models exhibit similar decay trends within the intermediate- and high-frequency ranges, but only the Smol'yakov model and the Hu model capture the characteristics at low frequencies. In addition, at higher frequencies, we can see that the simulated correlation lengths decay slowly and even increase. This phenomenon can also be found in the study of <cit.>, and a mesh refinement may be helpful to obtain improved decay tendencies of the frequency-dependent correlation lengths in this regime.

An interesting observation is that for the same frequency, l_x(ω)>l_z(ω). This is in contrast to the streamwise frequency-independent correlation length Λ_x, which is smaller than the spanwise correlation length Λ_z. This phenomenon can be attributed to the convection of turbulence in the streamwise direction. Considering the definition of the frequency-dependent correlation length, we have l_x(ω)=∫_0^∞|G_pp(ξ,0,ω)|/ϕ(ω)dξ. From (<ref>), we can see that l_x(ω) characterizes the correlation with the effect of the streamwise convection of turbulent eddies eliminated.
Physically, this represents the correlation length measured in a coordinate frame that moves with the eddies. Introducing the complex form of the cross-spectral density, the space-time correlation in the streamwise direction can be written as Q_pp(ξ,0,τ)=∫_-∞^∞|G_pp(ξ,0,ω)|e^iω(τ-ξ/U_cp(ξ,ω))dω. Therefore, the frequency-independent correlation length reads Λ_x=(1/Q_pp(0,0,0))∫_0^∞∫_-∞^∞|G_pp(ξ,0,ω)|e^iω(τ- ξ/U_cp(ξ,ω))dωdξ. Comparing (<ref>) and (<ref>), it is clear that the calculation of the streamwise frequency-independent correlation length Λ_x takes into account the influence of the convection of turbulent structures, whereas the frequency-dependent correlation length l_x(ω) does not. Since the convection of eddies contributes to the decay of the correlation, it is possible that Λ_x < Λ_z even though l_x(ω)>l_z(ω).

In this part, we have examined two important features of wall pressure fluctuations that are omitted by the frozen turbulence assumption. First, the convection velocity is not strictly constant. Second, the eddies lose their coherence as they convect downstream, leading to a finite streamwise correlation length. These two characteristics are important non-frozen properties and must be accounted for in the modelling of the far-field noise emitted from serrated trailing edges.

§ ACOUSTIC PREDICTION

§.§ Model establishment

With the properties of wall pressure fluctuations and the numerical results discussed above, we are in a position to consider the influence of non-frozen turbulence on the noise prediction for serrated trailing edges. As shown in figure <ref>, consider a flat plate encountering a uniform flow. The plate has a chord length of c, a span of d, and a trailing edge with serrations of amplitude 2h and wavelength λ. According to <cit.>, for a general wall pressure fluctuation characterized by its wavenumber-frequency spectrum Π(k_1, k_2, ω), the far-field acoustic power spectral density S_pp at the observer position X_0=(X_0,Y_0,Z_0) is found to be S_pp(X_0,ω)=(ω Y_0c/(4π c_0S_0^2))^2 2π d∑_m=-∞^∞∫_-∞^∞|ℒ(k_1,2π m/λ,ω)|^2Π(k_1,2π m/λ,ω)dk_1, where c_0 is the speed of sound, S_0^2=X_0^2+(1-M_0^2)(Y_0^2+Z_0^2), and M_0 is the Mach number of the flow. ℒ is the response function, which can be calculated iteratively. Note that no assumption regarding the frozen property of the wall pressure fluctuations is made here. More details about this model can be found in <cit.>. It can be seen from (<ref>) that the formulation relies on the wavenumber-frequency spectrum as input and involves an infinite integral over the streamwise wavenumber k_1. While this integral can be evaluated numerically, such an approach would be computationally demanding. Furthermore, obtaining an accurate wavenumber-frequency spectrum Π(k_1,k_2,ω) is challenging in both numerical simulations and experimental measurements. To achieve a convenient prediction and determine the physical impact of non-frozen turbulence on TE noise, we aim to develop a simplified model based on the characteristics of wall pressure fluctuations. Specifically, we approximate the cross-spectral density using a variable-separation form, G_pp(ξ,η,ω)=γ(ξ,0,ω)e^-iξω/U_c(ω)G_pp(0,η,ω). This form is similar to the Corcos model <cit.>, which was initially developed by fitting experimental data and has since been widely used in the modelling of wall pressure fluctuations.
However, the Corcos model is more stringent, as it assumes that the normalized cross-spectral density can be represented by a function of a single dimensionless variable. Here, the convection velocity is expressed as a function of the angular frequency, while its dependence on the streamwise separation is neglected for simplicity. By performing Fourier transforms, the wavenumber-frequency spectrum can be written as Π(k_1,k_2,ω) =1/(2π)^2∫_-∞^∞∫_-∞^∞γ(ξ,0,ω)e^-iξω/U_c(ω)G_pp(0,η,ω)e^ik_1ξe^ik_2ηdξdη=ϕ_x(k_1,ω)ϕ_z(k_2,ω), where ϕ_x(k_1,ω) = 1/(2π)∫_-∞^∞γ(ξ,0,ω)e^-iξω/U_c(ω)e^ik_1ξdξ, and ϕ_z(k_2,ω)=1/(2π)∫_-∞^∞G_pp(0,η,ω)e^ik_2ηdη. Here, ϕ_z(k_2,ω) is the spanwise wavenumber-frequency spectrum, while ϕ_x(k_1,ω) captures the effects of both the coherence decay and the convection of turbulent eddies. Thus, S_pp can be shown to be S_pp(X_0,ω)=(ω Y_0c/(4π c_0S_0^2))^2 2π d∑_m=-∞^∞∫_-∞^∞|ℒ(k_1,2π m/λ,ω)|^2ϕ_x(k_1,ω)dk_1 ϕ_z(2π m/λ,ω). Equation (<ref>) shows that the form of ϕ_x plays an important role in estimating the integral and hence the far-field sound.

Under the frozen turbulence assumption, as discussed in <ref>, the streamwise coherence function is equal to one, and the convection velocity is assumed to be a constant value. This implies that all eddies convect at the same velocity while maintaining their coherence. As a result, the cross-spectral density is found to be G_pp(ξ,η,ω) =G_pp(0,η,ω)e^-iωξ/U_c. It follows that ϕ_x(k_1,ω)=δ(k_1-ω/U_c). Substituting (<ref>) into (<ref>), we recover S_pp(X_0,ω)=(ω Y_0c/(4π c_0S_0^2))^2 2π d∑_m=-∞^∞|ℒ(k̅_1,2π m/λ,ω)|^2ϕ_z(2π m/λ,ω), where k̅_1=ω/U_c. Equation (<ref>) is identical to (2.56) in <cit.>. In this case, a given frequency ω is assumed to be associated with a specific wavenumber k̅_1 through the convection velocity U_c. Therefore, only the convection of eddies is considered, while the streamwise distortion is neglected. In terms of noise prediction models, the sound response function ℒ sees only the value at the convective wavenumber k̅_1. This simplification may be the reason why most analytical models overestimate the noise reduction of serrated trailing edges.

To account for the non-frozen nature of the turbulent boundary layer quantitatively, a coherence decay function is needed. We use γ(ξ,0,ω)=e^-|ξ|/l_x(ω), as informed by (<ref>). By employing this approximation, we can evaluate the integral in (<ref>) and obtain ϕ_x(k_1,ω)=l_x(ω)/(π[1+(k_1-ω/U_c(ω))^2l_x^2(ω)]). Equation (<ref>) incorporates the decay of streamwise coherence caused by the distortion of turbulent eddies as they convect downstream. It can be seen that wavenumbers around ω/U_c(ω) still play a significant role in shaping the spectrum; however, the contribution to the spectrum is no longer limited to a one-to-one correspondence between a given frequency and a specific wavenumber. This spreading phenomenon is a notable spectral characteristic of the wall pressure fluctuations beneath a turbulent boundary layer. In the following analysis, we adopt the notation k̃_1(ω) to represent the frequency-dependent convective wavenumber, i.e. k̃_1(ω)=ω/U_c(ω). This parameter represents the dominant wavenumber at the frequency ω. To obtain the acoustic prediction, we need to evaluate the integral of |ℒ|^2ϕ_x over the streamwise wavenumber k_1. As mentioned at the beginning of this section, rather than relying on numerical techniques, we aim to obtain a compact estimation for the integral, enabling a convenient prediction and determining the physical impact of non-frozen turbulence on TE noise.
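The Lorentzian form of ϕ_x integrates to unity over k_1 at every frequency, a property used in the next subsection; this is easily verified numerically (the values of l_x and ω/U_c below are illustrative):

```python
# Sketch: numerical check that phi_x integrates to one over k_1.
import numpy as np
from scipy.integrate import trapezoid

l_x, k1c = 0.01, 150.0                         # l_x (m) and omega/U_c (1/m)
k1 = np.linspace(k1c - 4.0e5, k1c + 4.0e5, 2_000_001)
phi_x = l_x / (np.pi * (1.0 + (k1 - k1c) ** 2 * l_x ** 2))
print(trapezoid(phi_x, k1))                    # -> approximately 1
```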
§.§ Approximation of the model

According to the Mean Value Theorem for Integrals <cit.>, for a given frequency ω and spanwise mode m, there exists a characteristic wavenumber k_1,m^∗(ω) such that ∫_-∞^∞|ℒ(k_1,2π m/λ,ω)|^2ϕ_x(k_1,ω)dk_1=|ℒ(k_1,m^∗(ω),2π m/λ,ω)|^2∫_-∞^∞ϕ_x(k_1,ω)dk_1. Substituting the expression of ϕ_x given in (<ref>) into (<ref>) and recognizing that ∫_-∞^∞ l_x(ω)/(π[1+(k_1-k̃_1(ω))^2l_x^2(ω)])dk_1=1, we have ∫_-∞^∞|ℒ(k_1,2π m/λ,ω)|^2ϕ_x(k_1,ω)dk_1=|ℒ(C_m(ω)k̃_1(ω),2π m/λ,ω)|^2. Substituting (<ref>) into (<ref>), we see that the resulting non-frozen model is identical to the frozen model apart from the introduction of a correction coefficient C_m(ω)=k_1,m^∗(ω)/k̃_1(ω) to the dominant convection wavenumber. For a given frequency, by determining the correction coefficient C_m, the far-field noise can be calculated using essentially the same frozen prediction model. However, due to the complex nature of the response function, determining this coefficient is challenging. Therefore, in the subsequent analysis, we aim to find a convenient approximation for |ℒ|^2. From the analysis of <cit.>, it is known that |ℒ|^2∼ O(|a_m|^2), where a_m=(e^iπ m/2/2)sinc(k_1h-π m/2)+(e^-iπ m/2/2)sinc(k_1h+π m/2). Examining the shape of |a_m|^2, we see that it can be very well approximated by H_m(k_1h)=(1/4)(1/[(k_1h+π m/2)^2+ε_m]+1/[(k_1h-π m/2)^2+ε_m]), where ε_m=1-1/(m+2). It can be seen that H_m is a purely algebraic function of k_1h. Therefore, the correction coefficient can be determined analytically by solving H_m(C_mk̃_1h)=∫_-∞^∞H_m(k_1h)ϕ_x(k_1,ω)dk_1. The integral in (<ref>) can be found analytically to be I_m/4, and the explicit form of I_m is provided in Appendix <ref>. Subsequently, the correction coefficient C_m can be computed as C_m=√(π^2m^2/4-ε_m+(1+√(1+π^2m^2I_m-π^2m^2ε_mI_m^2))/I_m)/σ_2, where σ_1=h/l_x and σ_2=k̃_1h. The correction coefficient C_m is determined by σ_1 and σ_2 only.

With the introduction and evaluation of C_m, the far-field noise spectrum is shown to be S_pp(X_0,ω)=(ω Y_0c/(4π c_0S_0^2))^2 2π d∑_m=-∞^∞|ℒ(C_m(ω)k̃_1(ω),2π m/λ,ω)|^2ϕ_z(2π m/λ,ω). Equation (<ref>) is purposely cast into the same form as the frozen model so that the effects of non-frozen turbulence can be accounted for conveniently by a single correction coefficient C_m(ω). In the following section, we will show the prediction results obtained using (<ref>), along with a discussion of the rationality behind the approximations used in this section.
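Alternatively to the closed form above, C_m can be obtained by solving the defining equation numerically, which also provides an independent check of the algebra. A sketch in non-dimensional variables (the values of σ_1 and σ_2 are illustrative):

```python
# Sketch: solving H_m(C_m * sigma2) = integral of H_m against phi_x for C_m;
# sigma1 = h/l_x, sigma2 = k1_tilde*h.
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import brentq

def H(x, m):
    eps = 1.0 - 1.0 / (m + 2.0)
    return 0.25 * (1.0 / ((x + np.pi * m / 2) ** 2 + eps)
                   + 1.0 / ((x - np.pi * m / 2) ** 2 + eps))

def C_m(m, sigma1, sigma2):
    kh = np.linspace(-400.0, 400.0, 800_001)                  # k_1 h grid
    phi = (1.0 / (np.pi * sigma1)) / (1.0 + ((kh - sigma2) / sigma1) ** 2)
    target = trapezoid(H(kh, m) * phi, kh)                    # r.h.s. of the equation
    # For low modes H decays monotonically away from its peaks, so a simple
    # bracket suffices; higher modes may require a different bracket.
    return brentq(lambda C: H(C * sigma2, m) - target, 1e-6, 50.0)

print(C_m(m=0, sigma1=3.0, sigma2=5.0))
```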
§.§ Prediction results and discussion

Using the new model that incorporates the impact of non-frozen turbulence, we can now predict the far-field noise. We apply the model to flat plates with wide and narrow serrations and use parameters similar to those employed in the preceding numerical simulations. In particular, the Mach number is chosen as M_0=0.03, while the chord length of the flat plate is chosen as c=1.12 m. The streamwise correlation length l_x(ω) and the spanwise wavenumber-frequency spectrum ϕ_z(k_2,ω) are both obtained from the numerical simulation. As shown in figure <ref>, Smol'yakov's model better captures the frequency-dependent variation of the convection velocity; hence it is selected to compute the frequency-dependent convection velocity for the new model. For the frozen model, the same computed spanwise wavenumber-frequency spectrum and a constant convection velocity U_c=0.7U_0 are used. The non-dimensionalized form of the far-field power spectral density, Ψ(X,ω)=2π S_pp(X,ω)/(C_∗(ρ_0v_∗^2)^2(d/c_0)), is used to facilitate a direct comparison with results from frozen models, where C_∗≈0.1553 and v_∗≈0.03U_0.

Figure <ref> presents the predicted far-field noise using both the frozen and non-frozen models. The observer is positioned at 90^∘ above the trailing edge, and the distance between the observer and the trailing edge is equal to the chord length c. The wide serration has a size of λ/h=2 and h/c=0.025, while the narrow serration has a size of λ/h=0.4 and h/c=0.05. For the straight trailing edge, results obtained using both a constant convection velocity U_c=0.7U_0 and the convection velocity calculated by the Smol'yakov model are shown. It can be seen that there are minimal discrepancies between the results obtained using these two types of convection velocities. Therefore, the frequency-dependent variation in convection velocity has a limited influence on the noise prediction for straight trailing edges. Significant noise reduction predicted by the frozen model can be seen within the intermediate- and high-frequency range for both wide and narrow serrations. Notably, narrow serrations perform better at intermediate frequencies, achieving noise reductions of over 15 dB. However, the new model that accounts for non-frozen turbulence predicts less significant noise reduction for both wide and narrow serrations. Moreover, the noise reduction appears less pronounced for narrow serrations compared to wide serrations. Such results have been reported in a number of experiments <cit.>. The long-standing discrepancies observed between previous models and experiments may be explained if non-frozen turbulence is considered.

The directivity patterns predicted by the frozen and non-frozen models for different Mach numbers and frequencies are shown in figure <ref>. The frozen model assumes a constant convection velocity of U_c=0.7U_0. It can be seen that the presence of serrated trailing edges significantly influences the directivity patterns, especially at higher frequencies (see figures <ref>b and <ref>d). Similar to those observed in <ref>, the new model predicts reduced levels of noise reduction for both wide and narrow serrations, while the shapes of the directivity patterns remain similar. From figure <ref>, we can also see that the noise reduction effects are more pronounced in the regions located in front of and behind the flat plate.

In the prediction results shown above, the frequency-dependent correlation length and the spanwise wavenumber-frequency spectrum are obtained through numerical simulations. However, considering the high computational cost associated with computational fluid dynamics (CFD) or experimental investigations, the use of empirical models may be more convenient for practical applications. For example, we can use Chase's model to estimate the spanwise wavenumber-frequency spectrum, which is given by <cit.> ϕ_z(k_2,ω)≈4C_∗ρ_0^2v_∗^4(ω/U_c)^2δ^4/(U_c[((ω/U_c)^2+k_2^2)δ^2+χ^2]^2). Here, χ≈1.33, and the boundary layer thickness δ is estimated using δ/c=0.382Re_c^-1/5, where Re_c represents the Reynolds number based on the chord length. In addition, the Smol'yakov model <cit.> can be used to obtain the streamwise correlation length. Therefore, the new model demonstrates applicability across a broader range of scenarios.

To further investigate the accuracy of the new model, a comparison is conducted between the predicted noise reduction and that from the experimental measurements by <cit.>. Predictions from earlier models are also included for comparison. The experimental study used a NACA 65(12)-10 airfoil at a 5^∘ angle of attack with a free-stream velocity of U_0=40 m/s.
Two different sizes of serrations were used, namely λ/h=0.6 and λ/h=0.1. Due to the unavailability of the wall pressure statistics in the experiments, Chase's and Smol'yakov's models are used in the prediction. The comparison results are shown in figure <ref>. In the experiments, noise reductions are observed at intermediate frequencies, albeit not exceeding approximately 5 dB. Conversely, a noise increase appears at high frequencies. This phenomenon is consistent with the observations reported in the experiments conducted by <cit.>, which investigated full-scale serrated wind turbine blades. Howe's model exhibits a significant overprediction of the noise reduction, reaching a maximum ΔSPL of 30 dB for the narrow serrations (see figure <ref>b). On the other hand, Lyu's original model demonstrates a more realistic prediction, but the overestimation is still pronounced. Comparatively, the discrepancies between the prediction using the new model and the experimental measurements are considerably smaller for both serrations. Therefore, we see that the frozen turbulence assumption contributes significantly to the overestimation of noise reduction. Interestingly, as shown in figure <ref>(a), a noise increase at high frequencies is also predicted by the new model. This increment phenomenon has been attributed to the increased turbulent intensity between serration teeth <cit.>. However, the present model suggests another possible cause of the noise increase at high frequencies. Figure <ref> clearly shows that it is necessary to account for the influence of non-frozen turbulence in acoustic predictions.

Equation (<ref>) shows that the effects of non-frozen turbulence are captured by a single coefficient C_m(ω). We can therefore study its variation and gain a better understanding of the consequences of including non-frozen turbulence. Figure <ref> presents the variation of the correction coefficient C_m as a function of σ_1 and σ_2. As shown in <ref>, σ_1 is defined as the ratio of the half amplitude to the frequency-dependent correlation length, while σ_2 represents the non-dimensional frequency. From figure <ref>(a), we can see that for m=0, the correction coefficient C_m initially decreases and then increases with the increase of σ_1 when σ_2 is small. However, when σ_2 attains large values, C_m decreases monotonically. This implies that at high frequencies, a larger correction is needed for serrations with larger amplitudes. As shown in figure <ref>(b), when σ_1 is set to 0, C_m maintains a value of 1, indicating that no correction is needed for the straight trailing edge. When σ_1 is larger, C_m initially decreases and then remains virtually constant. This suggests that, for serrated trailing edges, the correction is nearly frequency-independent in the high-frequency range. For m=3, as shown in figures <ref>(c) and <ref>(d), C_m continues to exhibit minor changes for large values of σ_2. However, the variation is more significant than for m=0 when σ_2 is small.

To understand why the frozen turbulence assumption tends to overestimate noise reduction, we can study the integrand shown in (<ref>) and its approximation in detail. To achieve that, the response function |ℒ|^2 and its approximation function H_m, as well as the shape of ϕ_x under the non-frozen turbulence condition, are shown in figures <ref> and <ref>. Note that the response function has been scaled by a constant factor for clearer comparison, without affecting the validity of the approximation.
In addition, as pointed out by <cit.>, the incorporation of the incident pressure raises the far-field sound by 6 dB. Here, the response function of only the scattered pressure is considered. In figure <ref>, the comparison results at a frequency of f=600 Hz with a Mach number of M_0=0.03 are presented. It can be seen that the response function exhibits a peak near k_1=0 for m=0, but two peaks for m=9. The approximation function H_m accurately captures the shape of the response function, particularly for m=9, indicating the validity of the approximation. As anticipated, the function ϕ_x exhibits a clear ridge at the streamwise wavenumber k̃_1. Since the calculation of the far-field sound involves integrating the product of |ℒ|^2 and ϕ_x over the wavenumber k_1 (see (<ref>)), it is evident that the values near the peaks of both the response function and the convective ridge are important to the integral. However, assuming frozen turbulence reduces ϕ_x to a Dirac delta function. Therefore, only the value of the response function at k̅_1 is used, neglecting the influence of the shape of the response function. In contrast, when the turbulent boundary layer is not frozen, the shapes of both the response function and the convective ridge play important roles. For instance, as shown in figure <ref>(a), the peaks of the response function and the convective ridge are sufficiently far apart, and the peaks near both k_1=0 and k_1=ω/U_c(ω) contribute significantly to the integral. If the single value |ℒ(k̅_1,k_2,ω)|^2 is used to represent the value of the integral, as assumed under frozen turbulence, the predicted far-field sound becomes significantly lower than that obtained using the integral value. Conversely, if the peaks of the response function and the convective ridge are close to each other (see figure <ref>b), the frozen turbulence assumption tends to predict a higher result. However, as shown in (<ref>), the overall far-field noise is the sum of contributions from all modes. It is known that the contribution of higher modes is less significant compared to lower modes. Therefore, for intermediate and high frequencies, where the peaks of the response functions of the dominant modes and the convective ridge are not closely aligned, the analytical models assuming frozen turbulence predict lower noise levels for serrated trailing edges. With regard to the impact of serration sizes, it can be seen from figures <ref>(c) and <ref>(d) that the shape of the response function is sharper for narrow serrations. Thus, a larger correction to the convective wavenumber may be needed for narrow serrations.

To examine the effect of the Mach number, we present a similar comparison with a higher Mach number M_0=0.2 at a frequency of f=2500 Hz, as shown in figure <ref>. It can be seen that with an increased Mach number, the convective ridge exhibits a sharper peak. Moreover, the convection velocity also increases with the increase of the Mach number. As shown in figure <ref>(a), the peaks of the response function and the convective ridge become closer compared to the results shown in figure <ref>, despite the frequency being increased from 600 Hz to 2500 Hz. This indicates that a smaller correction to the convective wavenumber may be needed for higher Mach numbers.
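The mechanism discussed above can be made quantitative with a toy calculation that uses H_m as a stand-in for |ℒ|^2 (an illustrative assumption; the text uses the true response function):

```python
# Toy comparison: frozen evaluation at the convective wavenumber versus the
# non-frozen integral against phi_x, with H_m standing in for |L|^2.
import numpy as np
from scipy.integrate import trapezoid

def H(x, m):
    eps = 1.0 - 1.0 / (m + 2.0)
    return 0.25 * (1.0 / ((x + np.pi * m / 2) ** 2 + eps)
                   + 1.0 / ((x - np.pi * m / 2) ** 2 + eps))

sigma1, sigma2 = 3.0, 8.0                       # h/l_x and k1_tilde*h
kh = np.linspace(-400.0, 400.0, 800_001)
phi = (1.0 / (np.pi * sigma1)) / (1.0 + ((kh - sigma2) / sigma1) ** 2)
for m in (0, 1, 2):
    frozen = H(sigma2, m)                       # delta-function (frozen) value
    nonfrozen = trapezoid(H(kh, m) * phi, kh)   # spread over the convective ridge
    print(m, frozen, nonfrozen)                 # here nonfrozen > frozen
```

For these low modes, whose peaks lie far from the convective ridge, the non-frozen value exceeds the frozen one, which is the situation producing the overestimated noise reduction described above.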
The above analysis suggests that relying on the frozen turbulence assumption would result in more significant discrepancies for narrow serrations, especially at low Mach numbers.In conclusion, we see that the frozen turbulence assumption may lead to lower or higher sound predictions for different spanwise modes, depending on the distance between the peaks of the response function and the convective ridge. However, when considering the collective contribution of all modes, lower noise levels would be predicted for serrated trailing edges in the intermediate- and high-frequency ranges. In other words, the frozen turbulence assumption results in an overestimated noise reduction. It is worth noting that the new model proposed in this study may have limited effectiveness when applied to very low frequencies. The first reason is that the phase velocity demonstrates significant variations within this frequency range, as shown in figure <ref>(b). The second reason is that the exponential decay function assumed in the previous analysis may not accurately capture the behavior of the coherence loss. In this case, the introduction of a Gaussian phase decay term might be helpful to provide a more appropriate description of the coherence decay <cit.>. Nevertheless, in practical applications, it is the intermediate- and high-frequency ranges that are of most interest, because this is where noise reduction occurs. § PHYSICAL MECHANISM In <ref>, a mathematical examination was conducted to explain the impact of the frozen turbulence assumption. In this section, we aim to elucidate the underlying physical mechanism related to noise reduction using non-frozen turbulence. Previous work has shown that the physical mechanism behind the noise reduction can be attributed to the destructive interference of the scattered pressure <cit.>. To explore the efficiency of this interference, we show the scattered surface pressure distributions at a fixed frequency of 2000 Hz with the Mach number M_0=0.2 in figure <ref>. The horizontal and vertical axes are scaled by the spanwise and streamwise frequency-dependent correlation lengths obtained using the Smol'yakov model, respectively. Hence, the distance between adjacent parallel dashed lines denotes the correlation length at this particular frequency.As shown in figures <ref>(a) and <ref>(b), little phase variation appears when k_1h attains a small value, indicating a weak noise reduction. However, as k_1h increases, significant phase variation along the serration edges can be observed (see figures <ref>c and <ref>d). The interference resulting from this phase variation leads to noise reductions in the far field. Under the frozen turbulence assumption, the streamwise correlation length is assumed to be infinitely large. Consequently, the phase variations along the entire serration edges are considered to contribute to the destructive interference (assuming spanwise coherence is sufficiently large for now). However, in realistic non-frozen flows, only the phase variation within the streamwise correlation length is effective in the destructive interference. From figures <ref>(c) and <ref>(d), it can be seen that the serration amplitudes are 2-5 times larger than the streamwise correlation lengths, highlighting the significance of considering the streamwise length scale for accurate noise predictions.In the above discussion, we purposely ignored the effects of spanwise correlation length for a simpler illustration. 
In realistic flows, both the spanwise and streamwise correlation lengths are important in determining the efficiency of the destructive interference. In fact, we can see from figure <ref> the 2D grids formed by the dashed lines that represent the streamwise and spanwise correlation lengths. It is within the same grid cell that the phase variation is effective. The 2D grid reflects the 3D structures of the turbulent flow. It can be seen that with the increase of k_1h, the grids become denser, indicating that the effective area of destructive interference is smaller. Therefore, the non-frozen correction must be included, particularly at high frequencies. The appearance of the 2D grid also explains why noise reduction does not increase monotonically with serration sharpness: the effective area of the destructive interference is heavily restricted by the spanwise and streamwise correlation lengths.

To provide further clarity on the phase variation along the serration edges, we plot the real and imaginary parts of the scattered pressure for different values of k_1h in figure <ref>. The streamwise coordinates are normalized by half the amplitude of the serration h. Similarly, the distance between adjacent dashed lines corresponds to the streamwise correlation length. It can be seen from figure <ref>(a) that the real part remains negative for k_1h=2, while the imaginary part changes sign over the serration edge. Although the correlation length is larger than the serration amplitude, the noise reduction effect is not significant due to the insignificant phase variations. In figure <ref>(b), for k_1h=6, the phase differences of the scattered pressure are more significant, especially for the imaginary part. When k_1h becomes larger, as shown in figures <ref>(c) and <ref>(d), strong variation can be seen for both the real and imaginary parts along the serration edge. However, these variations are less pronounced within a streamwise correlation length. Therefore, the interference is not as effective as that assumed under frozen turbulence. This explains why the frozen turbulence assumption tends to overestimate the noise reduction when employed in noise prediction models.

In the case of leading-edge (LE) noise problems, where the inflow is typically uniform, the turbulent upwash velocity spectra can be accurately captured by various models, such as the von Kármán spectrum model <cit.>. By assuming frozen turbulence, analytical prediction models can provide realistic results for the noise emitted from LE serrations <cit.>. However, this is not the case for TE serrations. As the turbulent boundary layer develops on a flat plate or an airfoil, the turbulent eddies undergo severe distortions due to the strong shear stresses, leading to significant streamwise coherence decay. Therefore, the impact of non-frozen turbulence must be taken into account. As shown in this study, a finite streamwise correlation length is introduced into the noise prediction model, resulting in significantly improved prediction accuracy.

§ CONCLUSION

This study investigates the impact of non-frozen turbulence on the noise prediction model for serrated trailing edges by analyzing the statistical characteristics of wall pressure fluctuations. A fully-developed turbulent boundary layer is simulated using LES, with the turbulence at the inlet generated by the DFM. The accuracy of the simulated mean flow statistics is validated against DNS and a previous study by <cit.>.
The simulation results demonstrate that as the spatial separations increase, the streamwise-spanwise correlation contours change from circular to oval. Additionally, the space-time correlation contour lines concentrate into a narrow band. The mean convection velocity increases with increasing streamwise separation, while the phase velocity for a fixed streamwise separation initially increases and then decays with increasing frequency. Coherence function contours for both the streamwise and spanwise directions are presented. The variation of the streamwise frequency-dependent correlation length indicates that the infinite streamwise correlation length assumed by frozen turbulence is not appropriate.

Lyu's model for serrated trailing edges is used as the basis for developing a non-frozen noise prediction model. This model involves integrating the product of the response function and the wavenumber-frequency spectrum over the streamwise wavenumber. Based on the statistical analysis of wall pressure fluctuations, an exponential coherence decay function is assumed, departing from the constant value employed under the frozen turbulence assumption. By examining the properties of the response function, an approximation function is introduced, allowing for the inclusion of a correction coefficient to account for the impact of non-frozen turbulence. Two non-dimensional parameters are identified to be critical for the non-frozen correction, i.e. σ_1=h/l_x and σ_2=k̃_1h. The far-field sound spectra for different serration sizes demonstrate that the new model predicts lower noise reduction. Comparative analysis with the experimental measurements of <cit.> demonstrates that the new model has significantly better prediction capability. Furthermore, the noise increase at high frequencies may also be captured by the new model, suggesting a new cause for the high-frequency noise increase observed in various experiments. Through an examination of the response function and the convective ridge, it is shown that the far-field noise depends on the relative positions of their peaks.

The physical mechanism underlying the overprediction of noise models employing the frozen turbulence assumption is found to be an overestimated destructive interference of the scattered pressure. As the non-dimensional parameter k_1h increases, the streamwise correlation length becomes shorter than the amplitude of the serration. Only the phase variations within a streamwise correlation length can result in effective destructive interference. Consequently, the far-field noise is larger compared to that predicted under the frozen turbulence assumption. This highlights the non-dimensional parameter h/l_x as a crucial factor in determining the efficiency of destructive interference along the serration edge.

It should be noted that the installation of serrations may alter the flow field near the trailing edge. The spectral properties may also change near the serrations <cit.>. The present model relaxes the frozen turbulence assumption but still assumes that turbulence is statistically homogeneous in the streamwise direction. In applications where aerodynamic loading is present, the streamwise inhomogeneity may also need to be considered. This will be studied in our future work.

§ ACKNOWLEDGMENT

The first author (H.T.) would like to express gratitude to Dr. Yi Wang at Huairou Laboratory for the fruitful discussions on numerical simulations.
The authors also acknowledge the financial support from Laoshan Laboratory under grant number LSKJ202202000.

§ DECLARATION OF INTERESTS

The authors report no conflict of interest.

§ MESH CONVERGENCE TEST

To evaluate the convergence of the computational mesh, we use three different meshes: coarse, medium, and fine. The grid sizes are listed in Table <ref>. The resulting mean velocity and fluctuating streamwise velocity profiles, computed using these three meshes, are shown in figure <ref>. We can see that the discrepancies between the results obtained with the fine and medium meshes are smaller than those between the medium and coarse meshes, particularly for the fluctuating streamwise velocity (see figure <ref>b). Therefore, the medium mesh is used in the simulation.

§ EMPIRICAL CONVECTION VELOCITY MODELS

The Bies model is given by <cit.> U_c(ω)/U_0=(U_0/(ωδ^∗))^0.055-0.3. The Smol'yakov model is given by <cit.> U_c(ω)/U_0=1.6(ωδ^∗/U_0)/(1+16(ωδ^∗/U_0)^2)+0.6.

§ EMPIRICAL FREQUENCY-DEPENDENT CORRELATION LENGTH MODELS

The Corcos model <cit.> can be written as l_x,z(ω)=U_c/(α_x,zω), where α_x=0.11, α_z=0.73 and U_c=0.7U_0 are used. The Smol'yakov model can be expressed as <cit.> l_x,z(ω)=U_c/(α_x,zω)A^-1, with A=[1-β U_c/(ωδ^∗)+(β U_c/(ωδ^∗))^2]^1/2. Here, α_x=0.124, α_z=0.8 and β=0.25. The convection velocity is determined by employing (<ref>) and δ^∗ is approximated using δ^∗/c≈0.048/Re_c^1/5. Based on Goody's model <cit.>, <cit.> proposed an expression for the frequency-dependent correlation length, which can be written as l_x(ω)/δ^*=a(ωδ^* / U_0)^0.3/(b+(ωδ^* / U_0)^3.854)^0.389, with a=1.357 ln(Re_θ)-6.713, b=1.183Re_θ^-0.593. And l_z(ω)/δ^*=a(ωδ^* / U_0)^1.0/(b+(ωδ^* / U_0)^3.073)^0.651, with a=0.079 ln(Re_θ)+0.155, b=0.348Re_θ^-0.495.

§ EXPRESSION OF THE INTEGRATED VALUE I_M

The expression of the integral result shown in <ref> is written as I_m=(I_m1+I_m2)/(I_m3+I_m4)+(I_m5+I_m6)/(I_m7+I_m8), where I_m1=4√(ε_m)(4ε_m+π^2m^2+4π mσ_2+4σ_2^2-4σ_1^2), I_m2=4(-4ε_mσ_1+π^2m^2σ_1+4π mσ_1σ_2+4σ_1σ_2^2+4σ_1^3), I_m3=√(ε_m)(16ε_m^2+8(π m+2σ_2-2σ_1)(π m+2σ_2+2σ_1)ε_m+π^4m^4+8π^3σ_2m^3), I_m4=√(ε_m)(24π^2m^2σ_2^2+8π^2m^2σ_1^2+32π mσ_2^3+32π mσ_1^2σ_2+16σ_2^4+32σ_1^2σ_2^2+16σ_1^4), I_m5=4√(ε_m)(4ε_m+π^2m^2-4π mσ_2+4σ_2^2-4σ_1^2), I_m6=4(-4ε_mσ_1+π^2m^2σ_1-4π mσ_1σ_2+4σ_1σ_2^2+4σ_1^3), I_m7=√(ε_m)(16ε_m^2+8(π m-2σ_2-2σ_1)(π m-2σ_2+2σ_1)ε_m+π^4m^4-8π^3σ_2m^3), I_m8=√(ε_m)(24π^2m^2σ_2^2+8π^2m^2σ_1^2-32π mσ_2^3-32π mσ_1^2σ_2+16σ_2^4+32σ_1^2σ_2^2+16σ_1^4).
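For convenience, the empirical models listed above can be implemented in a few lines. A sketch with illustrative inputs (the Smol'yakov convection velocity together with the Corcos and Smol'yakov streamwise correlation lengths):

```python
# Sketch implementing the empirical models of the appendices (illustrative inputs).
import numpy as np

def Uc_smolyakov(omega, delta_star, U0):
    w = omega * delta_star / U0
    return U0 * (1.6 * w / (1.0 + 16.0 * w ** 2) + 0.6)

def lx_corcos(omega, U0, alpha_x=0.11):
    return 0.7 * U0 / (alpha_x * omega)             # U_c = 0.7 U0 assumed

def lx_smolyakov(omega, delta_star, U0, alpha_x=0.124, beta=0.25):
    Uc = Uc_smolyakov(omega, delta_star, U0)
    q = beta * Uc / (omega * delta_star)
    return Uc / (alpha_x * omega) / np.sqrt(1.0 - q + q ** 2)

omega = 2.0 * np.pi * np.logspace(1, 4, 5)          # 10 Hz to 10 kHz
print(lx_smolyakov(omega, delta_star=5e-3, U0=40.0))
```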
"authors": [
"Haopeng Tian",
"Benshuai Lyu"
],
"categories": [
"physics.flu-dyn"
],
"primary_category": "physics.flu-dyn",
"published": "20231226144155",
"title": "The impact of non-frozen turbulence on the modelling of the noise from serrated trailing edges"
} |
Financial support through FWF-projects Y0782 and P35197 is gratefully acknowledged.

The Knothe–Rosenblatt distance and its induced topology
M. Beiglböck, G. Pammer, and A. Posch

A basic and natural coupling between two probabilities on ℝ^N is given by the Knothe–Rosenblatt coupling. It represents a multiperiod extension of the quantile coupling and is simple to calculate numerically. We consider the distance on 𝒫(ℝ^N) that is induced by the transport costs associated to the Knothe–Rosenblatt coupling. We show that this Knothe–Rosenblatt distance metrizes the adapted weak topology, which is a stochastic process version of the usual weak topology and plays an important role, e.g. concerning questions on stability of stochastic control and probabilistic operations. We also establish that the Knothe–Rosenblatt distance is a geodesic distance, give a Skorokhod representation theorem for the adapted weak topology, and provide multi-dimensional versions of our results.

keywords: Knothe–Rosenblatt coupling, triangular transformations, adapted weak topology, barycenter of probabilities

§ INTRODUCTION

We fix N∈ℕ and write λ for the Lebesgue measure on (0,1). For p∈ [1,∞) we equip L_p(λ^N)=L_p(λ^N; ℝ^N) with the usual norm, and we equip the set 𝒫_p(ℝ^N) of probabilities on ℝ^N with finite p-th moments with the p-Wasserstein distance, which induces p-weak convergence. For p=0 we equip L_0(λ^N) with ‖f-g‖_0=∫ |f-g|_0 dλ^N, where |·|_0 := |·|∧ 1, and consider the usual weak convergence on the set 𝒫_0(ℝ^N) of probabilities on ℝ^N.

§.§ Quantile processes

The Knothe–Rosenblatt coupling is an N-step extension of the quantile coupling, which we first recall: For every probability μ on ℝ there is an a.s. unique increasing quantile function Q=Q^μ: (0,1)→ℝ such that Q^μ_#(λ)=μ. The quantile coupling (or co-monotone coupling) of probabilities μ, ν is given by (Q^μ,Q^ν)_#(λ). In multiple steps, we are interested in functions that are componentwise increasing: A map T=(T_k)_k=1^N: (0,1)^N→ℝ^N is adapted or triangular if each T_k depends only on the first k variables, i.e. T_k(x_1,…, x_N)=T_k(x_1,…, x_k) (cf. <cit.>). A triangular Q=(Q_k)_k=1^N: (0,1)^N→ℝ^N is a quantile process if for all k≤ m ≤ N, u_1, …, u_m, u_k' ∈ (0,1) with u_k ≤ u'_k

Q_k(u_1, …,u_k)≤ Q_k(u_1, …, u_k'), (inc)

Q_k(u_1, …, u_k) = Q_k(u_1, …, u_k') ⇒ Q_m(u_1, …, u_k, …, u_m )= Q_m(u_1, …, u_k', …, u_m). (con)

For μ∈𝒫_0(ℝ^N) there is an a.s. unique quantile process Q^μ with Q^μ_#(λ^N)=μ. In essence this follows by disintegrating μ into a product of successive kernels μ(du_1, …, du_N)= μ_1(du_1)μ_2^u_1(du_2) …μ_N^u_1,…,u_N-1(du_N) and taking the respective quantile functions as components of Q^μ. We call μ triangular regular if all μ_k^u_1,…,u_k-1, k≤ N, are atom free. In this case the consistency condition (con) can be omitted. Every absolutely continuous measure is triangular regular. We refer to <Ref> for details.

§.§ Knothe–Rosenblatt distance and adapted weak topology

For μ, ν∈𝒫_p(ℝ^N) the Knothe–Rosenblatt coupling is given by π^KR_μ,ν:=(Q^μ, Q^ν)_#(λ^N). For p∈{0}∪ [1,∞) the p-Knothe–Rosenblatt distance is defined by KR_p^p(μ,ν):=∫ |x-y|^p_p dπ^KR_μ,ν(x,y)= ∫ |Q^μ - Q^ν|_p^p dλ^N, where we abuse notation for p=0 and convene that |x-y|^0_0:= |x-y|∧ 1.
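As an illustration of this definition, KR_p can be evaluated by Monte Carlo integration whenever the quantile processes are available in closed form. A sketch (not from the paper) for two Gaussian AR(1)-type laws on ℝ^2; the example and its parameters a, σ are our own illustrative choices:

```python
# Sketch: Monte Carlo evaluation of KR_1 for two laws on R^2 with closed-form
# quantile processes (X1 = Z1, X2 = a*X1 + sigma*Z2, Z_i iid standard normal).
import numpy as np
from scipy.stats import norm

def Q(u, a, sigma=1.0):
    x1 = norm.ppf(u[:, 0])                    # first quantile function
    x2 = a * x1 + sigma * norm.ppf(u[:, 1])   # conditional quantile of X2 given X1
    return np.stack([x1, x2], axis=1)

rng = np.random.default_rng(0)
u = rng.uniform(size=(10**6, 2))              # samples of lambda^2
d = np.abs(Q(u, a=0.5) - Q(u, a=-0.5)).sum(axis=1)
print(d.mean())                               # KR_1 = E|Z1| = sqrt(2/pi) ~ 0.798
```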
We formulate our first contribution:

Theorem. For p ∈ {0} ∪ [1,∞) the Knothe–Rosenblatt distance KR_p metrizes the (p-)adapted weak topology on 𝒫_p(ℝ^N).

The adapted weak topology refines the usual weak topology on 𝒫_p(ℝ^N) with the purpose of taking the flow of information into account. It has been introduced independently by different groups of authors: by Aldous <cit.> based on the prediction process, by Hellwig <cit.> using conditional distributions, by Hoover–Keisler <cit.> using adapted functions, and by Pflug–Pichler <cit.>, Rüschendorf <cit.>, Lasalle <cit.>, and Nielson–Sun <cit.> using adapted variants of the Wasserstein distance. Importantly, all of these constructions yield the same topology in finite discrete time (<cit.>). We refer to <cit.> for a recent survey and will give a precise definition of the adapted Wasserstein distance in <Ref> below, when we compare the Knothe–Rosenblatt distance and the adapted Wasserstein distance.

It is well known (and not hard to see) that convergence in p-Wasserstein distance is equivalent to weak convergence plus convergence of p-th moments. Likewise, convergence in adapted p-Wasserstein distance is equivalent to adapted weak convergence plus convergence of p-th moments (see e.g. <cit.>).

§.§ Isometric embedding and adapted Skorokhod representation theorem

It follows from (<ref>) that

Q : (𝒫_p(ℝ^N), KR_p) → (L_p(λ^N), ‖·‖_p), μ ↦ Q^μ,

is an isometric embedding for p ∈ {0} ∪ [1,∞). We can thus rephrase Theorem <ref> as follows:

Corollary. Let p ∈ {0} ∪ [1,∞) and μ_n, μ ∈ 𝒫_p(ℝ^N), n ∈ ℕ. Then μ_n → μ in the p-adapted weak topology if and only if Q^μ_n → Q^μ in ‖·‖_p.

Recall that the Skorokhod representation theorem asserts that for a sequence of probabilities μ_n, μ ∈ 𝒫(ℝ), n ∈ ℕ, on the real line we have μ_n → μ weakly if and only if on some probability space (Ω, ℙ) there are random variables X_n ∼ μ_n, X ∼ μ, n ∈ ℕ, with X_n → X in probability. Indeed, in this case one can take (Ω, ℙ) := ((0,1), λ), X_n := Q^μ_n and X := Q^μ. In complete analogy, Corollary <ref> asserts that adapted weak convergence of a sequence of probabilities μ_n → μ is equivalent to the convergence of the respective quantile processes in probability on ((0,1)^N, λ^N). We note that in contrast to the classical Skorokhod theorem, the assertion on almost sure convergence does not hold in the multi-period setting, as we will discuss in <Ref> below.

§.§ Completion of (𝒫_p(ℝ^N), KR_p)

Equipped with the Knothe–Rosenblatt distance KR_p, the space 𝒫_p(ℝ^N) is not complete.[Also all the other known natural metrics of the weak adapted topology are not complete. In <cit.> it is established that the completion of 𝒫_p(ℝ^N) w.r.t. adapted Wasserstein distance can be identified with the set of filtered stochastic processes.] In view of the isometric embedding (<ref>) it is clear that the completion 𝒫_p(ℝ^N)^KR_p of 𝒫_p(ℝ^N) consists precisely of the ‖·‖_p-closure of all quantile processes with finite p-th moment. In the next result we characterize this closure.

Following <cit.> we call a triangular transformation T = (T_k)_k=1^N : (0,1)^N → ℝ^N increasing if it satisfies (inc), i.e. for all k ≤ N and u_1,…,u_k, u'_k with u_k ≤ u'_k

T_k(u_1,…,u_k) ≤ T_k(u_1,…,u_k-1,u'_k).

We call T = (T_k)_k=1^N : (0,1)^N → ℝ^N strictly increasing if it satisfies, for all k ≤ N and u_1,…,u_k, u'_k with u_k < u'_k,

T_k(u_1,…,u_k) < T_k(u_1,…,u_k-1,u'_k).   (sinc)

We write L^IT_p(λ^N) / L^SIT_p(λ^N) for the set of all increasing / strictly increasing triangular transformations which are p-integrable, and we write L^quantile_p(λ^N) for the set of p-integrable quantile processes.

Theorem. Let p ∈ {0} ∪ [1,∞). Then L_p^SIT(λ^N), L_p^quantile(λ^N) and L_p^IT(λ^N) are convex cones with:

* The ‖·‖_p-closure of all p-integrable quantile processes L^quantile_p(λ^N) consists precisely in L^IT_p(λ^N).
In particular, the completion 𝒫_p(ℝ^N)^KR_p of 𝒫_p(ℝ^N) is L^IT_p(λ^N) up to isometry.

* μ ∈ 𝒫_p(ℝ^N) is triangular regular if and only if Q^μ ∈ L^SIT_p(λ^N).

* L^SIT_p(λ^N) is a dense subset of L^quantile_p(λ^N).

The following diagram summarizes the relations between these sets:

L^SIT_p(λ^N) = {Q^μ : μ ∈ 𝒫_p(ℝ^N), μ triangular regular}
  ⊆ L^quantile_p(λ^N) = {Q^μ : μ ∈ 𝒫_p(ℝ^N)}
  ⊆ L^IT_p(λ^N) = cl_‖·‖_p {Q^μ : μ ∈ 𝒫_p(ℝ^N)}.

§.§ Geodesic completeness and barycenters

Starting with the work of McCann <cit.>, the fact that the Wasserstein distance on 𝒫_p(ℝ^N) is geodesic has played a tremendous role in optimal transport and its applications. In the case of the Knothe–Rosenblatt distance, the geodesic structure becomes very simple due to the isometry (<ref>), which maps into the linear space L_p(λ^N). We have seen in Theorem <ref> that the set of quantile processes is a convex cone and thus obtain:

Corollary. The sets 𝒫_p(ℝ^N), {μ ∈ 𝒫_p(ℝ^N) : μ triangular regular}, 𝒫_p(ℝ^N)^KR_p are geodesically complete w.r.t. KR_p for p ≥ 1, and geodesics are unique for p > 1.

The notion of Wasserstein barycenter was famously introduced by Agueh and Carlier <cit.>. Naturally, we can define the analogous concept for the Knothe–Rosenblatt distance. Using again the isometry (<ref>), it follows that for μ_1,…,μ_n ∈ 𝒫_p(ℝ^N) and convex weights λ_1,…,λ_n there exists a unique (for p > 1) Knothe–Rosenblatt barycenter, i.e. a probability μ̄ ∈ 𝒫_p(ℝ^N) that minimizes

inf_{μ ∈ 𝒫_p(ℝ^N)} λ_1 KR_p^p(μ, μ_1) + … + λ_n KR_p^p(μ, μ_n);

indeed, μ̄ is given by Q^μ̄ = λ_1 Q^μ_1 + … + λ_n Q^μ_n.

We also establish that the geometric structure of (𝒫_p(ℝ^N), KR_p) is compatible with properties that are interesting from a stochastic process perspective. In particular, we show that the set of martingale measures and the set of all probabilities that correspond to predictable processes are geodesically convex, see <Ref>.

§.§ Bicausal couplings and adapted Wasserstein distance

Adapted versions of the classical Wasserstein distance have been independently introduced by different authors, see <cit.>. Here `adaptedness' is incorporated in its definition by imposing an additional causality constraint on the set of admissible couplings that we introduce next: Given μ, ν ∈ 𝒫_0(ℝ^N), the set Π(μ,ν) consists of all measures π ∈ 𝒫_0(ℝ^N × ℝ^N) with marginals μ and ν. The set Π_bc(μ,ν) of bicausal couplings consists of those π whose successive disintegration kernels satisfy a.s.

π_k^x_1,…,x_k-1,y_1,…,y_k-1 ∈ Π(μ_k^x_1,…,x_k-1, ν_k^y_1,…,y_k-1), k ≤ N,

where we disintegrate π (and μ, ν) analogously to (<ref>) above, i.e.

π(dx_1, dy_1, …, dx_N, dy_N) = π_1(dx_1, dy_1) π_2^x_1,y_1(dx_2, dy_2) … π_N^x_1,y_1,…,x_N-1,y_N-1(dx_N, dy_N).

If μ, ν are both triangular regular, the set of bicausal couplings is the closure of Monge couplings which are concentrated on the graph of a triangular mapping with triangular inverse, see <cit.>. The adapted Wasserstein distance of μ, ν ∈ 𝒫_p(ℝ^N) is given by

AW_p^p(μ, ν) := inf{∫ |x - y|_p^p dπ : π ∈ Π_bc(μ, ν)}.

A frequently used definition of the Knothe–Rosenblatt coupling is the following: π is the Knothe–Rosenblatt coupling between μ and ν if and only if we have π-almost surely that for k ≤ N and x_1,…,x_k-1, y_1,…,y_k-1,

π_k^x_1,…,x_k-1,y_1,…,y_k-1 is the quantile coupling of μ_k^x_1,…,x_k-1 and ν_k^y_1,…,y_k-1,

see e.g. <cit.>. In particular, the Knothe–Rosenblatt coupling is bicausal and thus

AW_p(μ, ν) ≤ KR_p(μ, ν).

We collect a number of further results comparing KR_p and AW_p:

* KR_p and AW_p are not equivalent as metrics. Indeed, their completions are incomparable, as we discuss in Section <ref> below.
* AW_p and KR_p are equivalent metrics on the set of triangular L-Lipschitz measures, see Theorem <ref> below. Here we say that μ is triangular L-Lipschitz if the map

ℝ^k-1 ∋ (x_1,…,x_k-1) ↦ μ_k^x_1,…,x_k-1 ∈ 𝒫_p(ℝ)

is L-Lipschitz w.r.t. the p-Wasserstein distance on 𝒫_p(ℝ) for each k ≤ N.

* In <cit.> Rüschendorf shows that under a monotone regression assumption AW_p(μ, ν) = KR_p(μ, ν), see also <cit.> and <cit.>.

It is shown in <cit.> that the completion of 𝒫_p(ℝ^N) w.r.t. the adapted Wasserstein distance can be interpreted as the set FP_p of all stochastic processes with filtration. While (FP_p, AW_p) is geodesically complete, 𝒫_p(ℝ^N) itself is not geodesically complete when equipped with the adapted Wasserstein distance. We note that, in contrast to the Knothe–Rosenblatt case, AW_p-compact subsets of FP_p admit a simple characterization in the spirit of Prokhorov's theorem, see <cit.>. We consider this a distinct advantage of AW_p over KR_p.

§.§ Knothe–Rosenblatt rearrangement in (ℝ^d)^N

In view of applications, it is important to consider the adapted weak topology on 𝒫_p((ℝ^d)^N), where d ∈ ℕ stands for the dimension of the state space, while N ∈ ℕ denotes the number of time steps. Given that the Knothe–Rosenblatt distance is a particularly simple metric for the adapted weak topology, it is interesting to ask for extensions to d > 1. We are aware of two obvious generalizations that are suitable for our purposes.

The more interesting (from our perspective) extension of the Knothe–Rosenblatt coupling to multiple dimensions is based on defining the quantile process for the state space ℝ^d. In the case p = 2 this amounts to: A triangular Q = (Q_k)_k=1^N : ((0,1)^d)^N → (ℝ^d)^N is a quantile process if for all k ≤ m ≤ N and u_1,…,u_m, u'_k ∈ (0,1)^d

Q_k(u_1,…,u_k-1, ·) is the gradient of a convex function,   (|·|_2-mon)
Q_k(u_1,…,u_k) = Q_k(u_1,…,u_k-1,u'_k) ⇒ Q_m(u_1,…,u_k,…,u_m) = Q_m(u_1,…,u'_k,…,u_m).   (con)

Condition (|·|_2-mon) asserts that Q_k(u_1,…,u_k-1, ·) is a 𝒲_2-optimal transport map. For general p > 0 we would consider 𝒲_p-optimal mappings, see <cit.>. The monotonicity condition can then be rephrased (for instance) as

Q_k(u_1,…,u_k-1, ·) is |· - ·|_p^p-cyclically monotone.   (|·|_p-mon)

As above, for each μ ∈ 𝒫_p((ℝ^d)^N) there exists a unique quantile process Q^μ that pushes (λ^d)^N to μ. Analogously to the one-dimensional case we define a generalized Knothe–Rosenblatt coupling and distance through

KR_μ,ν := law(Q^μ, Q^ν) and KR_p^p(μ,ν) := ∫ |Q^μ - Q^ν|_p^p dλ^dN.

For this construction, our main result extends:

Theorem. Let p > 1. Then the map

(𝒫_p((ℝ^d)^N), KR_p) → (L_p((λ^d)^N), ‖·‖_p), μ ↦ Q^μ,

is an isometry, and KR_p metrizes the p-adapted weak topology. That is, for μ, μ_n ∈ 𝒫_p((ℝ^d)^N), n ∈ ℕ, we have

Q^μ_n → Q^μ in L_p((λ^d)^N) ⟺ KR_p(μ_n, μ) → 0 ⟺ AW_p(μ_n, μ) → 0.

Another possibility is to interpret the definition of the Knothe–Rosenblatt distance given in (<ref>) in multiple dimensions. Fix p ≥ 1 and μ, ν ∈ 𝒫_p((ℝ^d)^N). We say that π ∈ Π_bc(μ, ν) is triangular optimal if we have π-almost surely that for k ≤ N and x_1,…,x_k-1, y_1,…,y_k-1,

π_k^x_1,…,x_k-1,y_1,…,y_k-1 is a 𝒲_p-optimal coupling between μ_k^x_1,…,x_k-1 and ν_k^y_1,…,y_k-1.

We denote the set of all triangular optimal couplings by Π_to(μ, ν) and set

KR^to_p(μ, ν)^p := sup_{π ∈ Π_to(μ, ν)} ∫ |x - y|^p dπ(x,y).

With these definitions we have:

Theorem. Let p ≥ 1. Then KR^to_p induces the p-weak adapted topology. That is, for μ, μ_n ∈ 𝒫_p((ℝ^d)^N), n ∈ ℕ, we have KR^to_p(μ_n, μ) → 0 if and only if AW_p(μ_n, μ) → 0. However, KR^to_p does not satisfy the triangle inequality and is thus not a metric
(see Section <ref>).

§.§ Literature

The Knothe–Rosenblatt coupling was independently introduced by Knothe <cit.> and Rosenblatt <cit.> for applications to geometric inequalities. The notion of triangular transformation was introduced in <cit.> and applied to the derivation of (generalized) Talagrand and logarithmic Sobolev inequalities. Among further results, this article also establishes that convergence in total variation implies convergence of the corresponding increasing triangular transformations, see also Corollary <ref> below.

Triangular transformations have seen increasing use in machine learning. E.g., triangular transformations play a central role in <cit.>, where a Metropolis–Hastings algorithm for sampling complex high-dimensional distributions is proposed. In <cit.> sparse triangular transport maps are used as a tractable class of transformations to tackle filtering and variational inference problems. They are employed for conditional density estimation and structure learning in <cit.>. In <cit.> neural networks with triangular structure are used to parametrize the dual variables of the causal transport problem. We emphasize that, following the contributions on quantile / increasing triangular transformations in <cit.> (among others), the results in the preparatory Section <ref> below are folklore.

Continuous-time versions of the Knothe–Rosenblatt coupling are presented in <cit.>. In particular, these articles extend results of <cit.> on the optimality of the Knothe–Rosenblatt coupling in the bicausal transport problem to continuous time and consider the induced topology on laws of stochastic processes. The article <cit.> connects the Knothe–Rosenblatt coupling to classical optimal transport: for triangular regular probabilities, the Knothe–Rosenblatt transformation is the limit of Brenier maps when assigning progressively smaller weights to later coordinates.

Adapted versions of the Wasserstein distance play an important role in stochastic optimization and multistage programming, see e.g. <cit.>. Adapted weak topologies are a useful tool for various problems in mathematical finance, e.g. in the pricing of game options <cit.>, questions of insider trading and enlargement of filtrations <cit.>, stability of pricing / hedging and utility maximization <cit.>, and interest rate uncertainty <cit.>. We refer to <cit.> for a more complete account of the literature on adapted Wasserstein distance and adapted weak topologies.

§.§ Organisation of the paper

In Section <ref> we introduce notation and basic results. Specifically, we discuss the representation of the Knothe–Rosenblatt coupling through quantile processes (Lemma <ref>) and the alternative characterization of the Knothe–Rosenblatt coupling given in (<ref>) above. In Section <ref> we give the proof of our main result that the Knothe–Rosenblatt distance induces the weak adapted topology, based on the notion of modulus of continuity of measures introduced in <cit.>. As a consequence of our argument we also obtain that KR_p and AW_p are equivalent for (uniformly) triangular Lipschitz kernels. In Section <ref> we draw consequences from the isometry (<ref>) between the Knothe–Rosenblatt distance and the p-norm on L_p(λ^N).
In particular, we prove the results concerning the metric completion of 𝒫_p as well as geodesic completeness announced above. Finally, the appendix is concerned with the Knothe–Rosenblatt distance in the case of a multi-dimensional state space.

§ PREPARATIONS

§.§ Setting and Notation

For a Polish metric space (𝒳, d_𝒳), we denote by 𝒫_p(𝒳) the set of Borel probability measures on 𝒳 that finitely integrate d_𝒳(x,x_0)^p for some (thus any) x_0 ∈ 𝒳. For the special case p = 0, we use 𝒫_0 and 𝒫 synonymously. When T : 𝒳 → 𝒴 is a measurable map between Polish spaces and μ ∈ 𝒫(𝒳), we write T_#μ for the push-forward measure of μ under T. The product space 𝒳 × 𝒴 is endowed with the product topology and, in the case of Polish metric spaces, with the product metric d_p((x_0,y_0),(x_1,y_1))^p = d_𝒳(x_0,x_1)^p + d_𝒴(y_0,y_1)^p, where x_0,x_1 ∈ 𝒳 and y_0,y_1 ∈ 𝒴. A probability π ∈ 𝒫(𝒳 × 𝒴) is called a coupling with marginals (μ,ν) ∈ 𝒫(𝒳) × 𝒫(𝒴) if pr^1_#π = μ and pr^2_#π = ν, where we use pr^i to denote the i-th coordinate projection. A coupling π is said to be concentrated on the graph of a measurable function if there exists a measurable map T : 𝒳 → 𝒴 such that (𝕀,T)_#μ = π.

Given a probability μ ∈ 𝒫(𝒳), we write L_p(μ;𝒴) for the set of p-integrable functions (when p ≥ 1) resp. measurable functions (when p = 0) that take values in 𝒴. We equip L_p(μ;𝒴) with the usual L_p-distance, whereas we understand under L_0-convergence simply convergence in μ-probability. The p-Wasserstein distance between μ,ν ∈ 𝒫_p(𝒳) is given by

𝒲_p(μ,ν) := inf_{π ∈ Π(μ,ν)} ( ∫ d_𝒳(x,y)^p dπ(x,y) )^1/p for p ∈ [1,∞),
𝒲_0(μ,ν) := inf_{π ∈ Π(μ,ν)} ∫ d_𝒳(x,y) ∧ 1 dπ(x,y) for p = 0,

and we recall the well-known fact that 𝒲_0 metrizes the topology of weak convergence on 𝒫(𝒳).

Let μ,ν ∈ 𝒫(∏_k=1^N 𝒳_k), where the (𝒳_k, d_𝒳_k) are Polish metric spaces. For ℓ, k ∈ ℕ, ℓ, k ≤ N, we will write x_ℓ:k to denote the vector x_ℓ,…,x_k and similarly write 𝒳_ℓ:k for the product 𝒳_ℓ × … × 𝒳_k. In the arguments below it will be important to keep track of properties of successive disintegrations of probability measures such as in (<ref>). To this end we denote by K^μ_k : 𝒳_1:k-1 → 𝒫(𝒳_k) a regular disintegration of pr^1:k_#μ w.r.t. the first k-1 coordinates, and remark that K^μ_k is Borel measurable. Further, we use the convention that K^μ_1(x_1:0; dx_1) = pr^1_#μ(dx_1). With this notation in hand we can rewrite (<ref>) as

μ(dx_1:N) = ⊗_k=1^N K^μ_k(x_1:k-1; dx_k).

Recall that μ ∈ 𝒫(ℝ^N) is triangular regular if μ-almost surely, for k = 1,…,N, K^μ_k takes values in the set of regular measures, that is, K^μ_k(x_1:k-1) ∈ {ρ ∈ 𝒫(ℝ) : ρ is non-atomic} for μ-a.e. x.

A coupling π ∈ Π(μ,ν) is bicausal if for all k ≤ N the disintegration satisfies, for π-a.e. ((x_ℓ,y_ℓ)_ℓ=1^N) ∈ ∏_k=1^N 𝒳_k × 𝒳_k,

K^π_k((x_ℓ,y_ℓ)_ℓ=1^k-1) ∈ Π(K^μ_k(x_1:k-1), K^ν_k(y_1:k-1)).

The set of bicausal transport plans in Π(μ,ν) is compact, see e.g. <cit.>. The p-adapted Wasserstein distance between μ and ν is given by

AW_p(μ,ν) := inf_{π ∈ Π_bc(μ,ν)} ( ∫ d_𝒳(x,y)^p dπ(x,y) )^1/p for p ∈ [1,∞),
AW_0(μ,ν) := inf_{π ∈ Π_bc(μ,ν)} ∫ d_𝒳(x,y) ∧ 1 dπ(x,y) for p = 0.

We recall that AW_0 metrizes the adapted weak topology on 𝒫(𝒳_1:N), see e.g. <cit.>. The gluing μ ⊗̇ ν of measures μ ∈ 𝒫(𝒜 × ℬ) and ν ∈ 𝒫(ℬ × 𝒞) that share a common marginal pr^2_#μ = pr^1_#ν is defined by μ ⊗̇ ν := μ ⊗ K^ν_1, where

(μ ⊗ K^ν_1)(A × C) = ∫_{A × ℬ} K^ν_1(b; C) dμ(a,b) for all measurable A ⊆ 𝒜, C ⊆ 𝒞.

§.§ The quantile process

The next lemma establishes the correspondence between quantile processes and probabilities on ℝ^N.

Lemma. Let μ ∈ 𝒫(ℝ^N).
Then we have:

* The process Q^μ given by Q^μ_k(u_1:k) := Q^{K^μ_k(Q^μ_1:k-1(u_1:k-1))}(u_k) satisfies (inc) and (con).

* If Q ∈ L_0(λ^N;ℝ^N) satisfies (inc) and (con), then Q^μ = Q for μ = Q_#λ^N.

In the proof of Lemma <ref> we need to derive certain measurability properties from the consistency condition (con). To this end, we first establish an auxiliary lemma that provides an adequate connection. Even though the assertion of Lemma <ref> is trivial for discrete μ, we have to use some machinery of descriptive set theory in the general case.

Lemma. Let μ ∈ 𝒫(𝒳) and let f, g : 𝒳 → 𝒴 be measurable such that for all x_1,x_2 ∈ 𝒳, f(x_1) = f(x_2) implies g(x_1) = g(x_2). Then there exists a Borel measurable function h : 𝒴 → 𝒳 such that g = g ∘ h ∘ f μ-almost surely.

Proof. Since 𝒳 is Polish and f is measurable, the graph of f is also Borel measurable. By the Jankow-von Neumann uniformization theorem there is an analytically measurable function h̃ : 𝒴 → 𝒳 with f = f ∘ h̃ ∘ f. Using our assumption on g, we get that g = g ∘ h̃ ∘ f. Replacing h̃ by a Borel measurable function h such that h̃ = h f_#(μ)-almost surely, we have found a function with the desired property.

Proof (of Lemma <ref>). The first assertion is obvious from the definition of Q^μ. If Q satisfies (inc) and (con), then π := (𝕀,Q)_#λ^N ∈ Π_bc(λ^N, μ). Indeed, let U be uniformly distributed on (0,1)^N, X = Q(U), and fix k ∈ {1,…,N}. By Lemma <ref> there is a measurable function h : ℝ^k-1 → (0,1)^k-1 such that almost surely X_1:k-1 = Q_1:k-1 ∘ h(X_1:k-1). Since Q is consistent we have

X_k = Q_k(U) = Q_k(h(X_1:k-1), U_k),

whence X_k is (X_1:k-1, U_k)-measurable. Since U_1:k-1 is independent of U_k, this proves that X_k is conditionally independent of U_1:k-1 given X_1:k-1. On the other hand, since X_1:k-1 is U_1:k-1-measurable, we have independence of U_k and (U_1:k-1, X_1:k-1). To summarize, using these conditional independence properties we have shown that almost surely

K_k^π((U_ℓ, X_ℓ)_ℓ=1^k-1) = Law(U_k, X_k | (U_ℓ, X_ℓ)_ℓ=1^k-1) ∈ Π(λ, K^μ_k((X_ℓ)_ℓ=1^k-1)).

We conclude that π is bicausal. It was established in <cit.> that Q^μ is the only triangular increasing map that induces a coupling in Π_bc(λ^N, μ); hence Q = Q^μ.

As a direct corollary of Lemma <ref> we obtain Lemma <ref>. From now on, we will denote by Q^μ the map Q ∈ L_0(λ^N;ℝ^N) that is uniquely determined by (inc), (con), and Q_#(λ^N) = μ.

Lemma. Let μ ∈ 𝒫(ℝ^N) and Q ∈ L_0(λ^N;ℝ^N) with Q_#(λ^N) = μ.

* Q = Q^μ iff Q is triangular increasing and (𝕀,Q)_#λ^N ∈ Π_bc(λ^N, μ).

Additionally, assume that μ is triangular regular. Then:

* Q = Q^μ iff Q is triangular increasing.

Proof. The first assertion was shown in <cit.>. To see the second assertion, recall that when ρ ∈ 𝒫(ℝ) is regular, the quantile function Q^ρ is the unique increasing map that pushes λ to ρ. When μ is triangular regular, its disintegration K^μ_k consists μ-almost surely of regular measures. Assume that Q is triangular increasing. Then we have for k = 1 that Q_1 pushes λ to K^μ_1; thus Q_1 = Q^{K^μ_1} = Q^μ_1. Given Q_1:k = Q^μ_1:k, we can repeatedly apply this reasoning in an induction to obtain Q_k+1 = Q^μ_k+1.

We recall from Definition <ref> that the Knothe–Rosenblatt coupling of two distributions μ,ν ∈ 𝒫(ℝ^N) is given by KR_μ,ν = Law(Q^μ, Q^ν).

Lemma. Let μ, ν ∈ 𝒫(ℝ^N) and π ∈ Π(μ, ν). Then π is the Knothe–Rosenblatt coupling of μ and ν if and only if π satisfies (<ref>). In particular, 𝒲_p ≤ AW_p ≤ KR_p.

Proof. It follows from (<ref>) and the definition of the quantile coupling on the real line that KR_μ,ν satisfies (<ref>). If π satisfies (<ref>), then the disintegrations K^π_k and K^{KR_μ,ν}_k coincide π-almost surely for k = 1,…,N.
As the disintegration uniquely determines a coupling, we conclude that π = KR_μ,ν.

§ THE KNOTHE–ROSENBLATT DISTANCE INDUCES THE ADAPTED WEAK TOPOLOGY

We have seen in Lemma <ref> that the AW_p-topology is coarser than the KR_p-topology. The main goal of this section is to establish their equivalence on 𝒫_p(ℝ^N). The proof that we give in this section directly makes use of the concept of modulus of continuity for measures introduced by Eder in <cit.> and quantifies how much KR_p and AW_p differ from each other. One step in this argument (in equation (<ref>) below) utilizes the 1-dimensional fact that the composition of monotone couplings is again monotone, and thus optimal for Wasserstein distances. This property breaks down for the generalized Knothe–Rosenblatt distance KR_p on 𝒫_p((ℝ^d)^N), d ≥ 2, introduced in Subsection <ref>. To cover also this extension, we provide a different proof in Appendix <ref> that is based on a characterization of compact sets.

Before diving into the proof, we introduce the modulus of continuity of a measure μ ∈ 𝒫_p(𝒜 × ℬ) by

ω_μ(δ) := sup{ 𝔼[d_ℬ(Y,Ỹ)^p]^1/p : (X,Y), (X̃,Ỹ) ∼ μ, 𝔼[d_𝒜(X,X̃)^p]^1/p ≤ δ } for δ ≥ 0.

Formally, this definition depends on p, but we suppress this dependency. It is clear from the definition that ω_μ is increasing, and it is also straightforward to establish that ω_μ(0) = 0 if and only if μ is concentrated on the graph of a measurable function f : 𝒜 → ℬ. A strengthening of this, which is crucial for our purposes, was shown in <cit.>: there is a measurable function f : 𝒜 → ℬ such that μ is supported on the graph of f if and only if lim_{δ→0} ω_μ(δ) = 0.

Proof (of Theorem <ref>). Let μ,ν ∈ 𝒫_p(ℝ^N). We give the proof for p ≥ 1 first and show that

AW_p(μ,ν) ≤ KR_p(μ,ν) ≤ ∑_k=1^N f_μ^k(AW_p(μ,ν)),

where f_μ^k : ℝ^+ → ℝ^+ is a continuous function that vanishes at 0. The first inequality follows directly from Lemma <ref>. For δ ≥ 0 and k ∈ {2,…,N}, we set

ω^k_μ(δ) := ω_{(𝕀, K^μ_k)_#(pr^1:k-1_#μ)}(δ).

To see the second inequality, we fix k and consider the random variables X = Q^μ(U), Y = Q^ν(U), where U ∼ λ^N. Pick X̃ such that (X̃, Y) ∼ π ∈ Π_bc(μ,ν) and 𝔼[|X̃_1:k - Y_1:k|_p^p] = AW_p(μ_1:k, ν_1:k)^p. Then we have

𝔼[|X_k - Y_k|^p]^1/p = 𝔼[𝒲_p^p(K_k^μ(X_1:k-1), K_k^ν(Y_1:k-1))]^1/p
≤ 𝔼[𝒲_p^p(K_k^μ(X_1:k-1), K_k^μ(X̃_1:k-1))]^1/p + 𝔼[𝒲_p^p(K_k^μ(X̃_1:k-1), K_k^ν(Y_1:k-1))]^1/p
≤ ω_μ^k(𝔼[|X_1:k-1 - X̃_1:k-1|_p^p]^1/p) + 𝔼[|X̃_k - Y_k|^p]^1/p
≤ ω_μ^k(𝔼[|X_1:k-1 - Y_1:k-1|_p^p]^1/p + AW_p(μ,ν)) + AW_p(μ,ν),

where we used in (<ref>) the fact that the monotone rearrangement is 𝒲_p-optimal for p ≥ 1, followed by Minkowski's inequality in (<ref>), then the definition of ω_μ^k combined with optimality of π, and finally in (<ref>) the basic observation that AW_p(μ_1:k, ν_1:k) ≤ AW_p(μ,ν) as well as Minkowski's inequality. We define inductively for k = 2,…,N and δ ≥ 0

f_μ^1(δ) := δ and f_μ^k(δ) := ω_μ^k(δ + ∑_ℓ=1^k-1 f_μ^ℓ(δ)) + δ.

It follows from <cit.> that (f_μ^k)_k=1^N is a family of continuous functions that vanish at 0. Substituting with (<ref>) in (<ref>) leads us to

𝔼[|X_k - Y_k|^p]^1/p ≤ f_μ^k(AW_p(μ,ν)).

Employing first sub-additivity of x ↦ x^1/p and then (<ref>) we find

KR_p(μ,ν) = 𝔼[|X - Y|_p^p]^1/p ≤ ∑_k=1^N 𝔼[|X_k - Y_k|^p]^1/p ≤ ∑_k=1^N f_μ^k(AW_p(μ,ν)),

which yields (<ref>). Let (μ^n)_{n∈ℕ} be a sequence in 𝒫_p(ℝ^N) and μ ∈ 𝒫_p(ℝ^N). We derive from (<ref>) that μ^n → μ in KR_p if and only if μ^n → μ in AW_p, since f^k_μ vanishes at 0. This concludes the proof for p ≥ 1.

When p = 0, the estimate in (<ref>) may fail. Again, let (μ^n)_{n∈ℕ} be a sequence in 𝒫(ℝ^N) with AW_0-limit μ ∈ 𝒫(ℝ^N). Write π^n ∈ Π_bc(μ^n, μ) for an AW_0-optimal coupling.
Since tanh : ℝ → (-1,1) is a continuous bijection, T(x) := (tanh(x_1),…,tanh(x_N)) is an adapted bijection with adapted inverse. By <cit.>, couplings induced by adapted bijections with adapted inverse are bicausal. This means that (𝕀,T)_#μ ∈ Π_bc(μ, T_#μ) and (𝕀,T)_#μ^n ∈ Π_bc(μ^n, T_#μ^n), from which it follows that T_#π^n ∈ Π_bc(T_#μ^n, T_#μ) (as T_#π^n can be written as a gluing of bicausal couplings). Moreover, we have

lim_{n→∞} ∫ |T(x) - T(y)|_1 dπ^n(x,y) ≤ lim_{n→∞} 2N ∫ |x - y| ∧ 1 dπ^n(x,y) = 0,

since |tanh(x̂) - tanh(ŷ)| ≤ |x̂ - ŷ| ∧ 2 for x̂,ŷ ∈ ℝ. This shows that T_#μ^n → T_#μ in AW_1, thus also in KR_1. In terms of quantile processes, we have T ∘ Q^μ^n → T ∘ Q^μ in λ^N-probability, whence Q^μ^n → Q^μ in λ^N-probability. We have shown that μ^n → μ in KR_0. Since AW_0 ≤ KR_0, the reverse direction is trivial, which completes the proof.

Let p ≥ 1 and write 𝒫_p,L(ℝ^N) for the set of μ ∈ 𝒫_p(ℝ^N) which are triangular L-Lipschitz in the sense that K^μ_k : ℝ^k-1 → 𝒫_p(ℝ) is L-Lipschitz w.r.t. the p-Wasserstein distance on 𝒫_p(ℝ).

Theorem. Let p ≥ 1 and L > 0. On 𝒫_p,L(ℝ^N) the adapted Wasserstein distance and the Knothe–Rosenblatt distance are equivalent. That is, there exists a constant C > 0 such that for all μ, ν ∈ 𝒫_p,L(ℝ^N)

AW_p(μ, ν) ≤ KR_p(μ, ν) ≤ C · AW_p(μ, ν).

Proof. From the definition of the modulus of continuity it is clear that (using the notation ω_μ^k from (<ref>)) ω_μ^k(δ) ≤ Lδ. The claim then follows in view of (<ref>) and (<ref>).

The statement in <cit.> asserts that when a sequence of absolutely continuous measures μ_n ∈ 𝒫(ℝ^N), n ∈ ℕ, converges to μ in total variation, then the corresponding quantile processes Q^μ_n → Q^μ in probability. As a consequence of Theorem <ref> we obtain the following sharp result:

Corollary. Let μ_n, μ ∈ 𝒫(ℝ^N), n ∈ ℕ. Then μ_n → μ in KR_p if and only if Q^μ_n → Q^μ in L_p(λ^N;ℝ^N). In particular, this is the case when μ_n → μ in total variation and ∫ |x|_p^p dμ_n(x) → ∫ |x|_p^p dμ(x).

Proof. The first assertion follows directly from Theorem <ref>. To see the second assertion, note that by <cit.>

TV(μ,μ_n) ≤ AV(μ,μ_n) = inf_{π ∈ Π_bc(μ,μ_n)} π({(x,y) ∈ ℝ^N × ℝ^N : x ≠ y}) ≤ (2^N-1 - 1) TV(μ,μ_n),

i.e., convergence in total variation TV already entails convergence w.r.t. the so-called adapted variation AV. In turn, the adapted variation distance dominates the AW_0-metric, which yields convergence of μ_n → μ w.r.t. AW_0, and hence w.r.t. KR_0. Thus, we have by the first part that Q^μ_n → Q^μ in λ^N-probability. Hence, by convergence of their p-th moments, we find that Q^μ_n → Q^μ in L_p(λ^N;ℝ^N).

§ CONSEQUENCES OF THE ISOMETRY WITH L_p(λ^N; ℝ^N)

In this section we collect consequences of the isometry

ι : 𝒫_p(ℝ^N) → L_p(λ^N;ℝ^N), μ ↦ Q^μ

(which we already mentioned in (<ref>)). It represents a very useful tool for the investigation of the Knothe–Rosenblatt distance, as it allows one to transfer properties from (convex subsets of) L_p(λ^N;ℝ^N) to 𝒫_p(ℝ^N). As a first example, through the isometry ι it is evident that KR_p satisfies the triangle inequality, a fact that is otherwise rather tedious to prove.

§.§ Triangular monotonicity

Towards a better understanding of the metric completion of 𝒫_p(ℝ^N) equipped with KR_p, we study the notion of triangular monotonicity of adapted functions Q : (0,1)^N → ℝ^N. Recall that Q is called increasing triangular if there exists a λ^N-full set Γ ⊆ (0,1)^N such that for all u,v ∈ Γ and k ∈ ℕ, k ≤ N, we have

u_1:k-1 = v_1:k-1 and u_k ≤ v_k ⇒ Q_k(u_1:k) ≤ Q_k(v_1:k).

If (<ref>) remains additionally true when both ≤ are replaced by <, then Q is called strictly increasing triangular. Further, recall from the introduction that a measure μ ∈ 𝒫(ℝ^N) is called triangular regular if for μ-a.e.
x and k ∈ ℕ, k ≤ N,

K^μ_k(x_1,…,x_k-1) is regular (i.e., atom free).

In the following we establish that the set L_p^SIT(λ^N;ℝ^N) of strictly increasing triangular functions is closely related to the set of triangular regular probability measures.

Lemma. The following statements hold:

* The set of increasing triangular transformations L_p^IT(λ^N;ℝ^N) is a closed subset of L_p(λ^N;ℝ^N);

* The set of strictly increasing triangular transformations L_p^SIT(λ^N;ℝ^N) is dense in L_p^IT(λ^N;ℝ^N).

Proof. Clearly, the property of being increasing triangular is closed under L_p-convergence, which shows the first claim. On the other hand, if ϵ > 0 and Q is increasing triangular, so is Q^ϵ := Q + ϵ𝕀. Even more, Q^ϵ inherits from the identity map the property of being strictly increasing triangular. Since Q^ϵ → Q for ϵ → 0 in L_p, we deduce the second assertion.

Lemma. Let μ ∈ 𝒫(ℝ^N) be triangular regular. Then Q^μ ∈ L_p^SIT(λ^N;ℝ^N) and

{Q^μ} = {Q ∈ L^IT_p(λ^N;ℝ^N) : Q_#λ^N = μ}.

Proof. Assume that Q is an increasing triangular map with Q_#λ^N = μ. Let U = (U_1,…,U_N) ∼ λ^N and X = Q(U). Clearly, for k = 1 we have that Q_1 = Q^μ_1. So, let us assume that Q_1:k-1 = Q^μ_1:k-1 λ^N-almost surely. As Q^μ is strictly increasing triangular, this means that σ(U_1:k-1) = σ(Q^μ_1:k-1(U)) = σ(Q_1:k-1(U)) = σ(X_1:k-1), whence

K^μ_k(X_1:k-1) = Law(X_k | X_1:k-1) = Law(X_k | U_1:k-1).

Since Q is increasing triangular and μ triangular regular, we find that Q_k(U_1:k) = Q^{K^μ_k(X_1:k-1)}(U_k) = Q^μ_k(U_1:k) almost surely. We conclude that Q = Q^μ λ^N-almost surely.

Theorem. The isometry

ι : 𝒫(ℝ^N) → L(λ^N;ℝ^N), μ ↦ Q^μ,

has the following properties:

* ι({μ ∈ 𝒫(ℝ^N) : μ is triangular regular}) = L_p^SIT(λ^N;ℝ^N);

* Via the isometry ι, the metric completion of (𝒫_p(ℝ^N), KR_p) is L_p^IT(λ^N;ℝ^N);

* Restricted to L_p^SIT(λ^N;ℝ^N), the map Q ↦ Q_#λ^N is the inverse of ι.

Proof. <ref>: On the one hand, if μ is triangular regular, then we have by Lemma <ref> that Q^μ ∈ L^SIT_p(λ^N;ℝ^N). On the other hand, if Q ∈ L^SIT_p(λ^N;ℝ^N), then it is straightforward to verify that Q admits an adapted, almost sure right-inverse, i.e., an adapted map R : ℝ^N → (0,1)^N with R ∘ Q = 𝕀 λ^N-almost surely. Thus, the induced Monge coupling satisfies (𝕀,Q)_#λ^N = (R,𝕀)_#μ (where μ := Q_#λ^N). In <cit.> it was shown that couplings induced by adapted bijections with adapted inverse are bicausal, whence (𝕀,Q)_#λ^N ∈ Π_bc(λ^N, μ). But in <cit.> it was established that Q^μ is the unique element of L^IT_p(λ^N;ℝ^N) with (𝕀,Q^μ)_#λ^N ∈ Π_bc(λ^N; μ). Hence Q = Q^μ, which also proves <ref>.

<ref>: Since for every μ ∈ 𝒫(ℝ^N) the quantile process is increasing triangular, the assertion is a consequence of Proposition <ref>.

It is easy to see that the set of increasing triangular transformations is closed, but, at least to the authors, it is not immediate whether or not L^quantile_p(λ^N;ℝ^N) = {Q^μ : μ ∈ 𝒫_p(ℝ^N)} is G_δ. This question is positively answered in the subsequent corollary. It is somewhat remarkable that by a small detour in the reasoning this question becomes much easier to answer.

Corollary. The set L^quantile_p(λ^N;ℝ^N) is a G_δ-subset of L_p(λ^N;ℝ^N), and it is homeomorphic to 𝒫_p(ℝ^N) when the latter is endowed with the p-th Knothe–Rosenblatt topology.

Proof. By Theorem <ref>, the space 𝒫_p(ℝ^N) equipped with the p-adapted Wasserstein topology is Polish. Since the p-Knothe–Rosenblatt topology and the p-adapted Wasserstein topology coincide (cf. Theorem <ref>), we have that also {Q^μ : μ ∈ 𝒫_p(ℝ^N)} is a Polish space. Hence, we conclude by recalling that a subset of a Polish space endowed with the trace topology is itself Polish if and only if it is G_δ, see <cit.>.

§.§ Geodesics

This section is concerned with providing the missing bits to complete Corollary <ref>.
Based on the isometry (<ref>), which was shown in <Ref>, it is sufficient to establish the following lemma:

Lemma. The sets L_p^SIT(λ^N;ℝ^N), L_p^quantile(λ^N;ℝ^N), L_p^IT(λ^N;ℝ^N) are convex.

Proof. The only non-trivial claim is the convexity of L_p^quantile(λ^N;ℝ^N), which we proceed to show. Assume that T, S : (0,1)^N → ℝ^N satisfy (inc) and (con) and let α ∈ (0,1). We claim that (1-α)T + αS also satisfies (inc) and (con). Clearly, (1-α)T + αS satisfies (inc), so it remains to prove (con). To this end, fix k < N and u_1,…,u_N, u'_k ∈ (0,1) with u_k < u'_k and assume that

((1-α)T + αS)_k(u_1,…,u_k) = ((1-α)T + αS)_k(u_1,…,u'_k).

If either T_k(u_1,…,u_k) < T_k(u_1,…,u'_k) or S_k(u_1,…,u_k) < S_k(u_1,…,u'_k), then also

((1-α)T + αS)_k(u_1,…,u_k) < ((1-α)T + αS)_k(u_1,…,u'_k),

which contradicts (<ref>). Therefore, (<ref>) implies that

T_k(u_1,…,u_k) = T_k(u_1,…,u'_k) and S_k(u_1,…,u_k) = S_k(u_1,…,u'_k).

Under (<ref>), consistency of S and T yields

((1-α)T + αS)_m(u_1,…,u_k,…,u_m) = ((1-α)T + αS)_m(u_1,…,u'_k,…,u_m).

We have shown that (con) holds for (1-α)T + αS, which concludes the proof.

§.§ On the preservation of probabilistic properties along geodesics

Assume that p ∈ (1,∞). Then KR_p-geodesics are unique as a consequence of the isometry (<ref>). Every such geodesic (μ_t)_{t∈[0,1]} is given by the McCann interpolation induced by the Knothe–Rosenblatt coupling π = KR_μ_0,μ_1 ∈ Π(μ_0, μ_1), i.e.

μ_t = ((1-t) pr^1 + t pr^2)_#π.

The sets of martingales and predictable processes are closed under interpolations of the form (<ref>) for bicausal π, see e.g. <cit.>. As in the case of adapted Wasserstein geodesics, Knothe–Rosenblatt geodesics do not preserve the Markov property.

Example. Let μ_0 = 1/2(δ_(1,1,0) + δ_(0,0,1)) and μ_1 = 1/2(δ_(1,0,0) + δ_(0,1,1)). Then we have

Q^μ_0(u) = 1_(0,1/2)(u_1) (0,0,1) + 1_[1/2,1)(u_1) (1,1,0),
Q^μ_1(u) = 1_(0,1/2)(u_1) (0,1,1) + 1_[1/2,1)(u_1) (1,0,0).

The quantile process Q corresponding to the midpoint μ_1/2 of μ_0 and μ_1 is given by

Q(u) = 1/2 (Q^μ_0(u) + Q^μ_1(u)) = 1_(0,1/2)(u_1) (0, 1/2, 1) + 1_[1/2,1)(u_1) (1, 1/2, 0).

The law μ_1/2 is given by 1/2(δ_(0,1/2,1) + δ_(1,1/2,0)), which is clearly the law of a non-Markovian process.
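The midpoint law in this example is easy to confirm numerically. The snippet below is our own illustration and assumes the kr_quantile_process helper from the sketch in the introduction; it samples the midpoint quantile process Q = (Q^μ_0 + Q^μ_1)/2 and recovers the two atoms of μ_1/2.

import numpy as np

# mu_0 and mu_1 from the example; both put weight 1/2 on each atom
atoms0 = np.array([[1.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
atoms1 = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 1.0]])
w = np.array([0.5, 0.5])

rng = np.random.default_rng(1)
for _ in range(4):
    u = rng.random(3)
    q = 0.5 * (kr_quantile_process(atoms0, w, u)
               + kr_quantile_process(atoms1, w, u))  # midpoint quantile process
    print(q)  # prints (0, 0.5, 1) or (1, 0.5, 0), the two atoms of mu_{1/2}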
§.§ Comparison of the KR_p- and the AW_p-completion

Although KR_p and AW_p are topologically equivalent, rather unsurprisingly their metric completions do not coincide. Recall that, up to isometry, the completion of a metric space consists of all Cauchy sequences (where two Cauchy sequences are identified if mixing them yields again a Cauchy sequence). In this section we will show that the completions w.r.t. KR_p and AW_p are mutually non-comparable by establishing the following two propositions:

Proposition. Let p ∈ {0} ∪ [1,∞) and N ≥ 2. There is an AW_p-Cauchy sequence μ_n ∈ 𝒫_p(ℝ^N), n ≥ 1, which contains no KR_p-Cauchy sequence, i.e. inf{KR_p(μ_k, μ_n) : k ≠ n} > 0.

Intuitively speaking, Proposition <ref> asserts that the AW_p-completion has a point which is not in the KR_p-completion. In particular, 𝒫_p(ℝ^N)^AW_p ⊈ 𝒫_p(ℝ^N)^KR_p. Rigorously this means that there is no continuous mapping from 𝒫_p(ℝ^N)^KR_p onto 𝒫_p(ℝ^N)^AW_p which is the identity on 𝒫_p(ℝ^N).

Proposition. Let p ∈ {0} ∪ [1,∞) and N ≥ 2. There is an AW_p-Cauchy sequence μ_n ∈ 𝒫_p(ℝ^N), n ≥ 1, which is not a KR_p-Cauchy sequence but contains two KR_p-Cauchy sequences.

Again on an intuitive level, Proposition <ref> asserts that there are points p, p' in the KR_p-completion which are identified in the AW_p-completion. In particular, 𝒫_p(ℝ^N)^KR_p ⊈ 𝒫_p(ℝ^N)^AW_p. Rigorously this means that there is no continuous mapping from 𝒫_p(ℝ^N)^AW_p onto 𝒫_p(ℝ^N)^KR_p which is the identity on 𝒫_p(ℝ^N).

Proof (of Proposition <ref>). Without loss of generality we show the assertion for N = 2 and remark that the general statement (for N ≥ 2) simply follows by adequately embedding. Fix n ∈ ℕ. For u ∈ (0,1) write d_n(u) for the n-th digit in the binary digit expansion of u. Define the quantile process Q^n by Q^n_1(u_1) = u_1, Q^n_2(u_1, u_2) = d_n(u_1), and let μ_n := Q^n_#(λ^2). Writing X^n = Q^n(U) for U ∼ λ^2, we have for p ≥ 1

KR_p(μ_n, μ_m) = 𝔼[|X^n - X^m|_p^p]^1/p = ℙ[X^n ≠ X^m]^1/p = 0 if n = m, and = (1/2)^1/p if n ≠ m,

and similarly for p = 0. However, it is easy to see that (μ_n)_{n∈ℕ} constitutes an AW_p-Cauchy sequence.

Proof (of Proposition <ref>). Again, we can assume without loss of generality that N = 2. We define the quantile processes Q^n(u) = (Q^n_1(u_1), Q^n_2(u_1,u_2)) via

Q^n_1(u_1) := sgn(u_1 - 1/2)/n and Q^n_2(u_1,u_2) := (-1)^n sgn(u_1 - 1/2),

where u = (u_1,u_2) ∈ (0,1)^2. Since Q^n satisfies (inc) and (con), we have by Lemma <ref> that Q^n ∈ L_p^quantile(λ^2;ℝ^2). We write μ_n = Q^n_#(λ^2). It is then straightforward to see that

* (μ_n)_{n∈ℕ} constitutes an AW_p-Cauchy sequence;
* (μ_2n)_{n∈ℕ} and (μ_2n+1)_{n∈ℕ} are both KR_p-Cauchy sequences;
* (μ_n)_{n∈ℕ} is not a KR_p-Cauchy sequence.

Hence, we have found a sequence with the desired properties.
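The separation computed in the first proof can also be checked by simulation. The sketch below is our own illustration (the helper digit is ours); since Q^n and Q^m share the first coordinate, KR_1(μ_n, μ_m) reduces to 𝔼|d_n(U_1) - d_m(U_1)|, which is 1/2 for n ≠ m.

import numpy as np

def digit(u, n):
    # n-th binary digit d_n(u) of u in (0,1), n = 1, 2, ...
    return np.floor(u * 2 ** n).astype(int) % 2

rng = np.random.default_rng(0)
u1 = rng.random(200_000)

for n, m in [(1, 2), (2, 5), (3, 3)]:
    # KR_1(mu_n, mu_m) = E|Q^n(U) - Q^m(U)|_1 = E|d_n(U_1) - d_m(U_1)|
    print(n, m, np.abs(digit(u1, n) - digit(u1, m)).mean())
    # approximately 0.5 when n != m, exactly 0.0 when n == m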
The reverse direction was shown in <cit.>. On the real line it is well-known that the map ρ↦ Q^ρ is continuous with domain 𝒫() and co-domain L^0(0,1). To cover also the multi-dimensional Knothe–Rosenblatt rearrangement givenin <Ref>, cf. <Ref>, we a give proof of the following generalization of this fact.Let p > 1 and S 𝒫_p(ℝ^d) → L^p(λ^N;ℝ^d) be a selector of 𝒲_p-optimal transport maps between λ^d and ρ. Then S is continuous.By <cit.> we know that (𝕀,S(ρ))_#λ^d is the unique 𝒲_p-optimal coupling from λ^d to ρ. Hence, we deduce from stability of optimal transport, see e.g. <cit.>, that whenever ρ^k →ρ in 𝒫_p(ℝ^d) we also have (𝕀,S(ρ^k))_#λ^d → (𝕀,S(ρ))_#λ^d in 𝒫_p(ℝ^d ×ℝ^d). Then the assertion follows from Lemma <ref>. §.§ On compact sets in the Knothe–Rosenblatt topologyIn this section we provide an alternative proof of <Ref> which is also applicable in the multi-dimensional setting introduced in <Ref>. We begin by providing a characterization of relatively compact sets: Let 𝒦⊆𝒫_p(ℝ^N), p∈{0}∪ [1,∞).Then the following are equivalent: *𝒦 is _p-relatively compact; *𝒦 is _p-relatively compact. By <Ref> we have _p ≥_p ≥_p, so that <ref><ref>.To see the reverse implication, let (μ_n)_n ∈ be an _p-convergent sequence with limit μ∈𝒫_p(^N). Write X^n = Q^μ_n(U), n ∈, and X = Q^μ(U) where U is uniformly distributed on (0,1)^N. It suffices to show that X^n → X in probability.We proceed to show this claim byinduction: For k = 1 we have that X^n_1 = Q^K^μ_n_1(U_1) → Q^K^μ_1(U_1) = X_1 almost surely, since μ_n →μ in _p and therefore K^μ_n_1 = ^1_#μ_n →^1_#μ = K^μ_1 in 𝒫_p(). Next, assume that X^n_1:k-1→ X_1:k-1 in probability. It was established in <cit.> that the _p-topology can be characterized as the initial topology w.r.t. the family of maps𝒫_p(^N) →𝒫_p(^k-1×𝒫_p()) μ↦ (x ↦ (x_1:k-1, K_k^μ(x_1:k-1))),for k = 1, …, N. Hence, μ^n →μ in _p entails convergence of(X^n_1:k-1, Law(X^n_k | X^n_1:k-1)) → (X_1:k-1, Law(X_k | X_1:k-1)) in law.In particular, we get by independence of U_k that(X^n_1:k-1,U_k,Law(X^n_k| X^n_1:k-1) → (X_1:k-1, U_k, Law(X^n_k | X_1:k-1)) in law.Recall that for a sequence (ρ_n)_n ∈ in 𝒫() with ρ_n →ρ in the p-weak topology, we have that Q^ρ_n(U_k) → Q^ρ(U_k) w.r.t. ._p. Therefore, we have thatπ_n := Law(X^n_1:k-1,U_k,X^n_k ) → Law(X_1:k-1,U_k,X_k) =: π,in 𝒫_p(^1:k-1× (0,1) ×), where we used that X^n_k = Q^Law(X^n_k | X_1:k-1)(U_k) and X_k = Q^Law(X_k | X_1:k-1)(U_k). At the same time, we have by the inductive assumption that γ_n := Law(U_1:k-1, X_1:k-1^n), n ∈, converges to Law(U_1:k-1, X_1:k-1) =: γ in 𝒫_p((0,1)^k-1×^k-1). Observe that π∈ℱ_p(^k-1× (0,1) ). Thus, by continuity of gluing at (γ,π), see <cit.>, we have thatγ^n ⊗ K^π^n_1 →γ⊗ K^π_1in 𝒫_p((0,1)^k-1×ℝ^k-1× (0,1) ×ℝ),where we view (x_1:k-1,u_k) ∈ℝ^k-1× (0,1) as the first coordinate of both, π^n and π. Put differently, we have shown that in distribution(U_1:k,X^n_1:k) → (U_1:k,X_1:k).Therefore, we can invoke Lemma <ref> to obtain that X^n_1:k→ X_1:k in probability, which concludes the induction step.By Proposition <ref> we have that a set 𝒦⊆𝒫_p(ℝ^N) is _p-relatively compact if and only if it is _p-relatively compact. Since the topologies induced by _p and _p are both sequential as well as Hausdorff, this means that the topologies coincide.§.§ Guide to the case d>1. In this section, we elaborate on the claims of Section <ref>. We omit giving detailed proofs as most are analogue to the case d=1 with the obvious modifications. Nevertheless, to resolve possible doubts we included a sketch of the arguments below. 
Recall Definition <ref> given in Section <ref>. Further, recall that by Corollary <ref> the unique selector S : 𝒫_p(ℝ^d) → L_p(λ^d;ℝ^d), where S(ρ) is the 𝒲_p-optimal transport map from λ^d to ρ, is continuous. Analogously to Lemma <ref> we introduce

Q^μ_k(u_1:k) := S(K^μ_k(Q^μ_1:k-1(u_1:k-1)))(u_k),

and find that Q^μ satisfies (|·|_p-mon) and (con). Reasoning as in the proof of Lemma <ref>, it follows that Q ∈ L_p((λ^d)^N;(ℝ^d)^N) satisfies (|·|_p-mon), (con), and Q_#(λ^d)^N = μ if and only if Q^μ = Q. We conclude that for each μ there is a unique quantile process and μ ↦ Q^μ is an isometry from (𝒫_p((ℝ^d)^N), KR_p) to (L_p((λ^d)^N;(ℝ^d)^N), ‖·‖_p). For μ,ν ∈ 𝒫_p((ℝ^d)^N) the multi-dimensional Knothe–Rosenblatt coupling KR_μ,ν is given by (Q^μ,Q^ν)_#(λ^d)^N. Therefore, the successive disintegration of KR_μ,ν is KR_μ,ν-almost surely given by

K^{KR_μ,ν}_k((x_ℓ,y_ℓ)_ℓ=1^k-1) = Law( S(K^μ_k(x_1:k-1))(U), S(K^ν_k(y_1:k-1))(U) ),

where U is uniformly distributed on (0,1)^d. It follows directly from this representation that KR_μ,ν ∈ Π_bc(μ,ν) and in particular AW_p ≤ KR_p.

Finally, the proof of Proposition <ref> carries over to the multi-dimensional setting thanks to Corollary <ref>. Then the same reasoning as in the second proof of Theorem <ref> applies. The first proof of Theorem <ref> directly carries over. Indeed, note that (<ref>) holds by definition of KR^to_p, while all other steps did not use one-dimensional ingredients.

§.§ On KR^to_p

Let p ∈ [1,∞) and recall the definition of KR^to_p from (<ref>). Theorem <ref> asserts that KR^to_p(μ_n, μ) → 0 if and only if AW_p(μ_n, μ) → 0. This follows from the same argument we used for the case d = 1 in Section <ref>. However, as the next lemma shows, KR^to_p is just a semimetric but not a metric, i.e., the triangle inequality is not satisfied in general.

Lemma. Let d, N ∈ ℕ, d,N > 1, and p ≥ 1. Then the semimetric KR^to_p on 𝒫_p((ℝ^d)^N) does not satisfy the triangle inequality.

Proof. We prove this by providing a counterexample for d = 2 and N = 2, and remark that the counterexample can easily be embedded into higher-dimensional spaces. Let ε > 0, and define the probability measures μ,ν,η ∈ 𝒫_p(ℝ^4) as

μ := 1/2(δ_((-1,0),(2,0)) + δ_((1,0),(-2,0))),
ν := 1/2(δ_((-ε,1),(2,0)) + δ_((ε,-1),(-2,0))),
η := 1/2(δ_((ε,1),(2,0)) + δ_((-ε,-1),(-2,0))).

These measures are visualised in Figures <ref> and <ref>. Figure <ref> shows their support projected onto the first two coordinates; the labels of the points signify the second set of coordinates. Between any two of these measures there exists exactly one triangular optimal coupling, cf. (<ref>). These couplings are of Monge type and are illustrated in Figure <ref>. We write T_μν for the optimal map from μ to ν; the red arrows indicate how the mass is moved. By construction the coupling transports the first ℝ^2-coordinates optimally and thus only tries to minimize the movement of the points in the plane, ignoring the second set of ℝ^2-coordinates. The transport in the second step is then predetermined by the transport in the plane, since the conditional measures are all Diracs.

We have

T_μη((-1,0),(2,0)) = ((-ε,-1),(-2,0)), T_μη((1,0),(-2,0)) = ((ε,1),(2,0)),

and get

KR^to_p(μ,η) = (|1-ε|^p + 1 + 4^p)^1/p ≥ 4.

Furthermore, we have

T_μν((-1,0),(2,0)) = ((-ε,1),(2,0)), T_μν((1,0),(-2,0)) = ((ε,-1),(-2,0)),

which implies that

KR^to_p(μ,ν) = (|1-ε|^p + 1)^1/p.

Finally, we have

T_νη((-ε,1),(2,0)) = ((ε,1),(2,0)), T_νη((ε,-1),(-2,0)) = ((-ε,-1),(-2,0)),

whence

KR^to_p(ν,η) = 2ε.

Thus, when ε ≤ 1 we find

KR^to_p(μ,ν) + KR^to_p(ν,η) = ((1-ε)^p + 1)^1/p + 2ε < 4 < KR^to_p(μ,η),

which violates the triangle inequality.

| http://arxiv.org/abs/2312.16515v1 | {
"authors": [
"Mathias Beiglböck",
"Gudmund Pammer",
"Alexander Posch"
],
"categories": [
"math.PR"
],
"primary_category": "math.PR",
"published": "20231227104700",
"title": "The Knothe-Rosenblatt distance and its induced topology"
} |
The dynamics of strongly interacting particles are governed by Yang-Mills (Y-M) theory, which is a natural generalization of Maxwell Electrodynamics (ED). Its quantized version is known as quantum chromodynamics (QCD) <cit.> and has been very well studied. Classical Y-M theory is proving to be equally interesting because of the central role it plays in describing the physics of quark-gluon plasma (QGP), which was prevalent in the early universe and is also produced in relativistic heavy-ion collision experiments. This calls for a systematic study of classical Y-M theories. A good insight into classical Y-M dynamics would be best obtained by comparing and contrasting the Y-M results with their ED counterparts. In this article, a beginning has been made by considering streaming instabilities in Y-M fluids. We find that in addition to analogues of ED instabilities, novel nonabelian modes arise, reflecting the inherent nonabelian nature of the interaction. The new modes exhibit propagation/growth, with growth rates that can be larger than what we find in ED. Interestingly, we also find a mode that propagates without getting affected by the medium.

§ INTRODUCTION

It is well known that four fundamental interactions essentially govern the dynamics of the universe. Among them, electrodynamics is uniquely placed: the theories of the remaining three interactions are modelled by generalising its central property, viz., gauge invariance. This generalization is quite nontrivial and leads to many novel features such as self-interaction and inherent nonlinearity. Given this, it is natural to enquire if phases analogous to that of ED plasma exist for these interactions. They are ruled out for gravity because it is purely attractive, while it is well nigh impossible to explore them in weak interactions[as indicated by the very name, they are orders of magnitude weaker than electrodynamics (which is itself a hundred times weaker than the strong force) and are also even more short ranged than strong interactions.]. We are thus left with only one candidate, viz., the strong interaction, for our study. It turns out that it does furnish the analogue of ED plasma.

Quark Gluon Plasma (QGP) <cit.> is the exact counterpart of electrodynamic plasma. Recall that hadrons such as protons, neutrons, and mesons are bound states of quarks or of quark-antiquarks; QGP is their deconfined phase[There is, however, one fundamental difference.
While particles carrying electric charges are observed directly in experiments, free quarks and gluons, which carry the strong charge, have not been detected directly. Thus, deconfinement is similar to the metal-insulator transition, where the electron states get delocalised within the material but are still bound to the lattice as a whole.]. The early universe, just before hadronization, was in this phase. More pertinently for observations, the QGP is also produced in relativistic heavy-ion collision experiments <cit.> in accelerators, and has been a subject of much study. It is of importance to us that there is good evidence that many of the QGP properties, especially those involving flow, can be described in the classical language <cit.>. In this context, an ab initio study of streaming instabilities could be a useful starting point for studying Y-M dynamics.

There is a large body of earlier works where classical Y-M theory has been studied in the context of QGP <cit.>. Some examples are thermalization of QGP, hydrodynamic evolution and Debye screening <cit.> in an expanding plasma by employing a Vlasov description, Weibel-like instabilities in nucleon-nucleon collisions, and so forth <cit.>. Almost all of them are tailor-made to address specific issues pertaining to heavy-ion collisions, and they borrow heavily from QCD (or QCD-inspired theories) and phenomenology. They are also largely restricted to studying direct ED analogues such as screening lengths and modes which are insensitive to the intrinsic nonabelian nature of the interaction.

In contrast, we propose to initiate an ab initio study of streaming instability in Y-M fluids. As already mentioned, we expect these to be of great relevance to heavy-ion collisions. Yet another motivation, though rather academic, is that ED streaming instabilities <cit.> have received a new fillip in laser interaction with overdense plasmas <cit.>. High-intensity lasers would introduce nonlinearity, while Y-M theories are intrinsically nonlinear. A comparison between the two nonlinearities should also be of interest.

The theory governing quark-gluon interactions is what is known as an SU(3) gauge theory. Its classical version would be a theory described in a phase space which is extended to include an additional eight-dimensional internal space. The charges, called colour, are vectors in the internal space. To avoid unnecessary group-theoretic complications, we study its simpler version, an SU(2) gauge theory. Here the internal space is only three-dimensional, because of which it is intuitively simpler to visualise the nonabelian features of the theory. We perform the analysis entirely in the linear response regime. Yet, we see the emergence of new modes which have no ED analogues, and whose dispersion relations are qualitatively different.

The manuscript has been organized into eight sections. We have deliberately chosen to provide a detailed description of the Y-M system and have provided comparisons with the ED plasma wherever pertinent, to cater to both the plasma and high-energy physics communities. For this purpose, in section <ref> we introduce the governing equations of the classical Yang-Mills fields. The dynamical equations are adapted to a fluid depiction for the evolution of momenta and colour charge.
The equations are, of course, dimensionally consistent; using this, in section <ref> we identify normalizations (convenient units) and relevant scales. Armed with these, we introduce in section <ref> two counter-streaming Y-M fluids in equilibrium: the total colour charge density and momentum are zero. We then perform a linear perturbation analysis. As mentioned, we find that in addition to the conventional ED-like plasma modes, a novel set of growing modes arises, entirely owing to the nonabelian nature of the dynamics. There is yet another propagating mode which is completely blind to the plasma. In sections <ref>, <ref> and <ref>, we analyse the results in detail and identify regimes in which the ED-like and the novel modes dominate. These regimes would help us in experimentally verifying Y-M predictions.

§ CLASSICAL YANG-MILLS THEORY

Yang-Mills theory <cit.> generalizes Maxwell's theory in two interrelated ways. First, the charge (called colour) is a vector in an internal space and acts as a dynamical variable. Similarly, the colour electric and magnetic fields are also vectors in the internal space. Secondly, gauge transformations, apart from acting on the gauge potentials, also act on the vectors in internal space. In our case, they are just rotations. We say that these quantities transform covariantly under gauge transformations. Since three-dimensional rotations do not commute, Y-M theory is called nonabelian. As a consequence of the nonabelian dynamics, the fields themselves carry charge and become self-interacting, which, in turn, makes the theory nonlinear. Thus charge conservation should take the flow of charge between the field and matter into account. Finally, unlike in ED, gauge potentials are a necessity for the very formulation of the theory. To make these notions clearer, and for the sake of pedagogy, we first present a brief review of Y-M equations in subsection <ref>. As mentioned, they are written for the SU(2) gauge theory.

§.§ YM dynamics: a brief review

Yang-Mills equations for SU(2) gauge theory are given by

∂_i E⃗_i - g A⃗_i × E⃗_i = 4π ρ⃗,
ϵ_ijk ∂_j B⃗_k - g ϵ_ijk A⃗_j × B⃗_k - (1/c) ∂_0 E⃗_i - g ϕ⃗ × E⃗_i = (4π/c) J⃗_i,
∂_i B⃗_i - g A⃗_i × B⃗_i = 0,
ϵ_ijk ∂_j E⃗_k - g ϵ_ijk A⃗_j × E⃗_k + (1/c) ∂_0 B⃗_i + g ϕ⃗ × B⃗_i = 0.

Eqs (<ref>)-(<ref>) merit some explanation. Note the occurrence of both the vector sign and Latin indices (as subscripts). The former indicates that the quantities have three components in the internal colour space; the Latin indices represent spatial components. Also note that the gauge potentials ϕ⃗, A⃗_i (which generalise the usual scalar and vector potentials) appear explicitly in the equations. This is completely unlike what we find in Maxwell's equations. As usual, derivatives with respect to the space and time variables are denoted by ∂_i = ∂/∂x_i and ∂_0 = ∂/∂t respectively. A la ED, the colour charge and current densities are denoted by ρ⃗ and J⃗_i. Similarly, E⃗_i and B⃗_i represent the colour electric and magnetic fields. Finally, note the appearance of the coupling constant g, which is the measure of Y-M dynamics. If we were to set g = 0, the equations would collapse to a triplet of uncoupled Maxwell fields.

Equations (<ref>)-(<ref>) get completed by specifying how the gauge potentials generate the fields. They are given by

E⃗_i = -∂_i ϕ⃗ - (1/c) ∂_0 A⃗_i + g A⃗_i × ϕ⃗,
B⃗_i = ϵ_ijk ∂_j A⃗_k - g ϵ_ijk A⃗_j × A⃗_k.

Once we employ these definitions, Eqs (<ref>)-(<ref>) become mathematical identities. The Y-M equations inform us on how the sources generate the fields.
To know how the colour charges respond to the fields, equations of motion need to be added. As mentioned, colour is a dynamical variable. Hence, in addition to the counterpart of the Lorentz force, we have an equation that governs the evolution of the charge vector, known as the Wong equation. Explicitly, the equations of motion are

dP_i/dt = g Q⃗ · (E⃗_i + ϵ_ijk (V_j/c) B⃗_k),
dQ⃗/dt = g c Q⃗ × (ϕ⃗ - A⃗_i V_i/c).

An analogy is helpful here. The coupling constant g is analogous to the gravitational constant G, and the charge Q⃗ is the counterpart of mass m. Thus, the charge Q⃗ couples to the Y-M fields through g. In the Maxwellian limit, g cannot be determined independently of Q⃗. The electrodynamic limit is, therefore, consistently obtained by taking the limits g → 0, Q → ∞, keeping gQ fixed.

Finally, if the charge and current densities are continuous functions, then in the classical regime that we are in, it also follows that the number density should also be continuous and must satisfy the continuity equation (V_i(x_i, t) is the velocity field)

∂_0 N + ∂_i (N V_i) = 0,

whose inclusion naturally induces a fluid picture of the system.

§.§ Gauge transformations and gauge choice

We now describe the gauge transformations that leave the set of Y-M equations and the equations of motion for the particle covariant. The transformations are local, by which we mean that the axis n̂ and angle of rotation θ in the colour space vary continuously with the position and time coordinates[The special case where there is no variation is called a global transformation.]. Let R(n̂(r⃗, t), θ(r⃗, t)) be the matrix that denotes the transformation. If we define a vector χ⃗ = n̂θ, then the gauge fields transform as

ϕ^a → ϕ'^a = R(n̂(r⃗, t), θ(r⃗, t))^ab ϕ^b - (1/(gQc)) ∂χ^a(x_i, t)/∂t,
A_i^a → A'^a_i = R(n̂(r⃗, t), θ(r⃗, t))^ab A_i^b + (1/(gQ)) ∂χ^a(x_i, t)/∂x_i.

It is clear from Eqs (<ref>)-(<ref>) that the gauge potentials are not vectors in the colour space: even if they were vanishing in one coordinate system, they pick up nonzero values once a gauge transformation is performed. The only exception occurs when the transformation is global. Be that as it may, the colour charge, the colour electric field and the colour magnetic field have a covariant character. Thus, under a gauge transformation, we have

Q^a → Q'^a = R(n̂(r⃗, t), θ(r⃗, t))^ab Q^b,
E_i^a → E'^a_i = R(n̂(r⃗, t), θ(r⃗, t))^ab E_i^b,
B_i^a → B'^a_i = R(n̂(r⃗, t), θ(r⃗, t))^ab B_i^b.

The covariance property of the charges and the fields allows us to choose a convenient gauge to work with. As in ED, one may employ a variety of gauges, such as the Coulomb and Lorentz gauges[The complete formalism presented in this section may be derived from a Lagrangian formalism.]. Yet another very useful gauge, which we shall employ in this paper, is the temporal gauge, where we set ϕ⃗ = 0.

§ DIMENSIONAL CONSIDERATIONS

The quantum version of Y-M theory employs only one coupling constant g, and does not introduce the charge Q. That is made possible because the quantum theory has an additional natural constant ħ. The classical theory does not have this luxury, and we are perforce required to introduce Q. For this reason, it is instructive to perform a simple dimensional analysis so that we may avoid erroneous comparisons with ED quantities. The dimensions of most quantities are quite similar to the Gaussian units employed in ED.
It follows from Eqs (<ref>) and (<ref>) that [A, ϕ ] =M^1/2L^1/2T^-1 [E, B ]= M^1/2L^-1/2T^-1 [g ] = M^-1/2L^-3/2T[Q ] =ML^3T^-2As mentioned, Q does not have the dimension of electric charge e, but it is the product gQ that has the right dimensions. In fact, [Q] = [e^2] and [g] = [e^-1] §.§Scales and normalizationsWe first identify the fundamental scales that are characteristic of Y-M dynamics. As observed, even in the absence ofsources ρ⃗ andJ⃗_i, the fields are self-interacting, with a strength g. Combining it with c, we getthe first scale which has the dimension of angular momentum and is given byH = 1/g^2cIf we consider purely the matter sector, we get a fundamental length and time scale, that are given byℓ = Q/mc^2; τ = ℓ/cMedium-dependent scales emerge when we consider a medium, Y-M plasma in our case. If its equilibrium number density is n_0, new time and length scales emerge which are, apart from multiplicative factors, the familiar plasma frequency and the skin depth. They are given by ω_p = √(4 π n_0g^2Q^2/m); d = c/ω_p Finally, if there is a characteristic speed scale Ualso associated with the plasma, we get yet another length scale given byκ_YM = (8mc ω_p^2/QU)^1/3For instance, U could correspond to the streaming flow velocity of the colour fluid considered in the present study. It is convenient to express all other quantities in units of scale from the above list, chosen depending on the context. In this paper, time scales are expressed in units of ω_p, lengths in units of the skin depth, gauge potentials in units of a = gQ/mc^2 and fields in units of f= gQ/mcω_p.In setting up these scales, we have implicitly assumed that we are working in a non-relativistic regime. Thus mass is treated as a constant.§.§ Units employedIn this paper, we shall employ the scales mentioned in the previous section <ref> to express fields and kinematical variables in appropriate units, and thus deal with dimensionless normalised quantities. The basic units are listed in Table <ref>.In these units, the basic equations obtain an elegant form involving only dimensionless variables. We list them below in order, by suppressing the subscript N (used in Table 1 to indicate normalized variables).The equations defining the colour electric and magnetic fields read E⃗_i =-∂_i ϕ⃗ - ∂_0 A⃗_i + α_YMA⃗_i ×ϕ⃗ B⃗_i=ϵ_ijk∂_j A⃗_k- α_YMϵ_ijkA⃗_j ×A⃗_k Next, the YM equations acquire the form ∂E⃗_i/∂ x_i - α_YMA⃗_i×E⃗_i = ρ⃗ ϵ_ijk∂B⃗_k/∂ x_j- ∂E⃗_i/∂ t - α_YMϕ⃗×E⃗_i -α_YMϵ_ijk( A⃗_j ×B⃗_k)= J⃗_i ∂B⃗_i/∂ x_i-α_YMA⃗_i×B⃗_i =0ϵ_ijk∂E⃗_k/∂ x_j + ∂B⃗_i/∂ t + α_YMϕ⃗×B⃗_i - α_YMϵ_ijk( A⃗_j ×E⃗_k) =0 The equations of motion acquire the formdV_i/dt = ê_Q· ( E⃗_i + ϵ_ijkV_jB⃗_k)d ê_Q/dt = α_YMê_Q× ( ϕ⃗ - V_iA⃗_i)The dimensionless constant appearing in equations (<ref>)-(<ref>) is given byα_YM = mc^3/ω_p QIt should be noted that α_YM essentially corresponds to the ratio of intrinsic length/time scales arising from the nonabelianY-M matter sector (Eq.(<ref>)) with that associated with ED plasma-like scales (Eq.(<ref>)). As expected, asα_YM→ 0, the equations reduce to three decoupled abelian ED equations.Thus, the value of α_YM measures the strength of the nonabelian contribution. Finally, the normalized continuity equation for the number density reads∂ N/∂ t= - ∂/∂ x_i (N V_i)§ DESCRIPTION OF THE SYSTEMThe stage is now set to define the system. At the outset, we set up the notations for the coordinate systems. Rectangular Cartesian coordinate systems will be employed both in the position space and the internal colour space. 
The orthonormal basis in the position space will be denoted by the set {x̂, ŷ, ẑ}. Similarly, we denote the orthonormal basis in the colour space by the set {ê_1,ê_2,ê_3}. We consider two counter-streaming Yang-Mills fluids in equilibrium. Each stream is made of particles of the same mass m, have the same equilibrium number density n_0, and move with respective velocities ± Ux̂. The corresponding charges of the particles are then given by Q⃗ = ± Qê_1. Accordingly, the total charge density vanishes everywhere, but there is a finite uniform current J⃗_i = g n_0 Q Ue⃗_1(2, 0, 0) in the system. Our object of study is the behaviour of the system under small perturbations, which will be denoted by calligraphic letters. In short,E⃗_i = E⃗_eq,i + ℰ⃗_i; B⃗_i = B⃗_eq,i +ℬ⃗_i; A⃗_i = A⃗_eq,i + 𝒜⃗_i Q⃗_s = Q⃗_eq,S + q⃗_SN_s = N_eq,S + n_S; V_i,S = V_eq,i,S + v_i,S; ρ⃗ = ρ⃗_eq + ϱ⃗; J⃗_i = J⃗_eq,i +j⃗_iwhere S denotes the label for the two-fluid species A andBwhich move with U x̂ and -U x̂ respectively. As mentioned,only the colour current J⃗_iof the overall system comprising these two fluids is non-vanishing at equilibrium.We employ translational invariance along the y-z plane and thus set the equilibrium fields E⃗_eq,i = B⃗_eq,i =0.It is now of interest to seek the linear response of the system when the system is perturbed from its equilibrium configuration slightly. We systematically linearise the equations of Y-M fields, the momentum equation, Wong, and continuity equations and eliminate all the variables in terms of the colour electric field. We restrict to the 2-D space variation of the perturbed fields in the x-y plane. The linearized set of equations isFourier analyzed in space and time. The wave vector k⃗ thus lies in the 2-D x-y plane.In Fig.1 the schematic geometry of the configuration has been shown. We now discuss and characterize the modes.At the outset, we order the set of chromo-electric field components, further organized assubsets,as follows:{{ E_z^(1),E_z^(2), E_z^(3)}, { E_x^(1),E_y^(1)}, { E_x^(2),E_y^(2),E_x^(3),E_y^(3)}}On performing the self-consistent calculation using Eqs (<ref>)-(<ref>), we may arrange the response of the fluids, in the basis given above,in the form of a9× 9 response matrix. The general form of the matrix is given by, R =[ ℛ_3 × 3 0 0; 0 ℛ_2 × 2 0; 0 0 ℛ_4 × 4 ]where, as is clear, the first block corresponds to the response to the electric field fluctuations in the ẑ direction which, as we may recall, is normal to both the k⃗ and U⃗. The second block gives the electric field fluctuations in the x-y plane and along the internal direction ê_1.These two blocks give rise to dispersion relations which are only abelian in nature and their form is essentially the same as ED modes. The third block gives rise to novel nonabelian modes and holds a number of interesting features. They correspond to fluctuations in the x-y spatial plane and ê_2-ê_3 plane in the internal space. The dispersion relations in each sector are obtained by demanding that the determinant of the corresponding matrix ℛ_i × i vanishes. We first discuss the abelian modes in the next section <ref>. § THE ABELIAN MODES§.§ Modes withẑ polarization We start with the topmost block matrix ℛ_3 × 3 in (<ref>). It has the following explicit formℛ_3 × 3 =[ 2/ω^2-k^2 -100;0 -10;00 -1 ]This provides a dispersion relation of ω^2 = k^2 +2, fora colour wave propagating in the x-y plane, which is linearly polarized along the z direction in space and in the internal space it is along ê_1. 
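Spelling out the vanishing-determinant condition for this block: only the (1,1) entry carries a plasma response, so det ℛ_3×3 = 2/(ω^2 - k^2) - 1 = 0, which immediately gives ω^2 = k^2 + 2.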
The other two directions in the colour space are insensitive to the plasma. Note that this chromodynamical mode is essentially the same as the one we obtain in the context of electrodynamic equal mass plasma.§.§ Abelian modes with other orientation of Electric field We now consider the sector corresponding to ℛ_2 × 2. For this case, the colour electric field fluctuations lie in thex-y plane and along ê_1 direction in internal space. This internal space direction corresponds to the equilibrium current flow direction.The response is abelian in this case as theform of the sub-matrix ℛ_2× 2 has nodependence onα_YM and is given byℛ_2 × 2 =[ M_1 -1M_2;M_3 M_4 -1 ]where the matrix elements M_i have the followingcomplicated expressions,M_1= 2/ω^2 - k_y^2(ω^4 + ω^2 U^2k^2 + U^4k_x^2 k_y^2/(ω^2 - U^2 k_x^2 )^2)M_2=-β_y (2 ω^2 U^2 + 2 U^4 k_x^2/(ω^2 - U^2 k_x^2 )^2 - 1)M_3=-β_x (2 U^2 /(ω^2 - U^2 k_x^2 ) - 1) M_4= 2/ω^2 - k_x^2 where,β_x= -k_x k_y/ω^2 - k_x^2 β_y= -k_x k_y/ω^2 - k_y^2 The dispersion relations are inferred from the algebraic condition (M_1-1)(M_4-1) - M_2 M_3 = 0The dispersion relation in this case provides the modes that are identical to their ED counterparts which exhibit two-stream instability and Weibel filamentation. We shall briefly illustrate this by considering the two limits,viz., θ =0, and θ = π/2, that is when the wave vector is parallel and perpendicular to the direction of the flow. For the case when θ = 0, weobtain the following form of the dispersion relation:ω^4 - ω^2 ( 2k^2 U^2 +2) + k^4 U^4 - 2k^2 U^2 = 0It is clear that we will obtain a negative value of ω^2 for all values of the wavenumber which satisfies the relationshipk^2 < 2/U^2. Note that this is,in fact,the same condition as that of the two-stream instability in the ED plasma. Whenθ = π/2,at which k_x = 0 and k_y = k,the dispersion relation is given by ω^4 - ω^2(2 + k^2 ) - 2k^2 U^2 = 0the solution for which is given byω^2 = 1/2((2+k^2) ±√((2 +k^2)^2 + 8 k^2 U^2))It is thus clear that since k and U are both non-vanishing, there is always an unstablebranch with ω^2 < 0. This unstable mode is similar to that of the filamentation mode in the context of ED plasma. Thus the modes for which the electric field vector is directed along the ê_1 direction in the colour space (same direction as that of the equilibrium charge colour), behave in the same manner as ED plasma. § NOVEL NONABELIAN MODESOne of the major endeavours in the study of chromodynamics is to identify phenomena which unmistakably signal the nonabelian features so that the theory may be vindicated. While corrections to the abelian terms are no doubt important, signatures – whose very existence would be ruled out in a purely abelian theory are more valuable. This is particularly true of QGP which is produced in heavy ion collisions. The system that we are studying does show such signatures.As made clear in Eq (<ref>),in this case, the electric field fluctuations lie in thex - y plane in real space and inê_2 - ê_3 plane, in the colour space. 
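Before turning to that sector, the θ = 0 instability condition quoted above is easy to verify numerically. The sketch below (Python; the value of U is illustrative) treats the quartic as a quadratic in ω^2 and checks for a negative root on either side of the boundary k^2 = 2/U^2:

import numpy as np

U = 0.2                          # illustrative normalized streaming speed
k_c = np.sqrt(2.0) / U           # two-stream boundary, k^2 = 2/U^2
for k in (0.5*k_c, 0.99*k_c, 1.5*k_c):
    # omega^4 - omega^2 (2 k^2 U^2 + 2) + k^4 U^4 - 2 k^2 U^2 = 0
    w2 = np.roots([1.0, -(2*k**2*U**2 + 2.0), k**4*U**4 - 2*k**2*U**2]).real
    print(k, w2, "unstable:", (w2 < 0).any())

A root with ω^2 < 0 appears exactly for k below the boundary, reproducing the ED two-stream criterion.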
The submatrixℛ_4× 4, which governs the responsehas the formℛ_4 × 4= [ -1β_y -i p0;β_x -100;i p0 -1β_y;00β_x -1;]wherep = 2 α_YMU^3 k_x/(ω^2 - k_y^2) (ω^2 - U^2 k_x^2) The structure of ℛ_4 × 4 immediately establishes the identities (ω^2 - k_x^2) ℰ_y^(2,3) = -k_xk_yℰ_x^(2,3)because of which it is sufficient to study the dispersion relations and their modes with the simpler equation ℛ̃_4= [-1 +β_xβ_y-i p; i p -1 +β_x β_y; ][ ℰ_x^(2); ℰ_x^(3); ] = 0The resulting dispersion condition is p̃ = ±(1 - β_x β_y )First, we notice that the eigenmodes of Eq (<ref>) are universal: they are independent of the dispersion relation and are given by ℰ_R,L = ℰ_x^(2)± i ℰ_x^(3)We now look at the detailed structure of Eq (<ref>) whichleads to the quarticpolynomial equation in ω^2, (ω^2 - k_y^2 ){ω^6 - ω^4(U^2k_x^2 + k^2 ) + ω^2(U^2k_x^2[k_x^2+k^2 ] - 2α_YMU^3k_x ) + 2α_YMU^3k_x^3 } = 0(ω^2 - k_y^2 ){ω^6 - ω^4(U^2k_x^2 + k^2 ) + ω^2(U^2k_x^2 [k_x^2+k^2] + 2α_YMU^3k_x )- 2α_YMU^3k_x^3 } = 0 corresponding to the two respective choices for the sign of p̃ in Eq (<ref>).Here the first factor (ω^2 - k_y^2) gives a trivial mode. We now analyze the second factor which is within the curly braces and is a third-degree polynomial in ω^2. We first consider the two specific cases of θ = 0 and θ = π/2. This corresponds to havingk⃗ parallel and perpendicular to U⃗ respectively.§.§ The specific cases (θ = 0 and θ = π/2)When k∥U, Eq (<ref>) reduces to(ω^2 - k_x^2 ){ω^4 - ω^2U^2k_x^2 ∓ 2α_YMU^3k_x } = 0The root(ω^2 =k_x^2 ) correspondsto a trivial solution. The second factor within curly braces leads to ω^2 = 1/2( k_x^2U^2 ±√(k_x^4 U^4 ±α_YM 8 U^3 k_x ))The four solutions for ω^2 merit some discussion.Let us label the respective solutions as ω_±∓ in the same order of signs as they occur in Eq (<ref>). Then the modes belonging to ω_++ and ω_-+ share the common colour polarization in internal space and are given by ℰ_R = ℰ_x^(2) + iℰ_x^(3)at their respective eigenfrequencies. Similarly, the modes belonging to ω_+- and ω_– have the complementary colour modeℰ_L = ℰ_x^(2) - iℰ_x^(3)at their respective eigenfrequencies. It should be noted that in the limitα_YM→ 0, both ω_- +, ω_–→ 0. On the other hand,ω_+ +, ω_+-→± Uk and correspond to the Doppler-shifted frequencies for the two fluids streaming with ± U. When α_YM is finite, a number of features emerge. They are naturally sensitive to the two relative signs. It is clear that ω_++ always gives a propagating mode and ω_-+ always gives growing modes which, for large k are proportional to 1/k^3/2. The other two modes, ω_+- and ω_– have both propagation and growth, the latter being conditional[Growth in one mode is naturally accompanied by damping in its partner mode. They are not of interest to us.] on the inequality k^3 < κ_YM^3≡8α_YM/U, beyond which it vanishes. Note that the growths are governed by a new medium-dependent nonabelian scale, viz, κ_YM. Their detailed features are presented in Table <ref>. It may be seen in Table <ref> that ℰ_R can give unconditional growth while ℰ_L can only give unconditional propagation. This asymmetric situation can be understood by noting that the initial choice for the net colour current is parallel to ê_1 even though the net colour charge density vanishes.For a quantitative illustration, we choose the values (α_YM =3, U = 0.2) and plot both the growth rates ω_-+ andω_+- in Fig <ref>. 
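These curves follow directly from Eq. (<ref>); a short numerical sketch that generates them is given below (Python; the branch labels follow the ω_±∓ sign convention defined above):

import numpy as np

alpha_ym, U = 3.0, 0.2                 # the parameter values chosen above
k = np.linspace(1e-3, 40.0, 4000)      # normalized k_x at theta = 0; range is illustrative

def omega2(outer, inner):
    # omega^2 = ( k^2 U^2 + outer*sqrt( k^4 U^4 + inner*8 alpha_YM U^3 k ) ) / 2
    disc = (k**4 * U**4 + inner * 8.0 * alpha_ym * U**3 * k).astype(complex)
    return 0.5 * (k**2 * U**2 + outer * np.sqrt(disc))

gamma_mp = np.abs(np.sqrt(omega2(-1.0, +1.0)).imag)  # omega_{-+}: grows for all k
gamma_pm = np.abs(np.sqrt(omega2(+1.0, -1.0)).imag)  # omega_{+-}: grows only below the cut-off
k_cut = (8.0 * alpha_ym / U)**(1.0/3.0)              # kappa_YM, i.e. k^3 = 8 alpha_YM / U
print(k_cut, gamma_pm[k > k_cut].max())              # growth of omega_{+-} vanishes beyond kappa_YM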
It should be noted that for this particular choice of parameters, the growth rate of ω_-+ is higher than that of ω_– or ω_+- for all values of k.This completes the description of the modes at θ = 0.Forθ = π/2(i.e. when k_x =0) we do not find any non-trivial nonabelian solutions. §.§ The general case of0 < θ < π/2For finite values ofθbetween 0 and π/2,we havebothk_x = k cosθ and k_y = k sinθ as finite.The factor in the curly braces of Eqs. (<ref>) is thensolved numerically. It is noted that as one increases the value ofθ from zero, the growth rate of the modes generally follows the same trend as has been observed atθ =0. However, the maximum growth rates(γ_max)of both the unstable modes decrease with increasing value of θ. Furthermore, the cut-off wavenumber at which the growth rate of the modes ω_+- and ω_– vanish,increases with θ. These features are illustrated in figures <ref>, <ref>.The cut-off wavenumber as a function of θ,(k_c (θ))has the form of κ_YM f(θ, U). Interestingly, κ_YM is itself composed of three scales which may expressed asκ_YM^3∝(mc^2/Q) (ω_p/c) ( ω_p/U)which are respectively the fundamental YM scale, the skin depth of the plasma,and a configuration-dependent scale. Thus, the novel nonabelian modes that we encounter carry the full richness of the nonabelian dynamics which would not be captured in quantities such as the Debye screening length. When the value of θ is close to π/2 an interesting feature emerges. This can be observed in Fig.<ref> and Fig.<ref>. The growth rate acquires an additional peak at lower values of k for both the growing modes. We will discuss the behaviour of this peak and the physics associated with it in the next subsection (<ref>). Table<ref> summarises the findings described above. There is also an interesting propagating mode in the system which becomes increasingly relevant near θ = π/2 and is responsible for the additional peak in the growth rate that is observed in this regime for the two growing modes. We discuss these issues in detail in the next subsection(<ref>). §.§.§ The propagating mode and the additional peak in the growth rateWe now analyze the dispersion relation of Eq.(<ref>) near θ = π/2 for whichk_x is very small.Thus, retaining terms only upto linearorder ink_x(<ref>) we obtain,2 α_YMU^3 k_x/ω^2(ω^2 - k_y^2 ) = ± 1This implies that there is a likelihood of a propagating solution if (ω± k_y) → 0 as k_x → 0. We seek aconsistency condition for the same. Using continuity the consistency condition may be seen to be given by k_y^3 = α_YM U^3Thus, near θ = π/2, we get a light-like mode propagating nearly parallel toŷ with the wavenumberk given by (<ref>).The overall group velocity of this mode approaches c asθ→π/2 as shown in Fig.(<ref>). This is a purelyelectromagnetic mode and couples to the electric field fluctuations along thex̂ direction only.It is thus important to note that the Y-M plasma permits the vacuum-like propagation of electromagnetic mode at a very specific value of the wavevector with no inherent shielding of plasma. The specific value of the wave vector for which such a vacuum-like propagation happens scales linearly with U and α_YM^1/3.The appearance of the additional peak in the growth rate in Figs. <ref>,<ref> and their inset can now be understood on the basis of this particular radiative mode. A non-monotonicity in any plot signifies the presence of competing physics. 
Here the free energy available from the flowing plasma also gets coupled with the propagating mode which extracts energy from the system and radiates away. Thereby decreasing the availability of free energy near a certain domain of the spectrum for growth. It should be noted that the eigenvectors of the growing modes and the propagating radiative mode are not the usual normal modes. There is thus a coupling of energy between the two kinds of modes even in the linear regime.The influence of the radiative mode on the growth rate as envisaged above has been conclusively demonstrated in Figs.(<ref>) and (<ref>). The plot of k = k_p (which is a wave vector at which the additional peak in the growth rate appears) as a function of α_YM and U in these figures clearly shows that k_p scales asα_YM^1/3 and U which is similar to wave vector Eq.(<ref>) of the radiative mode. § COMPARISON BETWEEN THEABELIAN AND NONABELIAN GROWTH RATESIt is important, from an observational perspective, to compare the relative strengths of the growth rates of the nonabelian modes with the conventional abelian ones. One general feature is that at θ =0, the two stream-like nonabelian modes belonging to ω_- + dominate over the abelian counterparts. In fact, by tuning α_YM to higher values, it may be made dominant for all k values. More details can be gleaned from Figs. <ref> and <ref> from which we note that for k small, the growth rates of the nonabelian modes belonging to ω_+ - and ω_- - dominate over that of the abelian mode in the region of the parameter space which we have studied.§ SUMMARY AND CONCLUSIONIn this paper, we have studied a Yang-Mills system consisting of two counterflowing fluids of opposite charges and studied the dynamics of instabilities in them with a view to contrast that with the corresponding ED plasma.We find that the nonabelian effects do not present themselves merely as corrections to the familiar ED-like abelian modes,but also yield an entirely new set of modes which bring out the nonabelian dynamics, plasma dynamics and the effects of flow all at once. We believe that these modes which are exclusive to YM dynamics can have significant implications in understanding the physics ofQGP which are produced in heavy ion collisions. The purely propagating modes may be expected to leave their imprint on the resultant particle jets. Similarly, growths in instability may lead to asymmetries in the angular distributions of the hadrons which are ultimately detected. Any prediction of what they will be would be too premature since we have not accounted for the intrinsic nonlinearity in YM equations.Furthermore, for any reasonable comparison, it is also necessary to consider the gauge group SU(3) which would introduce its own additional features. These studies will be taken up separately.§ CREDIT AUTHORSHIP CONTRIBUTION STATEMENTSB: Numerical work, Analysis, Preparation of Manuscript.AD: Conceptualization, Analysis, Preparation of Manuscript. VR: Conceptualization, Analysis, Preparation of Manuscript. BP:Numerical work, Analysis. § DECLARATION OF COMPETING INTERESTThe authors do not present any kind of conflict of interest. § DATA AVAILABILITYNo data was used for the research described in the article. § ACKNOWLEDGEMENTSB wishes to acknowledge the Department of Science and Technology (DST), Govt. of India for providing Inspire Fellowship in support of PhD program. 
AD acknowledges support from the Science and Engineering Research Board (SERB) core grants CRG 2018/000624 and CRG/2022/002782, as well as the J C Bose Fellowship grant JCB/2017/000055, Department of Science and Technology, Government of India. | http://arxiv.org/abs/2312.16509v1 | {
"authors": [
"Subramanya Bhat K N",
"Amita Das",
"V Ravishankar",
"Bhooshan Paradkar"
],
"categories": [
"physics.plasm-ph",
"hep-ph"
],
"primary_category": "physics.plasm-ph",
"published": "20231227103545",
"title": "Novel Instabilities in Counter-Streaming Nonabelian Fluids"
} |
AIP/123-QEDAn MPI-OpenMP mixing parallel open source FW-H code for aeroacoustics calculation] An MPI-OpenMP mixing parallel open source FW-H code for aeroacoustics calculationSchool of Aeronautic Science and Engineering, Beihang University, Beijing, 100191, China LHD, Institute of Mechanics, Chinese Academy of Sciences, Beijing, 100190, ChinaLHD, Institute of Mechanics, Chinese Academy of Sciences, Beijing, 100190, China School of Aeronautic Science and Engineering, Beihang University, Beijing, 100191, [email protected] [email protected], Institute of Mechanics, Chinese Academy of Sciences, Beijing, 100190, China In this paper, a permeable surface nondimensional FW-H (Ffowcs Williams-Hawkings) acoustics analogy post-processing code with convective effectand AoA (angle of attack) corrections, OpenCFD-FWH, has been developed. OpenCFD-FWH is now used as post processing code of our finite volume CFD solver OpenCFD-EC (Open Computational Fluid Dynamic code for Engineering Computation). However, OpenCFD-FWH can also be used by other CFD solvers with the specified data interface.The convective effect is taken into account by using Garrick Triangle to switch the wind tunnel cases coordinate system to a moving model with fluid at rest coordinate system, which simplifies the FW-H integration formulation and improves the computational efficiency of the code. The AoA effect is also taken into account by coordinate transformation.In order to validate the code, three cases have been implemented. The first two cases are a monopole and a dipole in a mean flow with AoA, and the results of the code and the analytical solution are practically identical. The third case is the well-known 30P30N configuration with a Reynolds number of 1.71×10^6 and an AoA of 5.5^∘. OpenCFD-EC with IDDES (Improved Delayed Detached-eddy simulation) is utilized to obtain the flow field, and the result shows relative good agreement when compared to JAXA experiments. Moreover, the code is implemented in a hybrid parallel way with MPI and OpenMP to speed up computing processes (up to 538.5 times faster in the 30P30N validation case) and avoid an out-of-memory situation. The code is now freely available on <https://github.com/Z-K-L/OpenCFD-FWH>.[ Xinliang Li^*, January 14, 2024 ====================§ INTRODUCTIONWith the escalating demands for environmental protection, aeroacoustics noise hasreceived considerable attention from both the industrial and academic sectors,especially in the aviation sector. Aircraft noise is restricting the development ofairports. Hence, it is very important to conduct far-field noise evaluation duringaircraft development and design stages. One of the most practical ways to evaluate far-field noise of the aircraft is the hybrid CAA (computational aeroacoustics) approach, since the DNS(direct numerical simulation) for far-field noise in engineering problems is unrealistic.The hybrid CAA method involves obtaining unsteady flow field through CFD solvers and thenemploying acoustic analogy equations to calculate far-field noise, which is widely adopteddue to its substantial reduction in computational complexity.For example, Molina et al. <cit.> investigated tandem cylinder noise through DDES (Delayed Detached-eddy simulation) and FW-H (Ffowcs Williams-Hawkings) acoustic analogy. Ma et al. <cit.> investigated aeroacoustic characteristics of Swept Constant-ChordHalf model with four different types of high-lift devices through IDDES(Improved Delayed Detached-eddy simulation) and FW-H acoustic analogy. Hu et al. 
<cit.>utilized implicit wall-resolved LES (Large Eddy Simulation) and FW-H acoustic analogyto explore the noise reduction mechanisms of TE (trailing edge) serrations. Chen et al.<cit.> also used the hybrid method of LES and the FW-H acoustic analogy, tostudy the noise of flow across a cylinder with varying spanwise lengths.Souza et al. <cit.> carried out LBM (Lattice Boltzmann Method) simulation on the 30P30N high-lift configuration and applied FW-H acoustic analogy to compute the associated acoustic field. Teruna et al. <cit.> analyzed the noise reduction effect of a fully resolved3-D printed porous TE utilizing LBM and FW-H acoustic analogy as well. DNS and FW-H acoustic analogy were conducted by Turner and Kim <cit.> to assess the importance of quadrupolenoise in aerofoil flow separation or stall conditions. They acquired the quadrupole noise by calculating therelative difference between the FW-H results of the solid and permeable surface.Currently, there are only a few open source codes available for FW-H acoustic analogy, such as libAcoustics developedby Epikhin et al. <cit.> for OpenFOAM written in C++, SU2PYFWH developedby Zhou et al. <cit.> for SU2 written in python, and a Farassat 1A solver developed byShen et al. <cit.> for HiFiLES written in C++. However, they all have some problems. Firstthey do not support MPI (Message Passing Interface) parallel to accelerate the computing processes and reduce memory usage by distributing the computing tasks across multiple nodes/computers, which is very important when facing large datasets. Second, they only support FW-H integration solutions for solid surface, which do not account for the quadrupole noise and unable to address cases with porous materials. Third, libAcoustics and SU2PYFWHrequire the installation of OpenFOAM and SU2 software, respectively. Fourth, libAcoustics and SU2PYFWHdo not consider inflow with an AoA (angle of attack). Finally, they lack comprehensive tutorials, making it difficult for others to use their codes with other CFD solvers.Hence, the OpenCFD-FWH code has been developed for our compressible finite volume CFD solver OpenCFD-EC<cit.> (Open Computational Fluid Dynamic code for Engineering Computation). More importantly, it can be utilized by any other solvers with the right data structures. Alternatively, one can modify the data reading module of the code accordingly. The code is based on a permeable surface FW-H integration solution with the Garrick Triangle <cit.> applied to simplify the equations. The inflow with an AoA is also taken into account. Additionally, the code is implemented in a hybrid parallel way to accelerate the computing processes and reduce the memory requirement for a single node/computer. The deployment of the code demands only an MPI library and a Fortran 90 compilation environment. Furthermore, Matlab programs for monopole and dipole validation are provided to generate the required input data for tutorial purposes.The rest of the paper is organized as follows. Sec.<ref> derived the permeable surface FW-H acoustic analogy methods with convective effect and AoA correction. Sec.<ref> depicted the code structure and parallel implementation. Sec.<ref> present the results of the code for three different validation cases. 
Finally, conclusions are given in Sec.<ref>.§ FW-H ACOUSTIC ANALOGY §.§ Permeable Surface FW-H equationThe FW-H equation <cit.> for permeable surface has the form of:^2c^2(ρ-ρ_0)= ∂/∂ t[Q_nδ(f)]-∂/∂ x_i[L_iδ(f)]+∂/∂ x_i∂ x_j[T_ijH(f)],where the □^2=1/c^2∂^2/∂ t^2-∇^2 is the D'Alembert operator, c is the sound speed, ρ is the density, ρ_0 is the density of the undisturbed medium, δ and H are the Dirac delta and Heaviside function, respectively. The moving surface is described by f(x,t)=0 such that n̂=▿ f is theunit outward normal of the surface <cit.>. The Q_n, L_i and Lighthill tensor stress T_ij are defined as:Q_n =Q_in̂_i=[ρ_0v_i+ρ(u_i-v_i)]n̂_i,L_i =L_ijn̂_j=[P_ij+ρ u_i(u_j-v_j)]n̂_j,T_ij =ρ u_iu_j+P_ij+c^2(ρ-ρ_0)δ_ij,where v_i is the component of the velocity of the moving surface, u_i is the component of the velocity of the fluid, δ_ij isthe Kronecker delta, and P_ij is the stress tensor:P_ij=(p-p_0)δ_ij-σ_ij,where p_0 is the ambient pressure, σ_ij is the viscous stress tensor. Usually σ_ij is a negligible source of soundand is neglected by almost any other FW-H implementations <cit.>. Hence, P_ij=(p-p_0)δ_ij is used in this paper. §.§ Integration solution for general casesNeglecting the quadrupole term in Eqs. (<ref>), and following the derivation procedure of Farassat 1A formulation <cit.>,the integral solution of the FW-H equation for permeable surface can be derived as:4π p_T^'(x,t)=∫_f=0[Q̇_̇i̇n̂_i+Q_iṅ̂̇_̇̂̇î̇/r(1-M_r)^2]_retdS+∫_f=0[Q_n(rṀ_̇ṙ+c_0(M_r-M^2))/r^2(1-M_r)^3]_retdS,4π p_L^'(x,t)=1/c_0∫_f=0[L̇_r/r(1-M_r)^2]_retdS+∫_f=0[L_r-L_M/r^2(1-M_r)^2]_retdS+1/c_0∫_f=0[L_r(rṀ_r+c_0(M_r-M^2))/r^2(1-M_r)^3]_retdS,p'(x,t)=p'_T(x,t)+p'_L(x,t),where x is the observer coordinate vector, t is the observer time, r is the distance between observer and source,c_0 is the sound speed of the undisturbed medium, the superscript "˙" means derivative over the source time τ , and the subscripts T and L represent the thickness and loading components, respectively. M is the Mach number vector of the moving surfacewith component M_i=v_i/c_0. M_r, L_r, and L_M are defined as:M_r =M_ir̂_i,L_r =L_ir̂_i,L_M =L_iM_i,where r̂_i is the component of the unit radiation vector.The subscript ret in Eqs. (<ref>), and (<ref>) means thequantities inside the square brackets are evaluated at retarded time:τ_ret=t-r_ret/c . Despite the quadrupole term in Eqs. (<ref>) is omitted, the quadrupole source inside the permeable FW-H surface is still beaccounted for by Eqs. (<ref>) according to Brentner and Farassat <cit.>. §.§ Integration solution for wind tunnel casesEqs. (<ref>) is derived in a coordinate system that, the source is moving in a stationary medium with observers at restin the far-field. In a wind tunnel case, where both the source and observer are stationary within a uniform flow with an AoA, which is the common scenario in the majority of aircraft CFD cases, the Garrick Triangle <cit.> can be applied to transform the coordinate system. In this new coordinate system, the source is now moving in a stationary medium, while observers remain stationary relative to the source. This will lead to a large simplification of the formulation and increase the computational efficiency of the code.First, let us assume that the mean flow has a velocity of U_0 along the positive x_1 axis direction. The retarded time of Eqs. 
(<ref>) will be changed to:τ_ret=t-R/c_0 .where R is the effective acoustic distance between the source and the observer <cit.>:R =-M_0d_1+R_*/β^2,R_* =√(d_1^2+β^2[d_2^2+d_3^2]), β =√(1-M_0^2),M_0=U_0/c_0,where d_i=x_i-y_i is the component of distance between the observer and the source.The component of the unit radiation vector is now altered to:R̂_1 =-M_0R_*+d_1/β^2R, R̂_2 =d_2/R, R̂_3 =d_3/R . Next, consider a mean flow with an AoA in the x-y plane, and its velocity magnitude remains equal to U_0. By using the 2D planecoordinate transformationd_1^' = d_1cos(AoA)+d_2sin(AoA),d_2^' =-d_1sin(AoA)+d_2cos(AoA),and bring them into the d_1 and d_2 of the Eqs. (<ref>), and Eqs. (<ref>) yields:R=-M_1d_1-M_2d_2+R_*/β^2,R_* =√((M_1d_1+M_2d_2)^2+β^2[d_1^2+d_2^2+d_3^2]),M_1=M_0cos(AoA),M_2=M_0sin(AoA) . The component of the unit radiation vector is also changed:R̂_1^' =-M_0R_*+d_1^'/β^2R, R̂_2^' =d_2^'/R, R̂_3 =d_3/R . R̂_1 =R̂_1^'cos(AoA)-R̂_2^'sin(AoA), R̂_2 =R̂_1^'sin(AoA)+R̂_2^'cos(AoA) . Then, replacing all the r in Eqs. (<ref>) ∼ Eqs. (<ref>) by Eqs. (<ref>). In addition, both the moving surface and fluid velocity need to subtract the mean flow velocity, since the coordinate system has changed. Now, the distance R is a constant and can be calculated in advance for each observer, rather than in every sampling frame. Moreover, the source time derivative of n̂_̂î,and M_R will be zero because the surface is in uniform rectilinear motion. Therefore, the simplifiedversion of Eqs. (<ref>), and Eqs. (<ref>) for wind tunnel cases take the following form:4π p_T^'(x,t)=∫_f=0[Q̇_̇i̇n̂_i/R(1-M_R)^2]_retdS+∫_f=0[Q_nc_0(M_R-M^2)/R^2(1-M_R)^3]_retdS,4π p_L^'(x,t)=1/c_0∫_f=0[L̇_R/R(1-M_R)^2]_retdS+∫_f=0[L_R-L_M/R^2(1-M_R)^2]_retdS+∫_f=0[L_R(M_R-M^2)/R^2(1-M_R)^3]_retdS,withQ_n=[-ρ_0U_0i+ρu_i]n̂_i,L_i =[P_ij+ρ (u_i-U_0i)u_j]n̂_j,M_R=M_iR̂_i,L_R=L_iR̂_i . Notice that the quantities inside the square brackets in Eqs. (<ref>), and (<ref>) are now evaluatedat the retarded time calculated by Eqs. (<ref>). And the necessary inputs for far-field noise calculationfrom the CFD solver include the coordinate, unit outward normal, and area of the FW-H surface, along with the density,velocity, and pressure pulsation at each sampling frame. §.§ NondimensionalizationSince OpenCFD-EC utilizes dimensionless Navier-Stokes equations, OpenCFD-FWH is based on a nondimensional version of Eqs. (<ref>), and (<ref>) to avoid data conversion errors and computational expenditures.The reference quantity used for the dimensionless transformation is the mean flow quantity, with the exception that the pressure is nondimensionalized by ρ_0U_0^2, and the coordinate is nondimensionalized by the units of the mesh, which yields:ρ^*= ρ/ρ_0, u_i^*=u_i/U_0, v_i^*=v_i/U_0, p^*=p/ρ_0U_0^2,c_0^*= c_0/U_0=1/M_0, x_i^*=x_i/L_ref, y_i^*=y_i/L_ref,dS^* =dS/L^2_ref, t^*=tU_0/L_ref,τ^*=τU_0/L_ref . Then the dimensionless FW-H integration solution for wind tunnel cases can be obtained by replacing the variables inEqs. (<ref>), and (<ref>) to their corresponding nondimensional counterparts:4πp_T^'^*(x^*,t^*)=∫_f=0[Q̇^̇*̇_̇i̇n̂_i/R^*(1-M_R)^2]_retdS^*+∫_f=0[Q^*_n(M_R-M^2)/M_0R^*^2(1-M_R)^3]_retdS^*,4πp_L^'^*(x^*,t^*)=∫_f=0[M_0L̇_R^*/R^*(1-M_R)^2]_retdS^*+∫_f=0[L^*_R-L^*_M/R^*^2(1-M_R)^2]_retdS^*+∫_f=0[L^*_R(M_R-M^2)/R^*^2(1-M_R)^3]_retdS^*.withQ^*_n=[-U^*_0i+ρ^*u^*_i]n̂_i,L^*_i =[P^*_ij+ρ^* (u^*_i-U^*_0i)u^*_j]n̂_j,U^*_01=cos (AoA), U^*_02=sin(AoA), U^*_03=0 . 
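Since the geometric quantities R and R̂_i are fixed for a given observer, they are worth precomputing once, as noted above. A compact sketch of Eqs. (<ref>) ∼ (<ref>) with the AoA rotation included follows (a Python stand-in for the Fortran implementation, purely for illustration; inputs are assumed nondimensional):

import numpy as np

def acoustic_geometry(obs, src, M0, aoa):
    """Effective acoustic distance R and unit radiation vector R_hat for a
    uniform mean flow of Mach number M0 at angle of attack aoa (radians)."""
    d = np.asarray(obs) - np.asarray(src)          # d_i = x_i - y_i
    M1, M2 = M0*np.cos(aoa), M0*np.sin(aoa)
    beta2 = 1.0 - M0**2
    R_star = np.sqrt((M1*d[0] + M2*d[1])**2 + beta2*np.dot(d, d))
    R = (-M1*d[0] - M2*d[1] + R_star) / beta2
    # flow-aligned components, then rotate the radiation vector back by the AoA
    d1p = d[0]*np.cos(aoa) + d[1]*np.sin(aoa)
    d2p = -d[0]*np.sin(aoa) + d[1]*np.cos(aoa)
    R1p = (-M0*R_star + d1p) / (beta2*R)
    R2p = d2p / R
    R_hat = np.array([R1p*np.cos(aoa) - R2p*np.sin(aoa),
                      R1p*np.sin(aoa) + R2p*np.cos(aoa),
                      d[2]/R])
    return R, R_hat

print(acoustic_geometry([10.0, 2.0, 0.0], [0.0, 0.0, 0.0], 0.17, np.radians(5.5)))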
Note that f=0, R̂_i and n̂_i remain unchanged whether the formulations arenondimensional or dimensional. Thus, the superscript "*" will not be necessary.§ IMPLEMENTATION OF OPENCFD-FWHOpenCFD-FWH can be divided into 4 main parts: initialization, pressure signals calculation, data output and finalization, as illustrated in Fig. <ref>.§.§ InitializationThe first step of the code is to initialize all the MPI processors. This involves control file reading, surface geometric data acquiring, assigning surface to the corresponding MPI processor, allocating memory, and reading the location of the observers as well as the FW-H dataset.The essential parameters for the code are specified in the control.fwh file, such as Mach number, AoA, time step, number of observers, number of sampling frames, and number of OpenMP (Open Multi-Processing) threads.An example of the control.fwh file is given in Appendix <ref>.The coordinate y_i, unit outward normal n̂_i, and area dS of each subface in the FW-H surface are included in the FWHSurfaceGeo.dat file. These quantities are specified at the center of the subfacesand split in different Faces, due to OpenCFD-EC is a cell center solver for multiblock structure mesh.A detailed description of the file can be found in Appendix <ref>. With the Faces information acquired, a partition method deployed by OpenCFD-EC for block splitting is applied forload balancing as shown in Fig. <ref>. The method will first rank the Faces by their cell numbers, and then assign the Faces to the processor with the least number of cells in order. As a result, the upper limitfor the utilization of MPI processors by the code corresponds to the number of the Faces. To achieve a faster MPIacceleration result, one can divide FW-H surface as much and as equally as possible during the mesh generation stage, or segment the output FW-H dataset of the CFD solver. Besides, when using a new big.little architectureCPU of Intel, or parallelizing an old system with a new one, one can adopt a partition method that considers theperformance of the processors to accomplish optimal load balancing.Following the MPI partition, each processor will allocate memory for FW-H data at all the sampling frames according tothe assigned Faces. Then the root processor reads the observers.dat file, in which the coordinates of each observer occupy a single row. Then, they are broadcasted to every other processor.Subsequently, the FW-H dataset is read by the root processor and distributed to the corresponding processor. An illustrative description of the FW-H dataset can be found in Appendix <ref>. Finally,memory is allocated for interpolated observer time across all processors, along with final pressure signal result at the root processor. §.§ Pressure signals calculationThe second step of the code is to calculate the pressure signals at one observer, rather than computing at all observers at once for the sake of conserving memory usage. Additionally, OpenMP parallel is deployed on all processors to expedite the calculation procedure. Further details regarding the MPI and OpenMP mixing parallel implementation will be expounded in Section <ref>.The "compute R" subroutine in Fig. <ref> is responsible for calculating the effective acoustic distance based on Eqs. (<ref>) ∼ Eqs. (<ref>).The "compute Noise" subroutine in Fig. <ref> is responsible for calculating the pressure signals at each subface during respective source time, based on Eqs. (<ref>) ∼ Eqs. (<ref>). 
It is noteworthy that Q_n and L_m remain unchanged in different observers, but they are both not stored to save memory. In addition, Q̇_̇ṅ and L̇_m are computed using second-order schemes, employing a one-sided scheme for the first and last frame, while employing a central scheme for the other frames.The "compute t" subroutine in Fig. <ref> is responsible for calculating the observer time based on:t^*_start=R^*_maxM_0,t^*_end =τ_max+R^*_minM_0 ,where R^*_max and R^*_min are computed by making used of the MPIALLREDUCE function. Consequently, the observer time period, during which all subfaces collectively contribute to the observer pressure signal is t^*_end-t^*_start, as shown in Fig. <ref>. The "interp pressure signal" subroutine in Fig. <ref> is responsible for interpolating the pressure signal of each subface depending on the source time, into pressure signal depending on the observer time, with the help of cubic spline interpolation. After that, both the observer time and source time pressure signal stored at every subface will be deallocated to conserve memory as well. §.§ Data output and FinalizationThe third step of the code is to conduct surface integration across all the subfaces to obtain the pressure signal at a observer and output it in the pobserver-xxx.log (xxx stands for the observer No.) file located within the /FWHresult-mpi folder.The final step of the code is to verify whether all observers have completed their calculations, if the answer is no, the code will loop back to the second step for the subsequent observers. If the answer is yes, all the processors willcall MPIFinalize to end the code.§.§ ParallelizationOpenMP is a widely used API (application programming interface) that supports shared-memory parallelization in multi-coreand multi-processor systems. It is developed to facilitate parallel programming in C, C++, and Fortran, and can be easilydeployed without extensive modifications to the existing serial codes.OpenCFD-FWH is implemented in a hybrid parallel way that the FW-H data surface is spread to many MPI processors, and OpenMP is deployed to split the loop in the computing stage among each MPI processor. This will result in an enormous reduction of the computation time, as shown in Table <ref>. With the use of 31 nodes, each with 32 CPU cores for OpenMP parallelization, a remarkable acceleration of up to 538.5 times is achieved in comparison to the serial condition. When the number of MPI processors (only MPI parallel) and OpenMP threads (only OpenMP parallel) is almost equal, their acceleration effect is nearly the same. Additionally, Table <ref> illustrates that the predominant portion of the execution time is spent on initialization in the hybrid parallel condition. This is attributed to the fact that I/O operations can not be accelerated, as the data reading process requests sequential operations.Furthermore, the 30P30N validation case costs a maximum of 62.6 GB of RAM (Random Access Memory). Without MPI parallelization,the computational demands for larger FW-H datasets can pose significant challenges for nodes and computers with limited memory capacity. 
Hence, the MPI parallelization ensures successful execution on memory-constrained systems, or for even larger datasets that can easily consume hundreds of RAM.§ VALIDATIONStationary monopole and dipole in a uniform flow with analytic solutions, along with a 30P30N case computed by OpenCFD-EC solver are used to validate OpenCFD-FWH.§.§ Stationary monopole in a uniform flow with AoAThe complex velocity potential for a stationary monopole in a uniform flow is given by Najafi et al. <cit.> as:ϕ(x,t) =A/4π R_*exp[iω(t-R/c_0)] .where A is the amplitude, ω is the angular frequency of the monopole, and i is the imaginary unit. In contrast to Najafi et al. <cit.>, Eqs. (<ref>) and (<ref>) are used here to calculate R and R_*, respectively, taking into account the AoA effect of the uniform flow.The velocity, pressure, and density pulsations induced by the monopole are:u^'(x,t)=∇ϕ(x,t),p^'(x,t) =-ρ_0[∂/∂ t+U_01∂/∂ x_1+U_02∂/∂ x_2]ϕ,ρ^'(x,t)=p^'/c_0^2,U_01 =U_0cos(AoA), U_02=sin(AoA) . The parameters used for the monopole are given in Table <ref>. To avoid any errors introduced by the CFD solver, the FW-H dataset for the code is generated by Eqs. (<ref>) ∼ (<ref>).The permeable FW-H data surface is a sphere with a radius of 2 meters. Its center is located on the monopole. The sphere is divided into 18 segments at the polar angle direction and 36 segments at the azimuth angle direction, resulting in a total of 648 subfaces, as shown in Fig. <ref>. The observer locations are evenly distributed along a circle with a radius of 340 meters in the x-y plane, and its center is also located on the monopole. A Matlab code is written to generate 1000 sampling frames, covering a time period of 2.5 seconds, in just 26 seconds on a laptop equipped with an Intel i7-13620H CPU. Subsequently, the OpenCFD-FWH code processes the dataset in less than 2.5 seconds with 12 OpenMP threads on only one MPI processor for 20 observers. Since the sphere FW-H surface is generated as a single Face.The comparison between the exact monopole solution and the result obtained from the OpenCFD-FWH code of far-field RMS (root mean square) pressure directivity and pressure signal of the right below observer are shown in Fig. <ref> and Fig. <ref>, respectively. Very good agreements are observed between the exact solution and the code. It is worth noting that results with even smaller errors can be achieved by using finerFW-H surface mesh and higher time sampling frequencies, but for the sake of simplicity, these results are not presented here.Moreover, Fig. <ref> demonstrates that the directivity pattern of the monopole is diverted towardsthe inflow due to the convective effect. §.§ Stationary dipole in a uniform flow with AoAThe complex velocity potential for a stationary dipole, with the polar axis coinciding with the x2-axis in a uniform flow is given by:ϕ(x,t) =∂/∂ x_2{A/4πR_*. exp[iω(t-R/c_0)]}. . The velocity, pressure, and density pulsations induced by the dipole are acquired by Eqs. (<ref>) ∼ (<ref>) as well. And the parameters used for the dipole are given in Table <ref>. The FW-H data surface remains consistent with the monopole case, while the observer locations have been relocated to a radius of 50 meters in the x-y plane. Another Matlab code has been developed to generate 1000 sampling frames,covering a time period of 2 seconds. The time required for generating the FW-H dataset and post-processing it is basically the same compared with the monopole case. Fig. <ref> and Fig. 
<ref> present the result of the dipole far-field RMS pressuredirectivity and pressure signal of the right below observer, respectively. Excellent agreements are also achieved betweenthe exact solution and the code. By applying finer FW-H surface mesh and higher time sampling frequencies, results with essentially no error can be achieved. Again, for the sake of simplicity, these results are not presented here.The mean flow leads to a reorientation of the maximum RMS pressure, resulting in a larger RMS pressure in the inflow direction as shown in Fig. <ref>.All the Matlab programs used for the monopole and dipole validation cases are provided in the Tutorials folder of the OpenCFD-FWH project on GitHub. One can change the parameters in these programs to validate our code and get a better understanding of OpenCFD-FWH. §.§ 30P30N far-field noise predictionThe 30P30N configuration was developed by McDonnell Douglas (now Boeing) in the early 1990s. It has been extensively used in the study regarding the aeroacoustic characteristics of high-lift devices, especially for slat noise <cit.>.The JAXA modified version 30P30N<cit.> is utilized here to validate the code. The airfoil profile of the 30P30N configuration is shown in Fig. <ref>, with a stowed chord length of c_s=0.4572 m. Both the deflection angles of the slat and flap are 30^∘, with the chord lengths of the slat and flap being 0.15c_s and 0.3c_s, respectively.IDDES based on SA turbulence model is carried out on the OpenCFD-EC solver. The inflow Mach number is 0.17, with an AoA of 5.5^∘.The Reynolds number based on the stowed chord length is 1.71× 10^6. Fig. <ref> depicts a sketch of the computational domain. It extends 50c_s in theforward and vertical directions and 75c_s in the rear direction. Its length in the spanwise direction equals to 1/9c_s, following the recommendation in the BANC-III workshop<cit.> (the 3rd AIAA Workshop on Benchmark Problems for Airframe Noise Computations). A periodic boundary condition is applied in the spanwise direction. The permeable FW-H data surface is indicated by the blue line in Fig. <ref>, which is one stowed chord length away from the 30P30N airfoil and stretches 5c_s in the wake flow direction. No endcap is used to avoid the spurious (numerical) noise created by wake flows crossing the permeable FW-H data surface <cit.>. The spanwise length of the FW-H surface is identical to the computational domain. Besides, a sponge layer is deployed at the boundaries of the computational domain, where the viscosity is adjusted to 100 times the value used in the physical domain to mitigate reflections at the domain boundaries, in accordance with the approach taken by Himeno et al. <cit.>. A multiblock structure mesh with C-type topological is created, yielding a total cell count exceeding 43 million. Each plane of the 2.5D mesh comprises approximately 0.25 million cells, and the entire mesh is composed of 175 planes with equal spacing in the spanwise direction. Close-up views of the mesh around FW-H surface and slat cove area are shown in Fig. <ref>. Additionally, the average value of the dimensionless wall distance y+ of the mesh is below unity. The well-known Roe scheme is employed to decompose the inviscid flux, with the third-order MUSCL scheme for variable reconstruction. The implicit dual-time LU-SGS method is applied for time advancement, with a time step of 2× 10^-7s. Five inner subiterations are used, with local time-stepping approach to accelerate the convergency process. 
And the FW-H data sampling interval is 1× 10^-5s. An RANS simulation with SA turbulence model is carried out to initialize the flow field. Subsequently, approximately 0.1s of physical time is calculated by IDDES-SA, with 0.06534s available for data processing after removing the initial transient. A time average C_L of 2.6214 is obtained, with a difference less than 2.3% compared to the average outcomes of the BANC-III<cit.> 2.6821. Fig. <ref> presents the time average C_p distribution obtained over the last 0.036 seconds. While there is a slight underprediction of negative pressure on the suction side, a reasonably good agreement can be seen, especially in the slat cove region, when compared with the JAXA Kevlar experiment<cit.> under 7^∘ AoA.The scaling method used by Avallone et al.<cit.> is utilized to take into account the difference between the spanwise acoustic integration size of the numerical simulation and experiment:PSD_corr=PSD+10log_10(b_exp/b_m),where b_exp and b_m are equal to 650 mm and 50.8 mm, respectively.The pressure signal obtained by the OpenCFD-FWH code is segmented into blocks with a 50% overlap, and a Hanning window is employed. A total of 6535 sampling frames are input to the code, and the running time with differentparallel strategies can be found in Table <ref>. As shown in Fig. <ref>, the PSD(Power Spectral Density) result of the code is in good consistency with the JAXA hard-wall experiment at frequencies below 10 kHz. Both the results exhibit a slightly higher noise level in the low-frequency range compared to the JAXA kevlar-wall experiment. Furthermore, the humping noise originating from the high-frequency vortex shedding from the slat TE of the reduced-scale wind tunnel model is absent in the FW-H result. This is attributed to the relatively coarse mesh at the slat TE, which is unable to capture the high-frequency vortex shedding. Overall, the result validates that the far-field noise can be accurately evaluated by the OpenCFD-FWH code. § CONCLUSIONSThis paper presents the methodology, parallel implementation and validation of a post-processing code: OpenCFD-FWH, designed specifically for predicting far-field noise in wind tunnel cases, encompassing nearly all scenarios encountered in aircraft CFD cases. It is developed to use the flow field results of our OpenCFD-EC solver as input. However, it can be readily deployed for use with other solvers by modifying the data reading part of the code or converting the FW-H dataset to the required format. Moreover, the deployment of the code only required an MPI library and a Fortran 90 compilation environment, without the need to install OpenCFD-EC or other affiliated libraries.The code is based on the integration formulation of a nondimensional FW-H equation for permeable surface with convectiveand AoA effects corrected by Garrick Triangle, and 2D plane coordinate transformation, respectively. This formulation will increase the computational efficiency compared to the original one. Additionally, the nondimensionalization of the FW-H equation is the same as the nondimensionalization of the Navier-Stokes equations in the OpenCFD-EC solver.MPI-OpenMP mixing parallelization is implemented to accelerate the post-processing process and reduce memory usage on a single node/computer when deploying the code on distributed computing systems. When dealing with very large datasets, as is common in aeroacoustic noise research related to landing gear or high-lift devices with LES, it can avoid an out-of-memory situation. 
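Schematically, the hybrid pattern amounts to a per-rank partial surface integral followed by a global reduction. A minimal Python/mpi4py stand-in for the Fortran implementation is sketched below (the per-subface contributions are dummy random numbers, used only to illustrate the communication pattern):

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# dummy workload standing in for the Faces assigned to this rank by the partitioner;
# in the real code each entry would be a subface term of the p'_T and p'_L integrals
rng = np.random.default_rng(rank)
my_terms = rng.standard_normal(1000)

partial = my_terms.sum()                            # this loop is what the OpenMP threads split
total = comm.reduce(partial, op=MPI.SUM, root=0)    # surface integral = sum over all ranks
if rank == 0:
    print("assembled signal sample from", size, "ranks:", total)

Such a sketch would be launched with, e.g., mpirun -n 4 python sketch.py.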
On the CAS SunRising platform, by utilizing 31 nodes, each with 32 OpenMP threads, the computing time of the code is 538.5 times faster compared to the serial implementation. This demonstrated the high operational efficiency of OpenCFD-FWH.Three validation cases are considered in this paper. The monopole and dipole cases are compared with exact analytical solutions, and excellent agreements are achieved. The 30P30N configuration is used in the third case, with the flow field variable produced by IDDES-SA simulation via the OpenCFD-EC solver as input. For frequencies below 10 kHz, the far-field PSD result demonstrates relatively good agreement with JAXA experiments, particularly with the JAXA hard-wall experiment. However, the result of the code does not present the high-frequency hump observed in the experiments. This is due to the inability of the coarse mesh to capture the high-frequency vortex shedding at the slat TE. Overall, the code is deemed validated.The code is openly accessible on GitHub, along with the Matlab codes for the monopole and dipole validation cases to facilitate its utilization by readers.This work was supported by the National Natural Science Foundation of China (Grant No. 12272024), National Key Research and Development Program of China (Grant Nos. 2020YFA0711800, 2019YFA0405302, 2019YFA0405300) and NSFC Projects (Grant Nos. 12072349, 12232018, 12202457), National Numerical Windtunnel Project, Science Challenge Project(Grant No. TZ2016001), and Strategic Priority Research Program of Chinese Academy of Sciences (Grant Nos. XDCO1000000 and XDB0500301).The authors thank CNIC (Computer Network Information Center), CAS (Chinese Academy of Sciences) for providing computer time. § AUTHOR DECLARATIONS §.§ Conflict of InterestThe authors have no conflicts to disclose. §.§ Author ContributionsKeli Zhang: Conceptualization (lead); Data curation (lead); Investigation (lead); Methodology (lead); Coding (lead); Writing - original draft (lead). Changping Yu: Funding acquisition (equal); Supervision (lead); Writing - original draft (equal); Writing - review & editing (equal). Peiqing Liu: Funding acquisition (equal); Supervision (equal); Writing - review & editing (equal). Xinliang Li: Funding acquisition (equal); Supervision (supporting); Coding (supporting); Writing - review & editing (equal).§ DATA AVAILABILITY STATEMENTThe data that support the findings of this study are available from the corresponding author upon reasonable request.§ FILE STRUCTURES FOR OPENCFD-FWH §.§ An example of the control.fwh fileOpenCFD-FWH utilized namelist method to read in control.fwh file. An example of the file is presented in Fig. <ref>, with the values therein representing the default settings in OpenCFD-FWH. Kstepstart and Kstepend determine the start and final steps for FW-H post-processing, respectively. With the step interval: deltastep, OpenCFD-FWH can calculate the number of sampling frames. FWH data Format decided whether the FW-H dataset is in binary or ASCII format (0 for binary and 1 for ASCII).§.§ Structure of the FWHSurfaceGeo.dat fileFWHSurfaceGeo.dat file is in ASCII format in convenient for data checking. It follows a structure similar to the Generic boundary description .inp file. It starts with a first line of text: variables=x,y,z,n1,n2,n3,dS, as illustrated in Fig. <ref>. The second line contains a single number indicating the total number of Faces. 
Subsequent lines give the nx, ny, nz of one Face, followed by the x, y, z, n1, n2, n3, dS values of its subfaces; the pattern repeats until the last Face. Note that each Face is described in a block way; consequently, one of nx, ny, nz will be 1.§.§ Structure of the FW-H datasetThe FW-H dataset used for OpenCFD-FWH comprises multiple FWH-xxxxxxxx.dat files, where xxxxxxxx denotes the iteration step. The files can be in either binary or ASCII format, with the binary format being recommended for its significantly smaller data size. A schematic of the file structure is provided in Fig. <ref>. § REFERENCES | http://arxiv.org/abs/2312.16263v1 | {
"authors": [
"Keli Zhang",
"Changping Yu",
"Peiqing Liu",
"Xinliang Li"
],
"categories": [
"physics.flu-dyn"
],
"primary_category": "physics.flu-dyn",
"published": "20231226084857",
"title": "An MPI-OpenMP mixing parallel open source FW-H code for aeroacoustics calculation"
} |
[email protected] Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA DEVCOM Army Research Laboratory, Adelphi, MD 20783-1193, USA Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA Centre Suisse d’Electronique et de Microtechnique (CSEM), 2000 Neuchâtel, Switzerland Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA Centre Suisse d’Electronique et de Microtechnique (CSEM), 2000 Neuchâtel, SwitzerlandCentre Suisse d’Electronique et de Microtechnique (CSEM), 2000 Neuchâtel, Switzerland Centre Suisse d’Electronique et de Microtechnique (CSEM), 2000 Neuchâtel, Switzerland Centre Suisse d’Electronique et de Microtechnique (CSEM), 2000 Neuchâtel, Switzerland Department of Electrical and Computer Engineering, University of Illinois - Chicago, Chicago, IL 60607, USA DEVCOM Army Research Laboratory, Adelphi, MD 20783-1193, USA Tulane University, New Orleans, LA 70118, USA DEVCOM Army Research Laboratory, Adelphi, MD 20783-1193, USA Raytheon BBN Technologies, Cambridge, MA 02138, USA Centre Suisse d’Electronique et de Microtechnique (CSEM), 2000 Neuchâtel, Switzerland Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA [email protected] Department of Electrical and Photonics Engineering, Technical University of Denmark, 2800 Lyngby, DenmarkThin-Film Lithium Niobate (TFLN) is an emerging integrated photonic platform showing great promise due to its large second-order nonlinearity at microwave and optical frequencies <cit.>, cryogenic compatibility <cit.>, large piezoelectric response <cit.>, and low optical loss at visible <cit.> and near-infrared <cit.> wavelengths. These properties enabled Mach-Zehnder interferometer-based devices to demonstrate amplitude- <cit.> and in-phase/quadrature (IQ) <cit.> modulation at voltage levels compatible with complementary metal-oxide-semiconductor (CMOS) electronics. Maintaining low-voltage operation requires centimeter-scale device lengths, making it challenging to realize the large-scale circuits required by ever-increasing bandwidth demands in data communications <cit.>. Reduced device sizes reaching the 10 m scale are possible with photonic crystal (PhC) cavities. So far, their operation has been limited to modulation of amplitudes and required circulators <cit.> or lacked cascadability <cit.>. Here, we demonstrate a compact IQ modulator using two PhC cavities operating as phase shifters in a Fabry-Perot-enhanced Michelson interferometer configuration <cit.>. It supports cascadable <cit.> amplitude and phase modulation at GHz bandwidths with CMOS-compatible voltages. While the bandwidth limitation of resonant devices is often considered detrimental, their compactness enables dense co-integration with CMOS electronics where clock-rate-level operation (few GHz) removes power-hungry electrical time-multiplexing <cit.>. 
Recent demonstrations of chip-scale transceivers with dense wavelength-division multiplexing <cit.> could be monolithically implemented and driven toward ultimate information densities using TFLN electro-optic frequency combs <cit.> and our PhC IQ modulators.

Photonic crystal cavity IQ modulators in thin-film lithium niobate for coherent communications Mikkel Heuck January 14, 2024 ==============================================================================================

§ INTRODUCTION

Modern telecommunications rely on electro-optic (EO) modulators to convert information between electrical and optical signals <cit.>. The exponentially increasing demand for information capacity <cit.> and growing interest in networking superconducting quantum circuits <cit.> motivate the development of small-footprint EO modulators with low power consumption that can be densely integrated with electronic processors <cit.> while operating near the fundamental limits given by the interaction between microwave and optical photons <cit.>. Coherent communications have proven instrumental in leveraging existing technology for high-bandwidth internet protocol optical routing <cit.> and enhancing throughput in long-haul fiber networks <cit.>, while promising similar features for data center interconnects and edge computing <cit.>. As illustrated in Fig. 1a, these advances in coherent communications hinge on in-phase/quadrature (IQ) modulators, which are able to control both the amplitude and phase of optical fields and are currently sustained by commercially available InP-based devices <cit.>. Further technological requirements have driven rapid advances in photonic integrated circuits (PICs) with EO modulators <cit.> based on interactions including free-carrier dispersion <cit.>, the quantum-confined Stark effect <cit.>, and the Pockels effect <cit.>. Advances in silicon photonics have notably enabled a new generation of coherent optical engines <cit.> along with more compact implementations relying on microring phase shifters <cit.>. Such free-carrier-based devices face fundamental trade-offs between insertion loss and modulation efficiency that ultimately cap their performance. The pure phase response of Pockels materials can overcome these challenges. For example, thin-film lithium niobate (TFLN) is a promising PIC platform due to its wide transparency window, large Pockels coefficients r_ij, and low waveguide loss <cit.>. When arranged in a traveling-wave Mach-Zehnder configuration, TFLN modulators achieve modulation rates exceeding 100 GHz <cit.> and can naturally integrate into IQ modulator architectures <cit.>. However, as emphasized in Fig. 1a, their length needs to extend over several millimeters to reach sufficient microwave-to-optical interaction strengths, which could prevent their use in applications requiring high co-integration densities. The modulator size has been reduced using structures such as folded Michelson interferometers <cit.> and microring-assisted Mach-Zehnder interferometers <cit.>. Dielectric photonic crystal (PhC) cavities provide wavelength-scale confinement without compromising insertion loss. As shown in a recent demonstration of on-off keying in TFLN PhC cavities <cit.>, this resonant modulation scheme preserves the alignment between LN's Pockels tensor and the modulating electric field over a device with an ultra-small capacitance and an optical mode volume as low as 0.58 µm^3.
However, it has remained an open challenge to develop devices with 2 degrees of freedom – the minimum needed for arbitrary modulation of the two field quadratures. Here, we solve this challenge by introducing an ultrasmall TFLN PhC IQ modulator, taking advantage of wavelength-scale confinement through PhC cavities in an interferometric configuration. We demonstrate four-symbol quadrature amplitude modulation (4-QAM) with a complementary metal-oxide-semiconductor (CMOS) compatible peak-to-peak driving voltage of 2 V. The modulation rate of ∼1 GHz is limited by the cavity quality factor (Q) of ∼70,000, and our electrode configuration results in a tuning efficiency of ∼1 GHz/V. Through iterative co-design and testing of cavity Bragg mirrors and stable fabrication process control in wafer-scale TFLN manufacturing, we achieve a fabrication yield exceeding 64% for PhCs with Q values above 20,000 across devices with design parameters specified in the Supplementary.

§ RESULTS

The conventional ring-resonator-enhanced Mach-Zehnder architecture <cit.> does not carry over to PhC cavities, as they couple the incident field to forward- and backward-propagating waves. We therefore developed a different design: a cavity-assisted on-chip Michelson interferometer, inspired by laser interferometric gravitational wave detectors <cit.> that use two arms of one-sided Fabry-Perot cavities. In our design, a directional coupler with a ζ^2:1-ζ^2 splitting ratio distributes an input optical signal to two one-sided Fabry-Perot PhC cavities where light couples to the waveguides at rates κ_c,1 and κ_c,2, see Fig. 1b. Pairs of electrodes apply electric fields across the TFLN cavities by means of the voltages V_1 and V_2. To first order in V_n, the cavity resonance frequencies ω_n shift by (see Supplementary Section II)

Δω_n = ω_n - ω_n^(0) = (∂ω_n/∂V) V_n ≡ ∂_Vω_n V_n, n ∈ {1,2},

where ω_n^(0) is the resonance at zero voltage, and ∂_Vω_n is the tuning efficiency. Cavity loss is described by the intrinsic decay rates κ_i,1 and κ_i,2. After reflection from the cavities, the modulated signals travel back across the directional coupler and interfere. The input-output transmission is (see Supplementary Section III)

t_IQ(ω) = ζ√(1-ζ^2)(r_1(ω)e^iΔϕ + r_2(ω)),

r_n(ω) = 1 - 2κ_c,n/(i2δ_n + κ_c,n + κ_i,n),

where r_n(ω) is the cavity reflection coefficient, δ_n ≡ ω_n - ω is the detuning from the input carrier frequency, κ_n ≡ κ_c,n + κ_i,n is the total linewidth, and Δϕ describes the relative phase between the interferometer arms. The transmission t_IQ attains any complex value within the unit circle in the limit of highly over-coupled cavities (κ_c,n ≫ κ_i,n) with a 50:50 directional coupler. The condition for complete extinction (t_IQ = 0) is independent of the splitting ratio ζ, which is not the case for Mach-Zehnder implementations containing two distinct couplers. Figure <ref>a shows a micrograph of our fabricated TFLN IQ modulator. The top-right inset sketches the PhC cavity to illustrate its formation by modulating the width of a waveguide. As detailed in Supplementary Section IV, we vary the duty cycle parabolically over N_cavity periods to produce a high-Q resonance <cit.>. Placing fewer mirror periods on the side facing the waveguide (N_mirr,R < N_mirr,L) achieves a one-sided configuration.
A smooth transition to the propagating waveguide mode minimizes out-of-plane scattering by linearly reducing the width modulation to zero over N_taper periods. Figure <ref>b plots the total (purple), coupling (blue), and intrinsic (red) quality factors, calculated using finite-difference time-domain (FDTD) simulations, as a function of the number of mirror periods N_mirr for a two-sided cavity (N_mirr,L = N_mirr,R = N_mirr). It highlights how Q is easily adjusted to match a targeted modulation speed without sacrificing the intrinsic quality factor Q_i,n ≡ ω_n/κ_i,n. The corresponding simulated transmission spectra are shown in Fig. 2c, and the measured spectrum from a two-sided reference cavity is plotted in Fig. 2d. The good agreement between measurement and simulation (blue curves in Figs. <ref>c,d) results from extracting geometrical information via scanning-electron-microscope images and additional reference structures (see Supplementary Section V for details). We calculate a tuning efficiency of ∂_Vω_n = 2π × 1.0 GHz/V via first-order perturbation theory based on the overlap between the optical cavity mode and the field from the electrodes (see Supplementary Section VI). Figure <ref>e shows how the electrode field (streamlines) and the optical field (blue contour) are parallel to maximize their overlap. Experimentally, we determine the tuning efficiency by measuring the transmission at different voltage settings. Figure <ref>a plots two spectra with the resonances aligned (red) or separated (blue). Maps of transmission versus frequency and voltage across one of the cavities are shown in Figs. <ref>b,c. The transmission dips caused by cavity resonances are observed to shift linearly in response to the applied voltage. As described in Supplementary Section VII, we fit the data from Figs. <ref>b,c to (<ref>), thereby obtaining the model parameters listed in the fitted-parameters table. Figure <ref>a plots the small-signal modulator response when each PhC cavity is driven by a sinusoidal voltage signal. We choose the DC voltage offsets and laser wavelength to maximize the signal-to-noise ratio of the transmitted light (see inset). Each cavity has a 3 dB cutoff around 1.5 GHz, which matches well with the fitted decay rates listed in the fitted-parameters table. To better understand how to set DC bias voltages for QAM modulation, we measure the transmission as a function of both voltages at a fixed laser wavelength. The result is shown in Fig. 4b, and Fig. 4c plots the simulated transmission map from (<ref>) using the parameters in the fitted-parameters table. Destructive interference between the signals reflected from the two PhC cavities gives rise to the local minima exhibiting more than 30 dB extinction. The good agreement between measurement and modeling allows us to use the transmission phase arg{t_IQ(V_1,V_2)} calculated from (<ref>) to set the DC bias point at (V_1,V_2) = (4.4 V, -4.15 V) while applying a radio-frequency (RF) modulation of ±1 V to each cavity. Figure <ref>a plots this phase map, and the RF voltages of a pseudo-random bit sequence with 2^7-1 symbols are plotted with semi-transparent white lines. Notice the large phase variations in the region between the two singularity points corresponding to the transmission minima in Fig. 4c. Separated minima are only possible when the cavities are sufficiently close to being over-coupled. Figure <ref>b plots the voltages of a subset of the applied bit sequence for an example with a 20 MHz repetition rate.
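As a rough numerical illustration of how such maps follow from the model, the sketch below evaluates the transmission and phase over a bias-voltage grid using the coupled-mode expressions and the linear tuning relation above; all rates, detunings, and the splitting ratio are illustrative assumptions, not the fitted values from the table.

```python
import numpy as np

# Illustrative parameters only -- assumptions, not the fitted values from the table.
two_pi = 2 * np.pi
kappa_c = two_pi * np.array([2.0e9, 2.0e9])   # coupling rates kappa_{c,n} (rad/s), assumed
kappa_i = two_pi * np.array([0.5e9, 0.5e9])   # intrinsic rates kappa_{i,n} (rad/s), assumed
dw_dV   = two_pi * 1.0e9                      # tuning efficiency ~2*pi*1 GHz/V (from the text)
zeta2, dphi = 0.5, 0.0                        # 50:50 splitter and arm phase, assumed

def t_iq(delta0, V1, V2):
    """Transmission t_IQ; the resonances shift linearly with the applied voltages."""
    d1 = delta0[0] + dw_dV * V1               # detuning delta_1 = omega_1 - omega
    d2 = delta0[1] + dw_dV * V2
    r1 = 1 - 2 * kappa_c[0] / (1j * 2 * d1 + kappa_c[0] + kappa_i[0])
    r2 = 1 - 2 * kappa_c[1] / (1j * 2 * d2 + kappa_c[1] + kappa_i[1])
    return np.sqrt(zeta2 * (1 - zeta2)) * (r1 * np.exp(1j * dphi) + r2)

# Bias-voltage maps analogous to the measured and simulated maps discussed above.
V1, V2 = np.meshgrid(np.linspace(-6, 6, 201), np.linspace(-6, 6, 201))
delta0 = two_pi * np.array([1.0e9, -1.0e9])   # zero-voltage carrier detunings, assumed
t = t_iq(delta0, V1, V2)
power_dB = 20 * np.log10(np.abs(t) + 1e-12)   # magnitude map with deep interference minima
phase = np.angle(t)                           # phase map used to choose the 4-QAM bias point
```

With over-coupled cavities, scanning such a map reveals the two transmission singularities between which the phase winds rapidly, which is how a bias point for 4-QAM can be selected.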
We collect the IQ-modulated signal by a lensed fiber and detect it using a silicon-photonics integrated IQ receiver (see Supplementary Section VIII). In Fig. 5c, we plot the measured raw coherent transmission trace of a continuous wave (CW) input field modulated over a time span of 6.5 µs. Sampling the clustered points in Fig. 5c allows the reconstruction of the modulated field's constellation diagram. We opt for a 1 GHz sampling frequency instead of the repetition rate of the driving pulses to consider a consistent number of samples across the full set of modulation frequencies. Figure <ref>d provides such diagrams for modulation frequencies of 20 MHz, 100 MHz, 250 MHz, 500 MHz, and 1 GHz. Our results feature good data clustering at four distinct symbols exhibiting error vector magnitudes below 0.27 (see Methods) up to driving frequencies approaching the 3 dB cutoff of our IQ modulator. As discussed in Supplementary Section IX, optimized symbol separation is possible with more advanced encoding to account for the nontrivial dependence of t_IQ on V_1 and V_2. Such optimization procedures can also determine minimum device metrics for running coherent modulation processes. For example, given the insertion loss of our device, we require PhC cavity quality factors of at least Q ∼ 2×10^4 to run the 4-QAM experiments shown in this work. This condition determined our reported cavity fabrication yield of 64%.

§ DISCUSSION

The compact size of our IQ modulator allows its energy consumption to be limited by its capacitance. This is a key requirement for low-energy information processing <cit.> based on attojoule optoelectronics that could benefit emerging applications in photonics-based edge computing and inference <cit.>. As discussed in Supplementary Section IV, we estimate an average value of 25.8 fJ per bit, though it could be reduced below 1 fJ/bit by appropriate design modifications <cit.>. Compact and energy-efficient modulators reopen the trade space spanned by multiplexing in the temporal, spatial, and spectral domains <cit.>. The moderate bandwidth of energy-efficient high-Q resonant modulators need not be a drawback since operating at a few GHz avoids power-hungry tasks such as electronic serialization <cit.> as well as clock and data recovery. For instance, a recent demonstration used silicon microring resonators for amplitude modulation of 32 wavelength channels generated from a single laser using a silicon-nitride Kerr comb <cit.>. Similar TFLN implementations could monolithically integrate electro-optic combs <cit.> and our PhC IQ modulators to reduce footprint further and eliminate chip-to-chip coupling loss. Importantly, our PhC IQ modulators are cascadable like rings <cit.> since the transmission approaches 1 away from the resonances when Δϕ = 0 and ζ^2 = 1/2. The TFLN platform also benefits from recently introduced components, such as on-chip lasers <cit.>, amplifiers <cit.>, and isolators <cit.>. Compact multiport switches were also proposed based on one-sided PhC cavity phase modulators <cit.>. Reducing the interaction volume of electro-optic coupling between optical and RF fields and TFLN's cryogenic compatibility <cit.> introduce new prospects for quantum computing and networking, especially between microwave and optical single photons. Current implementations rely on coupled racetrack cavities <cit.> with footprints that could be reduced by several orders of magnitude by switching to PhC cavities.
Electro-optic control over tightly confined optical cavity modes was proposed for nonlinear quantum information processing <cit.> and would similarly benefit systems with integrated quantum emitters <cit.>. Future work should focus on stabilizing the optical response of our devices. Such considerations include minimizing transmission drifts due to photorefractive effects, which are known to be significant in TFLN cavities <cit.>. Mitigation strategies include cladding removal <cit.>, elevated operating temperature <cit.>, and doping <cit.>. For classical interconnect applications with significant variations in operating temperature, feedback control loops will be necessary <cit.>. Machine learning-assisted state estimation <cit.> could play a crucial role in stabilizing the modulator's transmission and replacing conventional discrete signal processing methods to address channel mixing in coherent communications. Future investigations should additionally include energy reductions by replacing ohmic heaters <cit.> with non-volatile tuning mechanisms, such as phase-change materials <cit.>, electro-mechanical effects <cit.>, or laser annealing of oxides <cit.>. In summary, we introduced an ultra-compact PIC-based electro-optic IQ modulator. By incorporating a pair of tunable PhC cavities in TFLN integrated photonics, we demonstrated GHz-rate coherent modulation of an optical field using CMOS-compatible driving voltages and a footprint of 40-by-200 µm^2. Further size reduction is straightforward <cit.>, which will pave the way towards densely co-integrated CMOS electronics and optical IQ modulators for large-scale EO modulation.

§ METHODS

Device Fabrication We fabricated our chip in one of CSEM's TFLN multi-project fabrication runs based on a 600 nm thick x-cut TFLN-on-insulator wafer from NanoLN. We etch the LN waveguides and PhCs using an HSQ mask patterned with electron-beam lithography. The etch is configured to remove 400 nm of LN and result in waveguides with a 35° sidewall angle with respect to the normal of the chip. Within the gaps of the PhC's Bragg mirrors, SEM imaging and modeling of measured transmission data reveal that the sidewall angle is closer to 17° (see Supplementary Section V). We pattern 500 nm thick gold electrodes with a liftoff process. Waveguides are designed to have a width of 800 nm that tapers out to 900 nm once they reach the PhC region of the device. We use a 660 nm gap in our modulator's directional coupler. PhC Design Parameters We set the Bragg period in our IQ modulator's PhC cavities to 426 nm and the number of Bragg periods in the input mirror to N_mirr,R=30. The duty cycle of the Bragg mirrors is 68% and tapers up to 83% at the cavity center. We provide further details related to this tapering in Supplementary Section IV. For the experimental transmission measurements of the two-sided cavity shown in Fig. <ref>f, we show the results of the fabricated cavity with parameters most similar to our IQ modulator device. Here, the number of mirror periods is N_mirr,L=N_mirr,R=40 and the duty cycle of the cavity region is 80%, while the Bragg period and the duty cycle of the mirrors are the same. Simulation Parameters As specified by the fabrication process, our simulations assume a 600 nm thick TFLN membrane with a 400 nm ridge and a sidewall angle of 35° attributed to the sides of the waveguide. We set the sidewall angles in the gaps formed by the Bragg structure to 17° as approximated from SEM imaging and modeling.
We provide further details on how these geometric parameters affect the transmission of the cavities in the Supplementary. We performed all finite-difference time-domain (FDTD) simulations provided in this work with Ansys's Lumerical tools. Bandgap wavelengths of infinite Bragg mirrors were simulated using MIT Photonic Bands (MPB). We performed all finite element method (FEM) simulations with COMSOL Multiphysics. Error Vector Magnitude Calculation We rely on the following definition of the error vector magnitude (EVM) for each symbol of a constellation diagram:

EVM = √( (1/N) ∑_n=0^N-1 [ ((i_n - i_0)^2 + (q_n - q_0)^2) / (i_0^2 + q_0^2) ] ),

where N is the number of acquired samples attributed to a symbol, (i_n, q_n) corresponds to the measured quadratures of the samples, and (i_0, q_0) are the expected quadrature values of the symbol. The reported values attributed to a single constellation diagram correspond to the average EVMs across all of the diagram's symbols. Acknowledgements H. L. acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), the Army Research Laboratory (Awards W911NF2120099 and W911NF2220127), and the QISE-NET program of the NSF (NSF award DMR-1747426). M. H. acknowledges funding from Villum Fonden (QNET-NODES grant no. 37417). The authors acknowledge Ryan Hamerly, Cole Brabec, Saumil Bandyopadhyay, Jane E. Heyes, Mingxiao Li, and Usman A. Javid for useful discussions. | http://arxiv.org/abs/2312.16746v1 | {
"authors": [
"Hugo Larocque",
"Dashiell L. P. Vitullo",
"Alexander Sludds",
"Hamed Sattari",
"Ian Christen",
"Gregory Choong",
"Ivan Prieto",
"Jacopo Leo",
"Homa Zarebidaki",
"Sanjaya Lohani",
"Brian T. Kirby",
"Öney O. Soykal",
"Moe Soltani",
"Amir H. Ghadimi",
"Dirk Englund",
"Mikkel Heuck"
],
"categories": [
"physics.optics",
"physics.app-ph"
],
"primary_category": "physics.optics",
"published": "20231227234103",
"title": "Photonic crystal cavity IQ modulators in thin-film lithium niobate for coherent communications"
} |
| http://arxiv.org/abs/2312.16286v1 | {
"authors": [
"Roberta Angius",
"Andriana Makridou",
"Angel M. Uranga"
],
"categories": [
"hep-th"
],
"primary_category": "hep-th",
"published": "20231226190004",
"title": "Intersecting End of the World Branes"
} |
Goal-Oriented Communication, Estimation, and Control over Bidirectional Wireless Links Jie Cao, Member, IEEE, Ernest Kurniawan, Senior Member, IEEE, Amnart Boonkajay, Member, IEEE, Nikolaos Pappas, Senior Member, IEEE, Sumei Sun, Fellow, IEEE, Petar Popovski, Fellow, IEEE Jie Cao is with the School of Electronic and Information Engineering, Harbin Institute of Technology, Shenzhen 518055, China (Corresponding author: Jie Cao, e-mail: [email protected]). Ernest Kurniawan, Amnart Boonkajay, and Sumei Sun are with the Institute for Infocomm Research, Agency for Science, Technology and Research, Singapore 138632. Nikolaos Pappas is with the Department of Computer and Information Science, Linköping University, 58183 Linköping, Sweden. Petar Popovski is with the Department of Electronic Systems, Aalborg University, Denmark. Part of this work has been accepted for presentation at the 2023 IEEE Globecom<cit.>. January 14, 2024 ====================================================================================================== We consider a wireless networked control system (WNCS) with bidirectional imperfect links for real-time applications such as smart grids. To maintain the stability of the WNCS, captured by the probability that the plant state violates preset values, at minimal cost, heterogeneous physical processes are monitored by multiple sensors. This status information, such as the dynamic plant state and Markov process-based context information, is then received/estimated by the controller for remote control. However, scheduling multiple sensors and designing the controller with limited resources is challenging due to their coupling, delay, and transmission loss. We formulate a Constrained Markov Decision Process (CMDP) problem to minimize the violation probability under cost constraints. We reveal the relationship between the goal and different updating actions by analyzing the significance of information, which incorporates goal-related usefulness and contextual importance. Subsequently, a goal-oriented deterministic scheduling policy is proposed. Two sensing-assisted control strategies and a control-aware estimation policy are proposed to improve the violation probability-cost tradeoff, integrated with the scheduling policy to form a goal-oriented co-design framework. Additionally, we explore retransmission in the downlink and qualitatively analyze the scenarios in which it is preferable. Simulation results demonstrate that the proposed goal-oriented co-design policy outperforms previous work in simultaneously reducing violation probability and cost.
Goal-oriented and semantic communications, Wireless Networked Control Systems, Age of Information

§ INTRODUCTION

Wireless networked control systems (WNCSs) are receiving increasing attention and are expected to propel the development of emerging cyber-physical and real-time applications such as smart grids, factory automation, and autonomous robots<cit.>. As shown in Fig. 1, a typical WNCS comprises a plant with dynamic states, multiple observation sensors, a remote controller, and a set of actuators. Due to the coupling of these elements, the joint design of sensing, communication, and control is challenging. To accomplish specific tasks (e.g., accurate tracking or stable control) in WNCSs<cit.>, information distilled from observations and control commands is required to be fresh, accurate, and useful to the corresponding goals. However, heterogeneous sensors observing different physical processes contain information with distinct significance, affecting the goal in different manners<cit.>. For example, the two sensors in Fig. 1 capture the state of a dynamic physical process (e.g., plant state) and a random process (e.g., environment change), respectively. These heterogeneous observations need to be transmitted in a timely manner for calculating control commands to ensure system stability. However, each observation has a different and non-trivial impact on this goal. Therefore, it is challenging to schedule multiple packets efficiently over a wireless channel with limited resources<cit.>. The controller is also expected to efficiently transmit control commands to the actuator based on the received packets, thereby accomplishing the goal. These motivate the exploration of the non-trivial relationship between the goal and heterogeneous observations/control commands and the development of goal-oriented co-design in WNCSs. To evaluate the importance and significance of transmitted packets, various goal-oriented performance metrics such as the age of information (AoI)<cit.> and its variants<cit.> have been proposed. Based on these metrics, several sampling and scheduling policies have been explored to improve transmission efficiency for communication resource-limited single-source systems <cit.>. Furthermore, communication and control co-design has been investigated without considering the goal of the WNCS and its interplay with communication and control actions<cit.>. Moreover, the coupling of sensing, communication, and control has yet to be explored in depth in the WNCS<cit.>. Therefore, further investigation is required for goal-oriented co-design in WNCSs. Motivated by the above open issues, we consider a WNCS with multiple sensors to monitor heterogeneous physical processes and a remote controller to maintain the plant state over bidirectional imperfect links. To ensure the stability of WNCSs at minimal cost, we aim to reduce the number of control overshoots, with the consideration of heterogeneous action costs. Hence, the violation probability of the plant state against the preset threshold is adopted as the primary KPI. Then, the scheduler and controller are designed to address the following questions: What is the relationship between multiple observations/control packets with different significance and the specific goal? How do we jointly design the sensing, scheduling, and control strategies with limited resources to improve the tradeoff between violation probability and cost? The main contributions are as follows.
* We first reveal the relationship between the WNCS goal and the actions of the scheduler/controller in a WNCS with bidirectional imperfect links. Goal-related dynamic state and Markov process-based context information are monitored simultaneously. Furthermore, the event-triggered controller is adopted to reduce the control cost. For the goal of WNCSs, we investigate the significance of information, which incorporates goal-oriented usefulness and contextual importance. However, the existing works either ignore the significance of the information concerning the goal and context<cit.> or focus on the case of a single source with one-way communication only<cit.>. * CMDP-based scheduling and control problems are formulated to ensure the stability of the WNCS with a given cost, in which the state violation probability is minimized and heterogeneous action costs are considered. The scheduler decides which sensor is scheduled or remains idle at each time slot based on the goal. The event-triggered controller decides whether to control based on the received plant state and context information. Nevertheless, only the scheduling scheme is proposed in the existing AoI-oriented system <cit.>. In addition, the consideration of the goal in the previous co-design work is limited <cit.>. * A goal-oriented estimation, scheduling, and control co-design policy is proposed to improve the violation probability-cost tradeoff. The formulated CMDP is transformed into an MDP based on the analyzed structural results using the Lagrangian relaxation method. Then, a deterministic scheduling policy is proposed using the relative value iteration algorithm (RVIA) and the bisection method. Furthermore, control-aware estimation, sensing-assisted control, and state-dependent retransmission policies are proposed for performance improvement. * The simulation results demonstrate that the proposed goal-oriented co-design policy can significantly reduce the violation probability and total cost compared to the existing work<cit.>. Moreover, the analysis of the impact of system parameters on the proposed algorithms in different scenarios is verified by simulation results. Notation: Scalar quantities are represented using lowercase letters. Vectors and matrices are denoted by bold lowercase and uppercase fonts, respectively. Sets are denoted using italic uppercase letters. Table I lists the notations.

§ RELATED WORK

This section reviews the related work on goal-oriented communication and co-design in WNCSs.

§.§ Goal-Oriented Communication

To support real-time applications in WNCSs, wireless network key performance indicators (KPIs) such as throughput, delay, and packet drop rate have been optimized to enable ultra-reliable and low-latency communication<cit.>. However, the conventional KPIs ignore the semantic attributes of transmitted information, which treat all the packets equally and lead to less efficiency in achieving the specific goal<cit.>. To address this, AoI has been proposed to capture information freshness, which is one of the critical semantic attributes<cit.>. Since AoI can mathematically characterize the impacts of packet loss and delay on the state estimation error<cit.>, many AoI-based sampling and scheduling policies have been proposed to facilitate remote estimation in WNCSs. In <cit.> and <cit.>, time slot-based and random access-based scheduling algorithms were proposed to minimize the average AoI, respectively.
The sampling rate, computation scheduling, and transmit power were jointly optimized for AoI minimization in WNCSs<cit.>. In <cit.>, a transmission scheduling policy was proposed to minimize the average number of transmissions subject to an average AoI constraint. The update interval was also optimized for minimizing AoI in random access-based Internet of Things (IoT) systems <cit.>. However, the aforementioned AoI-based work fails to evaluate packets based on the content and significance of information associated with the goal, even though the AoI can capture the timeliness of packets<cit.>. Therefore, various AoI variants, such as value of information<cit.>, age of incorrect information (AoII)<cit.>, urgency of information<cit.>, and age of actuation<cit.>, have been proposed to quantify the significance of information based on the communication goal, context, and semantic mismatch. In <cit.>, the weighted AoII and throughput were optimized jointly to support the coexistence of task-oriented and data-oriented communications. However, the relationship between the goal and the communication policy remains unclear. Based on the significance and effectiveness of information, a goal-oriented sampling and communication policy was proposed to reduce the real-time reconstruction error and the cost of actuation errors<cit.>. Overall, goal-oriented communication, the basis for pragmatic communication, has attracted a lot of attention<cit.>. Many semantic attributes have been introduced and adopted to design sampling and scheduling policies for goal accomplishment<cit.>. However, the goal-oriented WNCS has not been sufficiently investigated, especially the closed-loop WNCS with heterogeneous traffic. Scheduling multiple packets with different significance and contextual importance to the specific goal remains unexplored.

§.§ Communication and Control Co-Design

On the control side of the WNCS, the control law is designed to achieve specific control goals over wireless networks. For example, the steady-state error is minimized for automatic tracking, and the overshoot is minimized to avoid failure and system downtime<cit.>. To achieve specific control goals, the classical controller transmits control commands periodically, while the event-triggered controller generates control actuation reactively when the plant state violates a certain threshold<cit.>. In addition to the different types of controllers mentioned above, several control policies have been proposed to achieve the desired stability and control performance, such as proportional-integral-derivative control<cit.>, linear quadratic regulator control<cit.>, and model predictive control (MPC)<cit.>. However, network delay and loss in WNCSs may degrade the control performance and destabilize the system, which were not considered in the work above. More recently, communication and control co-design has garnered substantial attention, as it can mitigate the effects of limited throughput and poor link quality of wireless networks. In <cit.>, a dynamic transmission scheduling strategy was proposed to optimize control performance by allocating network resources based on link quality predictions. Motivated by the tradeoff between control performance and energy cost, a state-dependent scheduling algorithm was proposed to find the optimal power allocation policy<cit.>. In <cit.>, a greedy state-error-dependent scheduling policy was proposed to achieve a minimum average linear quadratic cost.
To save energy, a novel co-design approach for distributed self-triggered control over wireless multi-hop networks was proposed<cit.>. In <cit.>, an integrated scheduling method of sensing, communication, and control for backhaul transmission was proposed for unmanned aerial vehicle (UAV) networks. The aforementioned work on communication and control co-design focused on either the control design over communication<cit.> or network resource allocation tailored for control performance<cit.>. However, the goal of the WNCS and its interplay with communication and control actions were not analyzed. Moreover, the semantic attributes of transmitted information were not considered<cit.>.

§.§ Goal-Oriented Co-Design

Due to the limited resources and unclear interplay among multiple subsystems in a WNCS, it is still challenging to schedule sensing and control packets efficiently based on the significance of goal-related information<cit.>. Our previous work investigated the communication and control co-design for the WNCS with a single imperfect link and homogeneous action cost<cit.>. We have also introduced the age of loop to quantify the timeliness from the sensor to the actuator<cit.>. Similarly, a full-loop AoI-based control and transmission co-design architecture was proposed for industrial cyber-physical systems<cit.>. In <cit.>, a WNCS with two-way imperfect communications was considered, and an AoI-aware co-design of scheduling and control was investigated. In <cit.>, a robust distributed MPC scheme based on AoI was established. In <cit.>, AoI was adopted for the event-triggered communication scheduler to manage the information updating process. In <cit.>, the data interarrival rate and code length were jointly designed for AoI-oriented IoT systems. Considering the goal of the WNCS, an uplink-downlink transmission scheduling problem was addressed to minimize the mean squared error (MSE) of the plant state in <cit.>. In <cit.>, an AoI-aware communication and control co-design scheme was proposed to improve the control performance. The existing AoI-based and MSE-based scheduling policies have illuminated the path for goal-oriented design, but only a single attribute of the information was considered, which may lead to inefficient transmission. In <cit.> and <cit.>, goal/task-oriented task offloading policies were proposed to attain a better tradeoff between computation latency and energy consumption. In <cit.>, a quality-of-service-oriented sensing-communication-control co-design scheme was proposed for UAV positioning systems. In <cit.>, the features of tasks were considered, based on which a task-oriented prediction and communication co-design framework was proposed. Although recent advances in wireless sensing and control have facilitated the co-design of multiple subsystems, the coupling of sensing, communication, and control has not been explored in depth in the WNCS. Moreover, the interplay between the goal and observations/control commands has not been revealed clearly. Therefore, it is worth conducting further research on goal-oriented co-design in WNCSs.

§ SYSTEM MODEL

In this section, we introduce the considered WNCS, including the heterogeneous physical process model, wireless transmission, and control models.

§.§ System Description

We consider a WNCS with two-way communications and heterogeneous physical processes to be monitored, as shown in Fig. <ref>. The considered WNCS consists of two different sensors, a scheduler, a controller, and an actuator.
This can be employed in emerging mission-critical applications such as load frequency control (LFC) in the smart grid<cit.>. The specific information flow and assumptions of the considered WNCS are presented below. * We consider a time-slotted WNCS, where sensors obtain observations, and the controller generates control commands at the beginning of each time slot<cit.>. * The dynamic physical process (e.g., plant state) and the random process (e.g., environment change) are monitored by sensors S_a and S_b, respectively. * The scheduler selects one sensor and transmits its update to the controller through the wireless uplink channel. Non-orthogonal transmission is not considered in this paper. Due to the limited resources, we assume that only one sensor can be scheduled in each time slot<cit.>. * The controller is assumed to be event-triggered and sends the control command to the actuator<cit.> when the plant state violates the preset threshold<cit.>. * For the uplink (scheduler-controller link), we assume that the sensing packet is transmitted with one slot delay and a probability of transmission error, without retransmission<cit.>. For the downlink (controller-actuator link), the control command with limited data size is assumed to be transmitted instantly with a transmission error probability, following <cit.>. * The receiver sends an acknowledgment (ACK)/negative ACK (NACK) for successful/failed transmissions instantly without error.

§.§ Dynamic Process Model

We consider heterogeneous traffic in the WNCS, including the dynamic physical process monitored by sensor S_a and the random Markov process monitored by sensor S_b. These two sources affect the goal of the WNCS in different ways, leading to their different importance and significance.

§.§.§ Linear-Time-Invariant (LTI) System

We consider the dynamic physical process (e.g., frequency deviation in an LFC system) modeled by the discretized LTI system<cit.>

𝐱_k+1 = 𝐀𝐱_k + 𝐁𝐮_k + 𝐰_k, y_k = 𝐂𝐱_k,

where the subscript k represents the time index for plant state evolution. 𝐀∈ℝ^m×m and 𝐁∈ℝ^m×n represent the system matrix and input matrix, respectively, and 𝐂∈ℝ^1×m is the measurement matrix. Besides, 𝐱_k∈ℝ^m×1 is the plant state, 𝐮_k∈ℝ^n×1 denotes the control input, and y_k is the measurement. 𝐰_k∈ℝ^m×1 is the normally distributed noise with distribution 𝒩(0,𝐑_𝐰), where 𝐑_𝐰 is the noise variance. To facilitate presentation, we ignore the measurement equation and focus on the state equation in the analysis; the state is assumed to be fully measured. Generally, 𝐱 represents the difference between the true value and the preset value of the considered process. Hence, the goal of the system is to maintain the state 𝐱 close to 0. The sensor used to monitor the plant state is denoted as sensor S_a.

§.§.§ Discrete-Time Markov Chain (DTMC)

The physical dynamics in real systems may suffer from abrupt variations (e.g., environment changes). The state of the environment provides important context information, which may affect the sensitivity and tolerance of the equipment to the plant state. This random process can be modeled as a DTMC and is assumed to be ergodic<cit.>. The state of the environment is denoted by v and takes values in {0, 1}, as shown in Fig. <ref>. The transition matrix is described by P_i,j = Pr{v_k+1 = j | v_k = i}, where v_k is the environment condition at time slot k. We have P_0,0 = P_1,1 = p̅ and P_1,0 = P_0,1 = p = 1-p̅, where p̅ and p denote the self-transition and transition probabilities, respectively. The sensor used to monitor context information is denoted as sensor S_b.
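For concreteness, a minimal open-loop simulation of the two processes above might look as follows; the scalar values chosen for 𝐀, 𝐁, 𝐑_𝐰, and p are illustrative assumptions, not parameters from the later case study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative scalar instance of the models in Sec. II-B; A, B, Rw, p are assumptions.
A, B, Rw = 1.2, 1.0, 0.1         # unstable plant: rho(A) = 1.2 > 1
p = 0.1                          # DTMC transition probability (p_bar = 1 - p)

K = 50
x = np.zeros(K + 1)              # plant state x_k (difference from the preset value)
v = np.zeros(K + 1, dtype=int)   # context state v_k of the two-state Markov chain

for k in range(K):
    u = 0.0                      # open loop here; the control input is designed in Sec. III
    w = rng.normal(0.0, np.sqrt(Rw))
    x[k + 1] = A * x[k] + B * u + w                      # LTI state update
    v[k + 1] = v[k] if rng.random() > p else 1 - v[k]    # DTMC step with P_{0,1}=P_{1,0}=p
```

Without control, |x_k| drifts away from 0 because ρ(𝐀) > 1, which is exactly why timely updates and control actions matter for the goal of the system.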
§.§ Wireless Transmission and Control Model

We consider the wireless erasure channel in WNCSs and assume that the channel state remains unchanged within one time slot and varies independently among different slots. The packet error probability of wireless transmission is denoted as ϵ∈(0,1). The sensors generate status updates 𝐱_k and v_k by sampling the source at each time slot, where the sampling process is assumed to incur no latency or cost<cit.>. To alleviate the effect of limited communication bandwidth, the controller is assumed to be event-triggered and generates the control command 𝐮_k at the triggered time slot<cit.>. Also, we assume that the system is controllable but unstable without control, i.e., ρ(𝐀)>1, where ρ(𝐀) is the spectral radius of the system matrix 𝐀<cit.>. The decision to sample and transmit for the sensor and controller at time slot k is denoted by δ_o,k∈{1,0}, where o∈{a,b,c} represents sensors S_a, S_b, and the controller, respectively. Also, let ξ_o,k∈{1,0} represent the outcome of the transmission. Let ξ_o,k=1 indicate a successful transmission with probability Pr[ξ_o,k=1|δ_o,k=1] = 1-ϵ_o.

§ PROBLEM FORMULATION AND ANALYSIS

This section presents the evolution of the plant state with respect to time slots and the actions of the scheduler and controller. Then the goal of the considered system, i.e., the violation probability, is analyzed, based on which a joint optimization problem is formulated.

§.§ State Estimation by the Controller

Recall that each uplink transmission process takes one time slot to be performed. Hence, the controller receives an update from the sensor with one slot delay. To generate effective control commands, the controller must maintain accurate estimates of the plant state and context information (e.g., environment/condition changes) at each time slot.

§.§.§ LTI System

Sensor S_a monitors the LTI-based plant state to obtain the goal-related value at time slot k. The plant-state estimator at the controller is given as

𝐱̂_k+1 = 𝐀𝐱_k + 𝐁𝐮_k, if δ_a,kξ_a,k = 1; 𝐀𝐱̂_k + 𝐁𝐮_k, otherwise.

Based on (<ref>), the estimation error of the plant state, 𝐞_k+1 = 𝐱_k+1 - 𝐱̂_k+1, can be expressed as

𝐞_k+1 = 𝐰_k, if δ_a,kξ_a,k = 1; 𝐀𝐞_k + 𝐰_k, otherwise.

Then we denote the AoI of sensor S_a at time slot k by Δ_k, reflecting the information freshness of sensor S_a's packets at the controller. The updating rule of the AoI is as follows:

Δ_k+1 = 1, if δ_a,kξ_a,k = 1; Δ_k + 1, otherwise,

based on which the estimation error is further expressed as 𝐞_k+1 = ∑_i=1^Δ_k+1 𝐀^i-1𝐰_k+1-i. Then the covariance of the estimation error is given by Θ_k+1 = ∑_i=1^Δ_k+1 𝐀^i-1𝐑_𝐰(𝐀^⊤)^i-1.

§.§.§ Markov Process

Sensor S_b monitors the Markov process-based environment to obtain the context information. The context-information estimator at the controller is given as

v̂_k+1 = v_k, if δ_b,kξ_b,k = 1; v̂_k, otherwise.

We denote Υ_k+1 as the estimation quality of the context information at the controller, i.e., the pair (v_k+1, v̂_k+1). Specifically, let Υ_k+1∈{1,2,3,4} represent the cases (v_k+1=0, v̂_k+1=0), (v_k+1=0, v̂_k+1=1), (v_k+1=1, v̂_k+1=0), and (v_k+1=1, v̂_k+1=1), respectively. Then the controller feeds the estimation results back to the scheduler and computes the control command based on the received and estimated results.

§.§ Control Design and Plant State Analysis

The event-triggered controller starts to control when the plant state violates the preset value under a given condition.
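As an aside on the estimator above, the AoI-indexed error covariance Θ admits the one-step recursion Θ(1) = 𝐑_𝐰 and Θ(Δ+1) = 𝐀Θ(Δ)𝐀^⊤ + 𝐑_𝐰, which is equivalent to the summation form; the sketch below uses assumed values for 𝐀 and 𝐑_𝐰, and the same recursion underlies the plant-state covariance used in the analysis that follows.

```python
import numpy as np

def error_covariance(A, Rw, delta_max):
    """Theta(Delta) = sum_{i=1..Delta} A^{i-1} Rw (A^T)^{i-1}, built recursively
    for Delta = 1, ..., delta_max. The covariance grows monotonically with the AoI."""
    Theta = [np.array(Rw, dtype=float)]            # Delta = 1 gives Theta = R_w
    for _ in range(delta_max - 1):
        Theta.append(A @ Theta[-1] @ A.T + Rw)     # one-step recursion
    return Theta

A = np.array([[1.2]])      # illustrative unstable plant, rho(A) > 1
Rw = np.array([[0.1]])     # illustrative noise covariance
Thetas = error_covariance(A, Rw, 10)               # Theta(1), ..., Theta(10)
```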
As mentioned in Subsection II-B, we assume that the equipment may have different tolerances to the same plant state under different contexts (e.g., ζ_1 for the case of v=1 and ζ_0 for the case of v=0)<cit.>. Then, the control input is given by

𝐮_k+1 = 𝐆𝐱̂_k+1, if δ_c,k+1 = 1; 0, otherwise,

where 𝐆 = -𝐀/𝐁 represents the control gain<cit.>, and δ_c,k+1 is the indicator of reactive control at time slot k+1, given by

δ_c,k+1 = 𝕀{max{0, |𝐂𝐱̂_k+1| - ζ_1} > 0}, if v̂_k+1 = 1; 𝕀{max{0, |𝐂𝐱̂_k+1| - ζ_0} > 0}, if v̂_k+1 = 0,

where 𝕀(·) is the indicator function. Based on the LTI model in (<ref>) and the control input in (<ref>), the plant state is given by

𝐱_k+2 = 𝐀(𝐱_k+1 - 𝐱̂_k+1) + 𝐰_k+1, if δ_c,k+1ξ_c,k+1 = 1; 𝐀𝐱_k+1 + 𝐰_k+1, otherwise,

where ξ_c,k+1 is the indicator of successful transmission on the controller-actuator link at time slot k+1. Thus, the plant-state covariance, Φ_k = 𝔼[𝐱_k𝐱_k^⊤], has the following updating rule:

Φ_k+2 = 𝐀Θ_k+1𝐀^⊤ + 𝐑_𝐰, if δ_c,k+1ξ_c,k+1 = 1; 𝐀Φ_k+1𝐀^⊤ + 𝐑_𝐰, otherwise.

With successful downlink transmission, the covariance of the plant state is given by 𝔼[𝐱_k+2𝐱^⊤_k+2] = ∑_i=1^Δ_k+1 𝐀^i𝐑_𝐰(𝐀^⊤)^i + 𝐑_𝐰, which can be expressed as a non-decreasing function of the sensor's AoI Δ_k.

§.§ Goal of the WNCS

We aim to minimize the impact of extreme plant states on the system performance and ensure the stability of the WNCS. Particularly, we consider the impacts of the goal-related state and context information on the system performance simultaneously. Then we formulate an objective function to capture violation events and reflect the significance of the goal-related state at the point of actuation under different contexts, given by

J_k+2 = 𝕀{max{0, |𝐂𝐱_k+2| - ζ_1} > 0}, if v_k+2 = 1; 𝕀{max{0, |𝐂𝐱_k+2| - ζ_0} > 0}, if v_k+2 = 0.

This implies different significance of the state information and context information in relation to the goal. According to (<ref>) and (<ref>), it can be seen that frequent updating of sensor S_a and triggering of control actions contribute to the reduction of the goal-related state, consequently reducing the associated violation probability. Moreover, accurate estimation of the context information helps to avoid the occurrence of violations and excessive controls. Therefore, to minimize the average violation probability with the given cost, we formulate the following problem to jointly schedule multiple heterogeneous packets and design control inputs.

𝒫 1: min_Π,𝐔 J = lim_K→∞ (1/K)∑_k=1^K 𝔼{J_k} s.t. δ_a,k + δ_b,k ≤ 1, k∈[1,K], lim_K→∞ (1/K)∑_k=1^K 𝔼{c_k} ≤ c_max,

where Π=[π_1,π_2,⋯,π_K] denotes the scheduling policy with π_k=(δ_a,k, δ_b,k) at time slot k, and 𝐔=[𝐮_1,𝐮_2,⋯,𝐮_K] denotes the control input at each time slot. c_k = c_aδ_a,k + c_bδ_b,k + c_cδ_c,k is the total cost, where c_a, c_b, and c_c are the action costs for sensors S_a, S_b, and the controller, respectively. Recall that δ_a,k and δ_b,k represent the indicators of updating actions for sensors S_a and S_b, respectively. Also, δ_c,k=1 indicates that the controller is triggered at time slot k. (14b) shows that at most one updating action is allowed in each time slot. (14c) limits the maximum allowed total cost c_max, including both the updating and control costs.

§ GOAL-ORIENTED DETERMINISTIC SCHEDULING POLICY

In this section, we formulate 𝒫1 as a CMDP and reformulate it as an MDP by using the Lagrangian relaxation method. Then we propose a deterministic scheduling policy based on samples and the feedback estimation results at the scheduler.

§.§ CMDP Formulation

Recall that π denotes the scheduling policy that determines the action taken at each state. A stationary randomized policy is a mapping from each state to a distribution over actions.
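For later reference, the event-triggered control step above, which generates the violation events that the CMDP will penalize, can be sketched in a few lines; this is a scalar illustration under the stated model, with all numerical values left to the caller.

```python
def control_step(x_hat_next, v_hat_next, A, B, zeta0, zeta1, C=1.0):
    """Event-triggered control: trigger when the estimated output exceeds the
    context-dependent threshold, then apply the gain G = -A/B. Scalar sketch."""
    zeta = zeta1 if v_hat_next == 1 else zeta0   # tolerance depends on the context estimate
    triggered = abs(C * x_hat_next) > zeta       # indicator delta_{c,k+1}
    u = (-(A / B) * x_hat_next) if triggered else 0.0
    return u, triggered
```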
Let J̅^π = lim_K→∞ (1/K)∑_k=1^K 𝔼_π{J_k} denote the average violation probability with the scheduling policy π. Let c̅^π = lim_K→∞ (1/K)∑_k=1^K 𝔼_π{c̃_k} denote the average scheduling cost, where c̃_k = c_aδ_a,k + c_bδ_b,k. The cost is simplified here because this section focuses on the scheduling policy, while the control cost depends on the control policy. By integrating the constraint (14b) into the scheduling policy π, problem (<ref>) is equivalently cast as the CMDP

𝒫 2: min_π J̅^π s.t. c̅^π ≤ c_max.

A typical CMDP consists of the state space 𝒮, action space 𝒜, state-transition matrix 𝒯, and reward ℛ, defined as * State: The state has two components and the state space is defined as 𝒮={Δ, Υ}. Δ is the set of AoI values for sensor S_a, which reflects the value of the goal-related state and is defined as the time elapsed since the last successfully received sensor packet at the receiver. The plant state of the WNCS is a non-decreasing function of sensor S_a's AoI, as analyzed in Subsection III-B. Υ={1,2,3,4}, monitored by sensor S_b, is the set of estimation-quality indicators of the context, which also affects the value of the WNCS's goal. The state of the CMDP at time k is s_k=(Δ_k,Υ_k)∈𝒮. * Action: The action space of the CMDP is defined as 𝒜={1,2,3}. With the deterministic scheduling policy π, the action at time k, i.e., α_k=π(s_k)∈𝒜, indicates sensor S_a's transmission (α_k=1), sensor S_b's transmission (α_k=2), or being idle (α_k=3). * Transition matrix: The state-transition probability P(s'|s,α) is the probability that the state s at time slot k-1 transits to s' at time k with action α at time k-1. Omitting the subscript k, let s=(Δ,Υ=1) and s' denote the current and next state, respectively. The state-transition probability can be obtained as follows:

P(s'|s,α) =
ϵ_a p̅, if α=1, s'=(Δ+1, Υ=1);
ϵ_a p, if α=1, s'=(Δ+1, Υ=3);
(1-ϵ_a) p̅, if α=1, s'=(1, Υ=1);
(1-ϵ_a) p, if α=1, s'=(1, Υ=3);
p̅, if α=2, s'=(Δ+1, Υ=1);
p, if α=2, s'=(Δ+1, Υ=3);
p̅, if α=3, s'=(Δ+1, Υ=1);
p, if α=3, s'=(Δ+1, Υ=3);
0, otherwise.

Similarly, the state-transition probabilities for the other initial cases, i.e., s=(Δ,Υ=2,3,4), can also be derived, as shown in Appendix A. * Reward function: ℛ_k is the instantaneous reward when action α_k is taken in state s_k, which maps a state-action pair to a real number. The CMDP has two cost functions: 1) the scheduling cost, defined as c(α_k)=c̃_k, i.e., c(α_k)=c_a or c(α_k)=c_b if the scheduler makes a transmission attempt at slot k, and c(α_k)=0 otherwise; and 2) the violation cost, defined as χ(s_k)=J_k, i.e., χ(s_k)=1 if the current state violates the threshold at slot k, and χ(s_k)=0 otherwise.

§.§ Problem Reformulation and Analysis

Since the problem in (15) is a CMDP, which is, in general, difficult to solve<cit.>, we utilize the Lagrangian relaxation method to transform it into an unconstrained MDP<cit.>. Then the analysis of the state space and reward function for the formulated MDP is provided.

§.§.§ MDP formulation

The Lagrangian relaxation method is adopted to transform the CMDP into an MDP<cit.>. The Lagrangian function is defined as

ℒ(π,λ) = lim_K→∞ (1/K)∑_k=1^K 𝔼_π{J_k + λc̃_k} - λc_max,

where λ is the Lagrangian multiplier and the immediate cost is ℛ_k = J_k + λc̃_k. Then 𝒫 2 can be transformed as min_π∈Π ℒ(π,λ), for any given λ≥0. Since λc_max is independent of the chosen policy π, 𝒫 2 is equivalent to solving the following optimization problem:

𝒫 3: min_π∈Π h(λ,π) = min_π∈Π lim sup_K→∞ (1/K)∑_k=1^K 𝔼_π{J_k + λc̃_k}.

Let π_λ^* denote an optimal policy that solves problem (18) for a given λ, which is called a λ-optimal policy.
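To make the transition kernel concrete, the sketch below assembles P(s'|s,α) for all context cases along the lines of the transition probabilities above and their Appendix-A generalization, with the pair (v, v̂) standing in for Υ and the AoI truncated at a finite Δ_thr (justified in the next subsection); ϵ_a, ϵ_b, and p are left as inputs.

```python
import itertools
import numpy as np

def transition_kernel(delta_max, eps_a, eps_b, p):
    """P[s, a, s'] for states s = (Delta, v, v_hat) and actions
    a in {0: schedule S_a, 1: schedule S_b, 2: idle}. A sketch of the model:
    AoI resets only on a successful S_a update; the context estimate v_hat is
    refreshed only on a successful S_b update; v evolves as the DTMC."""
    states = list(itertools.product(range(1, delta_max + 1), (0, 1), (0, 1)))
    idx = {s: i for i, s in enumerate(states)}
    P = np.zeros((len(states), 3, len(states)))
    for (d, v, vh) in states:
        for a in range(3):
            ok = (1 - eps_a) if a == 0 else (1 - eps_b) if a == 1 else 0.0
            for succ, q1 in ((True, ok), (False, 1 - ok)):
                if q1 == 0.0:
                    continue
                d2 = 1 if (a == 0 and succ) else min(d + 1, delta_max)  # AoI update
                vh2 = v if (a == 1 and succ) else vh                    # context estimate
                for v2, q2 in ((v, 1 - p), (1 - v, p)):                 # DTMC step
                    P[idx[(d, v, vh)], a, idx[(d2, v2, vh2)]] += q1 * q2
    return states, P
```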
Then we provide the problem analysis for finding the optimal scheduling policy in the following subsections.

§.§.§ Truncated State Space

Since the formulated MDP has an infinite number of states Δ, we approximate it by a truncated MDP with finite states. Based on (12) and (13), we can conclude that J_k is a bounded increasing function of the state Δ. It implies that when the state Δ is larger than a certain threshold, J_k converges to 1 and remains constant. With the state threshold Δ_thr, the set of AoI values is denoted by Δ={1,2,⋯,Δ_thr}. Then we have J_k=1 if Δ_k≥Δ_thr, and J_k=0 if Δ_k<Δ_thr, as provided in Proposition 1. Proposition 1: With the given LTI system, the state space of the MDP can be truncated, and the state threshold is derived as

Δ_thr = max_v∈{0,1} min{Δ∈ℕ^+ | 𝐂√(Φ(Δ)) > ζ_v }.

Proof: Based on (12), the expectation of the plant state can be calculated as 𝔼[|𝐱_k|] = √(Φ_k), and the plant-state covariance is a non-decreasing function of the AoI. Hence, finding the minimal value of Δ that satisfies 𝐂|𝐱_k| > ζ_v is equivalent to solving the equation Δ_v,thr = min{Δ∈ℕ^+ | 𝐂√(Φ(Δ)) > ζ_v }. By comparing Δ_0,thr and Δ_1,thr, the state threshold Δ_thr = max{Δ_0,thr, Δ_1,thr} can be derived. ▪

§.§.§ Reward Function Mapping

Now we turn to calculating the reward mapping function in relation to the state and action. As shown in Subsection IV-A, the instantaneous reward function consists of the scheduling cost and the violation cost. The scheduling cost can be obtained directly based on the actions, i.e., c(α_k)=c_a or c(α_k)=c_b if the scheduler makes a transmission attempt at slot k. The violation cost can be calculated based on (<ref>) and (<ref>), and then we have Definition 1 to calculate the reward function. Definition 1: The mapping function between the violation cost and the state, χ(Δ_k,Υ_k)=J_k, reflects whether the current state results in a violation. Writing the values χ(Δ,Υ) as a Δ_thr×4 matrix whose rows are indexed by Δ=1,…,Δ_thr and whose columns are indexed by Υ=1,2,3,4, it can be expressed as

χ(Δ,Υ) = [ 0 0 0 0; ⋮ ⋮ 1 0; 0 0 1 1; ⋮ ⋮ ⋮ ⋮; 1 1 1 1 ]. (19)

Proof: When the current state satisfies Δ_k<Δ_0,thr, χ(Δ_k,Υ_k)=0 holds regardless of the transmission and is independent of Υ_k. In the case of Δ_k≥Δ_1,thr, χ(Δ_k,Υ_k)=1 for all contexts. When Δ_0,thr<Δ_k<Δ_1,thr, χ(Δ_k,Υ_k)=1 for the cases Υ_k=3 and Υ_k=4. Also, due to the missing control command, χ(Δ_k,Υ_k)=1 holds for the case of Δ_k=Δ_0,thr-1 with Υ_k=3. ▪ Hence, the reward function can be designed by mapping state-action pairs to real numbers with the given states and actions. The violation cost χ(Δ,Υ) can be reshaped as a vector 𝐑_s of size 4Δ_thr×1. Considering the scheduling cost c̃_k and the Lagrangian multiplier λ, the reward function is expressed as 𝐑 = [-𝐑_s - λc_a, -𝐑_s - λc_b, -𝐑_s].

§.§ Deterministic Scheduling Policy

This subsection proposes the deterministic scheduling policy based on the above analysis. By ensuring max{ϵ_a,ϵ_b,ϵ_c} < 1/ρ^2(𝐀), there exists a stationary and deterministic optimal scheduling policy that can stabilize the controllable plant<cit.>. Then the challenging problem 𝒫 3 can be solved by using the RVIA and the bisection method iteratively <cit.>.

§.§.§ RVIA for λ-Optimal Policy

To obtain an optimal policy π_λ^* for a given λ, we solve the MDP via the RVIA. There exists a relative value function V(s), s∈𝒮, that satisfies

L̅^*(λ) + V(s) = min_α∈𝒜_s[L(s,α,λ) + ∑_s'∈𝒮 Pr(s'|s,α)V(s')],

where L̅^*(λ) is the optimal value of the MDP problem for a given λ, defined as L̅^*(λ) = min_π∈Π L̅(π,λ).
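A compact sketch of the relative value iteration just described — written for the negated-cost reward 𝐑 defined above, so that maximization replaces minimization — is given below; the arrays P and R are assumed to come from the kernel and reward constructions above, and the reference state is taken as index 0.

```python
import numpy as np

def rvia(P, R, tol=1e-8, max_iter=100_000):
    """Relative value iteration for the average-reward MDP.
    P: |S| x |A| x |S| transition kernel; R: |S| x |A| reward = -(violation + lambda*cost),
    so maximizing reward is equivalent to minimizing the Lagrangian cost."""
    V = np.zeros(P.shape[0])
    for _ in range(max_iter):
        Q = R + np.einsum('sat,t->sa', P, V)   # one-step lookahead over all actions
        z = Q.max(axis=1)                      # value function of the current iteration
        V_new = z - z[0]                       # subtract the fixed reference state s_ref
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    return z[0], V, Q.argmax(axis=1)           # average reward, relative values, policy
```

Wrapping this routine in the bisection search described next would then adjust λ — and hence R — until the cost constraint is met.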
Subsequently, the λ-optimal policy is obtained as π_λ^*(s) = arg min_α∈𝒜_s[L(s,α,λ) + ∑_s'∈𝒮 Pr(s'|s,α)V(s')], where the relative value function is updated as V^i(s) = z^i(s) - z^i(s^ref). Note that s^ref is an arbitrary reference state which remains unchanged throughout the iterations, and we have L̅(λ) = z(s^ref). The term z^i(s), called the value function, is obtained at each iteration as z^i(s) = min_a∈𝒜_s[L(s,a,λ) + ∑_s'∈𝒮 Pr(s'|s,a)V(s')]. For a given λ-optimal policy (π^*_λ), the objective function of the CMDP problem, J̅^π^*_λ, and the objective function of the MDP problem, L̅^*(λ), are increasing with respect to λ. The constraint of the CMDP problem, c̅^π^*_λ, is decreasing with respect to λ.

§.§.§ Bisection Search for λ

To determine λ̃, the bisection algorithm is adopted by utilizing the monotonicity of c̅^π^*_λ with respect to λ. We initialize the bisection algorithm with λ_u and λ_l in a manner that ensures c̅^π^*_λ_u ≤ c_max and c̅^π^*_λ_l ≥ c_max. Upon the termination of the bisection algorithm, i.e., when the gap between λ_u and λ_l is smaller than the threshold κ, we set λ̃=λ_u and obtain the best feasible λ-optimal policy as π^*_λ̃=π^*_λ_u. Additionally, the algorithm returns the infeasible policy associated with λ_l, which serves as a lower bound on the optimal solution. Subsequently, at each iteration of the process, a λ-optimal policy is developed for a given λ via the RVIA. Following this, λ is updated according to the bisection rule. The iterative procedure continues until the best λ-optimal policy among the feasible policies is found, as summarized in Algorithm 1.

§.§ Complexity Analysis

The computational complexity of the proposed algorithm depends on the number of iterations and the complexity of each iteration, 𝒪(|𝒜||𝒮|^2). The state space size |𝒮| is approximately Δ_thrΥ_thr, and the action space size |𝒜| is 3. Accordingly, the complexity of the deterministic policy is 𝒪(3I_1I_2Δ_thr^2Υ_thr^2), where I_1 and I_2 are, respectively, the numbers of iterations required in the bisection and the RVIA.

§ GOAL-ORIENTED ESTIMATION, COMMUNICATION AND CONTROL CO-DESIGN

In this section, we investigate the goal-oriented co-design of estimation, communication, and control based on the scheduling policy proposed in the last section. Specifically, we propose control-aware estimation, sensing-assisted control, and state-dependent retransmission strategies to attain a lower violation probability with the same cost.

§.§ Goal-Oriented Scheduling and Control

Based on the deterministic scheduling policy proposed in Section IV-C, as well as the control input in Subsection III-B, we propose the goal-oriented scheduling and control (GSC) co-design strategy. The scheduler and controller are designed in sequence, thanks to their independence<cit.>. To further reduce the violation probability and cost, more advanced communication and control policies are introduced in the following subsections. The flow chart of the improved GSC scheme is shown in Fig. <ref>.

§.§ Control-Aware Sensing/Estimation

This subsection revises the state estimation process based on the control command. In Subsection III-A, the plant state is estimated based on the assumption that the controller is triggered all the time, i.e., 𝐮 = -𝐀/𝐁 𝐱̂, as shown in (<ref>) and (<ref>). Hence, the estimated plant state without control information can be expressed as

𝐱̂_k+1 = 𝐀𝐞_k, if δ_a,kξ_a,k = 1; 0, otherwise.

Control-aware estimation (CAE): Because the controller is event-triggered, the control command is zero when the estimated plant state is smaller than the state threshold.
Hence, we propose the CAE scheme based on the control information, as shown in Proposition 2. Proposition 2: By utilizing the information on whether the controller is triggered or not, the CAE scheme is able to improve the accuracy of the estimated plant state, given by
𝐱̂_k+1 = 𝐀𝐞_k, if δ_a,kξ_a,k=1 and δ_c,k+1=1; 𝐀𝐱_k, if δ_a,kξ_a,k=1 and δ_c,k+1=0; 0, if δ_a,kξ_a,k=0 and δ_c,k+1=1; 𝐀𝐱̂_k, if δ_a,kξ_a,k=0 and δ_c,k+1=0.
Proof: With the control input 𝐮=-𝐀/𝐁𝐱̂, the estimated plant state at slot k+1 based on the successful/unsuccessful uplink transmission of sensor S_a at slot k is given by 𝐱̂_k+1=𝐀𝐞_k and 𝐱̂_k+1=0, respectively. Moreover, the plant state is respectively estimated as 𝐱̂_k+1=𝐀𝐱_k and 𝐱̂_k+1=𝐀𝐱̂_k without control input. ▪ Remark 1: By comparing (<ref>) and (<ref>), it is shown that the CAE-based plant-state estimate is more accurate because it accounts for the cases with no control input, based on which a more reliable control input can be updated via (<ref>). §.§ Sensing-Assisted Control The reactive control introduced in Subsection III-B has an inevitable drawback: the controller only reacts when the state has already violated the threshold, leading to a high violation probability. To address this problem, we propose two advanced control strategies in this subsection to further improve the performance. Proactive control (PC): The PC strategy is proposed to limit the plant state proactively, by predicting the future plant state at the next time slot without control input. In this case, the controller can transmit the control command in advance, before the plant state violates the threshold. The control rule is given as
δ_c,k+1 = 𝕀{max{0,|C𝐱̂_k+2|-ζ_1}>0}, if v̂_k+2=1; 𝕀{max{0,|C𝐱̂_k+2|-ζ_0}>0}, if v̂_k+2=0,
where 𝐱̂_k+2 = 𝐀𝐱̂_k+1 is the future plant state without control. Since ρ(𝐀)>1, the control rule proposed in (24) implies that the PC strategy is able to reduce the violation probability by inducing more control actions. However, the performance of the PC strategy depends on the accuracy of the estimated plant state and contextual information at the next time slot, which may introduce more errors. Conservative reactive control (CRC): The proposed CRC strategy uses a conservative ratio to lower the threshold and thereby avoid violations. In this case, the plant state triggers the control action before violating the preset threshold. The triggering condition for the controller in (9) can be revised as
δ_c,k+1 = 𝕀{max{0,|C𝐱̂_k+1|-θζ_1}>0}, if v̂_k+1=1; 𝕀{max{0,|C𝐱̂_k+1|-θζ_0}>0}, if v̂_k+1=0,
where 0<θ<1 is the conservative ratio. The CRC strategy thus depends on the selection of the conservative ratio: a smaller θ results in a lower violation probability, but may lead to overcontrol and induce a higher control cost. §.§ State-Dependent Downlink Retransmission In this subsection, we propose the state-dependent retransmission strategy to ensure reliable control, which may lead to a low violation probability for WNCSs. As defined in Subsection II-C, the probability of an unsuccessful downlink transmission is given by Pr[ξ_c,k=0|δ_c,k=1]=ϵ_c, where δ_c,k and ξ_c,k respectively represent the indicators of triggered control and successful downlink transmission at time slot k. Truncated automatic repeat request (TARQ): Recall that the control input at the actuator is zero when a downlink transmission error occurs. This leads to unstable control and a high violation probability when the error probability is high.
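For illustration, the estimation and triggering rules above (CAE, PC, and CRC) admit a compact sketch; the variable names (`uplink_ok` for δ_a,kξ_a,k, `ctrl` for δ_c,k+1) and the default value of θ are ours:

```python
import numpy as np

# Sketch of the CAE update (Proposition 2) and the PC/CRC triggering
# rules.  A, C, the thresholds in `zetas`, and theta come from the
# system model; v_hat in {0, 1} is the estimated context.

def cae_update(A, x, x_hat, e, uplink_ok, ctrl):
    """Control-aware estimate of x_{k+1}: the four cases of Proposition 2."""
    if uplink_ok:
        return A @ e if ctrl else A @ x
    return np.zeros_like(x_hat) if ctrl else A @ x_hat

def pc_trigger(A, C, x_hat_next, zetas, v_hat):
    """Proactive control: trigger on the predicted state A x_hat_{k+1}."""
    x_pred = A @ x_hat_next                     # \hat{x}_{k+2} without control
    return float(abs(C @ x_pred)) > zetas[v_hat]

def crc_trigger(C, x_hat_next, zetas, v_hat, theta=0.8):
    """Conservative reactive control with conservative ratio 0 < theta < 1."""
    return float(abs(C @ x_hat_next)) > theta * zetas[v_hat]
```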
To mitigate this, we adopt the TARQ transmission scheme with a limited number n_max of maximum allowable retransmissions, which improves the downlink transmission reliability by retransmitting the same packets, as shown in Fig. 4. The control input with n-1, n∈{1,⋯,n_max}, retransmissions is calculated as 𝐮_k+n=-𝐀/𝐁𝐱̂_k+1. Then the plant state is given by
𝐱^ARQ_k+1+n = 𝐀(𝐱_k+n-𝐱̂_k+1)+𝐰_k+n, if ω_c,k+n=1; 𝐀𝐱_k+n+𝐰_k+n, otherwise,
where ω_c,k+n=δ_c,k+nξ_c,k+n is the indicator of a successful downlink transmission. To further reduce the cost, we assume that the scheduler stops transmitting sensor states during downlink retransmission. However, retransmitting historical control commands fails to capture plant-state changes and leads to inappropriate control operations. Hence, retransmission is not always beneficial in reducing the violation probability. Proposition 3: With n-1 retransmissions, the difference between the plant states obtained by the TARQ and non-retransmission schemes is given by
𝔼[𝐃]= 𝔼[Φ_k+1+n]^ARQ-𝔼[Φ_k+1+n]^Non =(ϵ-ϵ^n)(𝐀Θ_k+n𝐀^⊤-𝐀Φ_k+n𝐀^⊤)+(1-ϵ^n)(𝐀^n+1-𝐀^2)Φ_k(𝐀^n+1-𝐀^2)^⊤,
which is decreasing with respect to the error probability for a quasi-static plant but increasing for a dynamic plant. Proof: Please refer to Appendix B. ▪ Proposition 3 implies that the optimal transmission policy depends on the error probability and the system matrix. § CASE STUDY AND SIMULATION RESULTS In this section, we consider the load frequency control (LFC) system in the smart grid as a case study, and we compare the proposed strategies with existing work in terms of violation probability and cost. §.§ Parameter Settings The goal of the considered LFC system is to maintain the balance between power load and generation at minimal cost <cit.>. However, cyber-attacks can affect the frequency regulation, leading to power-system instability. Sensors S_a and S_b are employed to monitor the frequency deviation and to detect any cyberattack, respectively <cit.>. In this paper, we focus on data transmission and assume that sensor S_b has the ability to detect cyberattacks <cit.>. Based on the received packets that convey the frequency deviation and context information, the controller guides the generator to minimize the frequency deviation. The grid may be unstable if the frequency deviation violates a given threshold, which depends on the context information, i.e., whether there is a cyberattack. To verify the effectiveness of the proposed strategies, the following methods are compared as benchmarks. * Round-robin (RR) scheduling <cit.>: At each time slot, the scheduler periodically switches between sensor S_a and sensor S_b for updating. * Randomized selection (RS) scheduling <cit.>: At each time slot, the scheduler randomly selects sensor S_a or sensor S_b for updating. * AoI-aware scheduling <cit.>: The scheduler selects one of the sensors when the sensor's AoI violates a preset value, while ignoring contextual information. * AoII-aware scheduling <cit.>: The scheduler selects sensor S_b when an incorrect DTMC status estimate occurs, while ignoring the goal. Otherwise, the scheduler selects sensor S_a or remains idle based on its AoI. Simulations are carried out via the Monte Carlo method with 1,000,000 generated packets. To compare the different algorithms, the violation probability and the normalized cost are adopted to indicate plant stability and resource consumption, respectively. The violation probability is calculated as the fraction of all samples in which a violation occurs.
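As a sketch of this evaluation (the `simulate_slot` closure, which stands in for one step of the scheduling–transmission–control loop, is a placeholder of ours):

```python
import numpy as np

# Monte-Carlo estimate of the violation probability: the fraction of
# simulated slots in which |C x_k| exceeds the context-dependent
# threshold zeta_{v_k}.

def violation_probability(simulate_slot, C, zetas, num_slots=1_000_000,
                          seed=0):
    rng = np.random.default_rng(seed)
    violations = 0
    for _ in range(num_slots):
        x, v = simulate_slot(rng)          # plant state and context of one slot
        violations += float(abs(C @ x)) > zetas[v]
    return violations / num_slots
```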
Since we consider different action costs, the normalized total cost is the weighted sum of the action costs divided by the maximum cost, given by (δ_a c_a+δ_b c_b+δ_c c_c)/(K c_a+K c_c). Also, a 10 kHz bandwidth is allocated for sensing updates with a packet size of 2 kbits, using quadrature phase-shift keying modulation <cit.>. For consistency among communication, control, and plant evolution, one time slot is normalized to 1 second. Parameters are provided in Table II unless otherwise noted. §.§ Goal-Oriented Co-Design Strategy Based on the parameters provided in Table II, the violation probability and normalized cost obtained by the different algorithms are shown in Table III. We assume that the transmitter has knowledge of the channel and source statistics. Generally, the proposed goal-oriented co-design policies can significantly reduce the violation probability and normalized cost simultaneously, thanks to the consideration of goal and context information. As shown in Fig. <ref>, our proposed method can reduce the cost by remaining idle when an updating action is not necessary, and it avoids violations by jointly considering the goal-related state value and context information. Specifically, by adopting the CAE scheme, the violation probability is reduced significantly (by up to 40%) with a slight increase in control cost (up to 5.5%). The estimation accuracy is improved, and hence the control action is more efficient. Also, the sensing-assisted control policies can further reduce the violation probability by slightly increasing the number of control actions. Compared to the benchmarks, the proposed GSC+CAE+CRC (denoted CRC hereafter) policy can reduce the violation probability by up to 96.7% and the total cost by up to 64.8%. §.§ Scenarios with Various Noise Variances and Downlink Error Probabilities We investigate the impacts of the noise variance and the error probability on the violation probability and total cost of the different methods. As shown in Figs. <ref> and <ref>, the proposed policies achieve the lowest violation probability (up to a 44% reduction compared to RR) and consume the lowest total cost (up to a 50% reduction compared to RR), especially with a small noise variance and a high error probability. With a small noise variance, accurate state estimation is obtained by the proposed PC and CRC methods, leading to efficient control and a low violation probability. Moreover, the CRC method achieves a lower violation probability with a trivial increase in cost, as the PC method relies on state estimation, which may be inaccurate and result in unstable control. Fig. <ref> investigates the impact of the downlink error probability, where the TARQ scheme is compared with the non-retransmission scheme. It is shown that retransmission is not always beneficial for reducing the violation probability. For the dynamic plant, the TARQ scheme achieves a higher violation probability compared to the non-retransmission scheme. In contrast, for the quasi-static plant (the system matrix is set to 𝐀=[0.08 1.1 0; 0 0 0.25; 0 0 1.1]), the retransmission scheme can obtain a lower violation probability, as stated in Proposition 3. Furthermore, the gap between these two schemes increases with respect to the error probability. This is because the higher the error probability, the more frequently the TARQ scheme is triggered, leading to worse and better performance in the dynamic and quasi-static scenarios, respectively. §.§ Scenarios with Different Cost Constraints, Action Costs, and Error Probabilities for Uplink Transmission
Figs. <ref> and <ref> show the optimized scheduling policy, as well as the associated violation probabilities and costs, with different cost constraints and action costs. It is concluded that with a larger cost constraint, the Lagrangian multiplier is smaller, which leads to a more active scheduling policy and a lower violation probability, as analyzed in Subsection IV-C. Similarly, a lower c_b increases the likelihood of transmitting sensor S_b under the proposed scheduling policy. Interestingly, both the violation probability and the total cost decrease with respect to sensor S_b's cost. This is because scheduling sensor S_b too frequently may lead to insufficient goal-related state information and a lack of control. Fig. <ref> presents the impact of the uplink error probability on the violation probability. It is demonstrated that the proposed co-design scheme achieves the best performance compared to the existing work. Also, the proposed methods are robust to transmission errors. With the proposed CRC method, the violation probability remains low even with a relatively high error probability (e.g., ϵ_a=0.6, ϵ_b=0.4), thanks to accurate remote estimation and robust control. Fig. <ref> compares the two proposed methods under different conservative ratios. It is shown that the violation probability obtained by the CRC-based method is monotonically increasing with respect to the conservative ratio, while the total cost shows the opposite trend. This is because, with a smaller conservative ratio, the CRC-based method is more sensitive to the goal-related plant state, leading to more frequent control. Fig. <ref> also implies a tradeoff between violation probability and cost; appropriate conservative ratios can be selected according to different situations and requirements. § CONCLUSION In this paper, we have investigated goal-oriented estimation, scheduling, and control for a WNCS with heterogeneous sources and two imperfect links. In particular, both the goal-related state value and context information are monitored by multiple sensors and transmitted to the event-triggered controller for calculating control commands. We have provided a comprehensive analysis of the goal of the WNCS with respect to the scheduling and control actions and revealed their relationships. Then, the co-design of scheduling and control has been formulated as a CMDP and simplified to an MDP using the Lagrangian relaxation method. Based on the structural results of the MDP, we have proposed a goal-oriented deterministic scheduling policy to minimize the violation probability under the cost constraints. Furthermore, control-aware estimation and sensing-assisted control policies have been proposed to reduce the violation probability further. To further improve the transmission reliability and enhance control stability, we have compared the retransmission scheme with the non-retransmission case and analyzed the scenarios in which each is preferable. Simulation results have shown that our proposed methods can achieve up to a 96.7% violation-probability reduction and up to a 64.8% cost reduction compared to the existing scheduling methods. Moreover, the impacts of the system parameters on the scheduling policy and on the optimized violation probability and cost have been investigated comprehensively. Simulation results have also demonstrated that the proposed methods are robust to transmission errors. In the future, we will consider a more practical WNCS with two-way random delay and imperfect feedback links.
Furthermore, more heterogeneous traffic sources and advanced control strategies will be included in future extensions of this work. § STATE TRANSITION PROBABILITIES With s=(Δ,Υ=2), the transition probability is given by
P(s'|s,α) =
ϵ_a p̅, if α=1, s'=(Δ+1,Υ=2);
ϵ_a p, if α=1, s'=(Δ+1,Υ=4);
(1-ϵ_a) p̅, if α=1, s'=(1,Υ=2);
(1-ϵ_a) p, if α=1, s'=(1,Υ=4);
(1-ϵ_b) p̅, if α=2, s'=(Δ+1,Υ=1);
(1-ϵ_b) p, if α=2, s'=(Δ+1,Υ=3);
ϵ_b p̅, if α=2, s'=(Δ+1,Υ=2);
ϵ_b p, if α=2, s'=(Δ+1,Υ=4);
p̅, if α=3, s'=(Δ+1,Υ=2);
p, if α=3, s'=(Δ+1,Υ=4);
0, otherwise.
With s=(Δ,Υ=3), the transition probability is given by
P(s'|s,α) =
ϵ_a p̅, if α=1, s'=(Δ+1,Υ=3);
ϵ_a p, if α=1, s'=(Δ+1,Υ=1);
(1-ϵ_a) p̅, if α=1, s'=(1,Υ=3);
(1-ϵ_a) p, if α=1, s'=(1,Υ=1);
(1-ϵ_b) p̅, if α=2, s'=(Δ+1,Υ=4);
(1-ϵ_b) p, if α=2, s'=(Δ+1,Υ=2);
ϵ_b p̅, if α=2, s'=(Δ+1,Υ=3);
ϵ_b p, if α=2, s'=(Δ+1,Υ=1);
p̅, if α=3, s'=(Δ+1,Υ=3);
p, if α=3, s'=(Δ+1,Υ=1);
0, otherwise.
With s=(Δ,Υ=4), the transition probability is given by
P(s'|s,α) =
ϵ_a p̅, if α=1, s'=(Δ+1,Υ=4);
ϵ_a p, if α=1, s'=(Δ+1,Υ=2);
(1-ϵ_a) p̅, if α=1, s'=(1,Υ=4);
(1-ϵ_a) p, if α=1, s'=(1,Υ=2);
(1-ϵ_b) p̅, if α=2, s'=(Δ+1,Υ=4);
(1-ϵ_b) p, if α=2, s'=(Δ+1,Υ=2);
ϵ_b p̅, if α=2, s'=(Δ+1,Υ=4);
ϵ_b p, if α=2, s'=(Δ+1,Υ=2);
p̅, if α=3, s'=(Δ+1,Υ=4);
p, if α=3, s'=(Δ+1,Υ=2);
0, otherwise.
§ PROOF OF PROPOSITION 3 Suppose that sensor S_a is scheduled at time slot k. Then we have 𝐱_k and 𝐱̂_k+1. After n-1 retransmissions, the plant state is given by 𝐱^ARQ_k+1+n =𝐀𝐞_k+n+𝐰_k+n+(𝐀^n+1-𝐀^2)𝐱_k. The plant-state covariance is given by
Φ_k+1+n^ARQ = 𝐀Θ_k+n𝐀^⊤+𝐑_𝐰+Σ_k,n+1, if ω_c,k+n=1; 𝐀Φ_k+n𝐀^⊤+𝐑_𝐰, otherwise,
where Σ_k,n+1=(𝐀^n+1-𝐀^2)Φ_k(𝐀^n+1-𝐀^2)^⊤. The expectation is given by 𝔼[Φ_k+1+n]^ARQ= (1-ϵ^n)Z_1+ϵ^nZ_2, where Z_1=𝐀Θ_k+n𝐀^⊤+𝐑_𝐰+Σ_k,n+1 and Z_2=𝐀Φ_k+n𝐀^⊤+𝐑_𝐰. In comparison, with n-1 transmissions under the non-retransmission policy, we have 𝐱_k+1+n^Non=𝐀𝐞_k+n+𝐰_k+n. Then the expectation of the state covariance is given by 𝔼[Φ_k+1+n]^Non= (1-ϵ)(Z_1-Σ_k,n+1)+ϵ Z_2. Based on (<ref>) and (<ref>), the gap between the two schemes can be obtained, as in (<ref>). Then we analyze the relationship between the gap and the system matrix as well as the error probability. First, for a quasi-static plant, i.e., ρ(𝐀)→1, we have Σ_k,n+1→0. In this case, the second term in (<ref>) approaches 0. Hence, 𝔼[𝐃]<0 holds, since 𝐀Θ_k+n𝐀^⊤-𝐀Φ_k+n𝐀^⊤=-𝐀^n+1Φ_k(𝐀^n+1)^⊤<0 is true based on (<ref>) and (<ref>). Furthermore, with a larger error probability, (ϵ-ϵ^n) is larger, leading to a smaller 𝔼[𝐃]. For a dynamic plant, i.e., ρ(𝐀)≫1, we have Σ_k,n+1≈𝐀^n+1Φ_k(𝐀^n+1)^⊤. Therefore, we have 𝔼[𝐃]>0 based on the fact that Σ_k,n+1>0 and ϵ<1. Moreover, in this case, a larger gap is obtained with a larger error probability. This is because a larger error probability also leads to a larger number of retransmissions, resulting in a larger Σ_k,n+1 and hence a larger gap. 1 IEEEtran globecom23_CJ J. Cao, E. Kurniawan, A. Boonkajay and S. Sun, “Goal-Oriented Scheduling and Control Co-Design in Wireless Networked Control Systems,” in Proc. IEEE Globecom, KL, Malaysia, Dec. 2023. 8166737 P. Park, S. Coleri Ergen, C. Fischione, C. Lu and K. H. Johansson, “Wireless Network Design for Control Systems: A Survey,” IEEE Commun. Surv. & Tutor., vol. 20, no. 2, pp. 978-1013, Second Quarter 2018. 10012674 J. Cao, X. Zhu, S. Sun, P. Popovski, S. Feng and Y. Jiang, “Age of Loop for Wireless Networked Control System in the Finite Blocklength Regime: Average, Variance and Outage Probability,” IEEE Trans. Wirel. Commun., vol. 22, no. 8, pp. 5306-5320, Aug. 2023. secon2023 P. Kutsevol, O. Ayan, N. Pappas and W.
Kellerer, “Experimental Study of Transport Layer Protocols for Wireless Networked Control Systems,” IEEE SECON, Sep. 2023. 2023arXiv230304908F E. Fountoulakis, N. Pappas, and M. Kountouris, “Goal-oriented Policies for Cost of Actuation Error Minimization in Wireless Autonomous Systems,” IEEE Commun. Lett., vol. 27, no. 9, 2023. 8558500 G. Zhao, M. A. Imran, Z. Pang, Z. Chen and L. Li, “Toward Real-Time Control in Future Wireless Networks: Communication-Control Co-Design,” IEEE Commun. Mag., vol. 57, no. 2, pp. 138-144, Feb. 2019. 6195689 S. Kaul et al. “Real-Time Status: How Often Should One Update?” in proc. IEEE INFOCOM,Orlando, FL, USA, Mar. 2012, 8845114 J. P. Champati, at al, “Performance Characterization Using AoI in a Single-loop Networked Control System,”inproc. IEEE INFOCOM Workshops, Paris, France, 2019. 9024463 G. Stamatakis, N. Pappas, A. Traganitis, “Control of Status Updates for Energy Harvesting Devices that Monitor Processes with Alarms,“ IEEE GLOBECOM Workshops, Waikoloa, HI, USA, Dec. 2019. 9137714 A. Maatouk, S. Kriouile, M. Assaad and A. Ephremides, “The Age of Incorrect Information: A New Performance Metric for Status Updates,” IEEE/ACM Trans. Netw., vol. 28, no. 5, 2020. 8995639 H. Pan and S. C. Liew, “Information Update: TDMA or FDMA?,” IEEE Wirel. Commun. Lett., vol. 9, no. 6, Jun. 2020. 9162973 H. Chen, Y. Gu and S. -C. Liew, “Age-of-Information Dependent Random Access for Massive IoT Networks,” in proc. IEEEINFOCOM Worskhops, Toronto, ON, Canada, 2020. 9836031 G. Zhang, C. Shen, Q. Shi, B. Ai and Z. Zhong, “AoI Minimization for WSN Data Collection With Periodic Updating Scheme,” IEEE Trans. Wirel. Commun., vol. 22, no. 1, 2023. 2022arXiv220708996S Sadeghi Vilni, S., Moltafet, M., Leinonen, M., and Codreanu, M., “Multi-Source AoI-Constrained Resource Minimization under HARQ: Heterogeneous Sampling Processes,” arXiv e-prints,doi:10.48550/arXiv.2207.08996, 2022. 9695972 Y. H. Bae and J. W. Baek, “Age of Information and Throughput in Random Access-Based IoT Systems With Periodic Updating,” IEEE Wirel. Commun. Lett., vol. 11, no. 4, 2022. 9146773 X. Zheng, S. Zhou and Z. Niu, “Urgency of Information for Context-Aware Timely Status Updates in Remote Control Systems,” IEEE Trans.Wirel. Commun., vol. 19, no. 11, Nov. 2020. 2023arXiv230300507N A. Nikkhah, A. Ephremides and N. Pappas, "Age of Actuation in a Wireless Power Transfer System," IEEE INFOCOM Workshops, 2023. 10181305 R. Li, C. Huang, X. Qin, S. Jiang, N. Ma and S. Cui, “Coexistence between Task-and Data-Oriented Communications: A Whittle’s Index Guided Multi-Agent Reinforcement Learning Approach,”IEEE InternetThings J., Early Access, 2023. 9551200 N. Pappas and M. Kountouris, “Goal-Oriented Communication For Real-Time Tracking In Autonomous Systems,” inproc. IEEE ICAS, 2021. ICCW23 M. Salimnejad, M. Kountouris and N. Pappas, “Real-time Remote Reconstruction of a Markov Source and Actuation over Wireless,” in proc. IEEE ICC Workshops, Rome, Italy, 2023. JCN23 M. Salimnejad, M. Kountouris and N. Pappas, “State-aware real-time tracking and remote reconstruction of a Markov source,” Journal of Communications and Networks, vol. 25, no. 5, Oct. 2023. 9478879 L. Scheuvens, T. Hößler, P. Schulz, N. Franchi, A. N. Barreto and G. P. Fettweis, “State-Aware Resource Allocation for Wireless Closed-Loop Control Systems,” IEEE Trans. Commun., vol. 69, no. 10, Oct. 2021. 8629300 B. Cheng and Z. Li, “Coordinated Tracking Control With Asynchronous Edge-Based Event-Triggered Communications,” IEEE Trans. Automat. Contr., vol. 
64, no. 10, Oct. 2019. 9369024 M. Balaghiinaloo, D. J. Antunes, M. H. Mamduhi and S. Hirche,“Decentralized LQ-Consistent Event-Triggered Control Over a Shared Contention-Based Network,”Trans. Automat. Contr., vol. 67, no. 3, 2022. 10004994 W. Luo, P. Lu, H. Liu and C. Du, “Event-Triggered Networked Predictive Output Tracking Control of Cyber–Physical Systems With Model Uncertainty and Communication Constraints,”IEEE Trans.Circuits and Systems II: Express Briefs, vol. 70, no. 6, June 2023. 9690022 Y. Ma et al., “Optimal Dynamic Transmission Scheduling for Wireless Networked Control Systems,”IEEE Trans. Contr. Syst. Tech., vol. 30, no. 6, 2022. 9052443 L. An and G. -H. Yang, “Optimal Transmission Power Scheduling of Networked Control Systems Via Fuzzy Adaptive Dynamic Programming,”IEEE Trans.Fuzzy Syst., vol. 29, no. 6, 2021. 9614347 L. Chen et al., “Control-Aware Transmission Scheduling for Industrial Network Systems Over a Shared Communication Medium,” IEEE InternetThings J., vol. 9, no. 13, 2022. 8734802 D. Baumann, F. Mager, M. Zimmerling and S. Trimpe, “Control-Guided Communication: Efficient Resource Arbitration and Allocation in Multi-Hop Wireless Control Systems,”IEEE Contr. Syst. Lett., vol. 4, no. 1, 2020. 9729746 B. Chang, W. Tang, X. Yan, X. Tong and Z. Chen, “Integrated Scheduling of Sensing, Communication, and Control for mmWave/THz Communications in Cellular Connected UAV Networks,” IEEE J.Sel. AreasCommun., vol. 40, no. 7, 2022. 10034831 X. Lu et al., “Full-loop AoI Based Joint Design of Control and Deterministic Transmission for Industrial CPS,” IEEE Trans. Industr. Inform., Early Access, 2023. 9493202 A. M. Girgis et al., “Predictive Control and Communication Co-Design via Two-Way Gaussian Process Regression and AoI-Aware Scheduling,” IEEE Trans.Commun., vol. 69, no. 10, Oct. 2021. 9599512 J. Hahn, R. Schoeffauer, G. Wunder and O. Stursberg, “Using AoI Forecasts in Communicating and Robust Distributed Model-Predictive Control,” IEEE Trans. Contr.Netw. Syst., vol. 9, no. 2, 2022. 9807392 C. Lin et al., “Event-Triggered Load Frequency Control Based on Age-of-Information,” IEEE Trans.Power Syst., vol. 38, no. 3, 2023. 9305697 X. Wang, C. Chen, J. He, S. Zhu and X. Guan, “AoI-Aware Control and Communication Co-Design for Industrial IoT Systems,” IEEE InternetThings J., vol. 8, no. 10, 2021. 8865111 K. Huang, W. Liu, Y. Li, B. Vucetic and A. Savkin, “Optimal Downlink–Uplink Scheduling of Wireless Networked Control for Industrial IoT,” IEEE Internet of Things J., vol. 7, no. 3, 2020. 10202236 X. Wang, J. Zhang, C. Chen, J. He, Y. Ma and X. Guan, “Trust-AoI-Aware Codesign of Scheduling and Control for Edge-Enabled IIoT Systems,” IEEE Trans. Industr. Inform., Early Access, 2023. 9690057 J. Cao et al., “Independent Pilots Versus Shared Pilots: Short Frame Structure Optimization for Heterogeneous-Traffic URLLC Networks,” IEEE Trans. Wirel. Commun., vol. 21, no. 8, Aug. 2022. 9955525 D. Gündüz et al., “Beyond Transmitting Bits: Context, Semantics, and Task-Oriented Communications,” IEEE J. Sel. Areas Commun., vol. 41, no. 1, Jan. 2023. petar_arxiv Gündüz, D., Chiariotti, F., Huang, K., Kalør, A. E., Kobus, S., and Popovski, P., “Timely and Massive Communication in 6G: Pragmatics, Learning, and Inference,” arXiv e-prints, doi:10.48550/arXiv.2306.17580. 9165797 M. Qin et al., “Service-Oriented Energy-Latency Tradeoff for IoT Task Partial Offloading in MEC-Enhanced Multi-RAT Networks,”IEEE Internet Things J., vol. 8, no. 3, 2021. 10138567 K. Wang, D. Niyato, W. Chen and A. 
Nallanathan, “Task-Oriented Delay-Aware Multi-Tier Computing in Cell-Free Massive MIMO Systems,” IEEE J.Sel. Areas Commun., vol. 41, no. 7, 2023. 10005612 Z. Wang, R. Liu, Q. Liu, L. Han, Y. Wu and J. S. Thompson, “QoS-Oriented Sensing–Communication–Control Co-Design for UAV-Enabled Positioning,”IEEE Trans. Green Commun. Netw., vol. 7, no. 1, 2023. 10049657 B. Kizilkaya, C. She, G. Zhao and M. A. Imran, “Task-Oriented Prediction and Communication Co-Design for Haptic Communications,” IEEE Trans.Veh. Technol., vol. 72, no. 7, July 2023. 9382948 S. Kuppusamy, Y. H. Joo and H. S. Kim, “Asynchronous Control for Discrete-Time Hidden Markov Jump Power Systems,”IEEE Trans. Cybern., vol. 52, no. 9, Sep. 2022. TSG2949998 A. S. Musleh, G. Chen and Z. Y. Dong, “A Survey on the Detection Algorithms for False Data Injection Attacks in Smart Grids,” IEEE TransSmart Grid, vol. 11, no. 3, May 2020. 3GPP_service Service Requirements for Cyber-Physical Control Applications in Vertical Domains; Stage-1 (Release 17), document TS 22.104, 3GPP, Jun. 2019. | http://arxiv.org/abs/2312.16061v2 | {
"authors": [
"Jie Cao",
"Ernest Kurniawan",
"Amnart Boonkajay",
"Nikolaos Pappas",
"Sumei Sun",
"Petar Popovski"
],
"categories": [
"eess.SY",
"cs.SY",
"eess.SP"
],
"primary_category": "eess.SY",
"published": "20231226141555",
"title": "Goal-Oriented Communication, Estimation, and Control over Bidirectional Wireless Links"
} |
[email protected] NTT Communication Science Laboratories, NTT Corporation, 3-1 Morinosato Wakamiya, Atsugi, Kanagawa 243-0198, Japan NTT Research Center for Theoretical Quantum Information, NTT Corporation, 3-1 Morinosato Wakamiya, Atsugi, Kanagawa 243-0198, Japan 0 Measurement-based quantum computation is a universal quantum computing model that proceeds by adaptively measuring qubits of a resource state one by one. There exist two types of universality: strict and computational universalities. The former means that any unitary operator can be implemented with an arbitrary high accuracy. The latter cannot do it but can generate the output probability distribution of any quantum circuit with an arbitrary high accuracy. It is well known that the former is stronger than the latter. In this letter, we give a method for transforming from a certain type of computationally-universal measurement-based quantum computation to the strictly-universal one. Our method simply replaces a single qubit in the resource state of the computational-universal one with a Pauli-Y eigenstate. As an application of our tool, we show that hypergraph states can be made strictly universal with only Pauli measurements, while only computationally-universal hypergraph states were known so far. Our results imply that the hardness of realizing strictly-universal quantum computers is almost the same as that of realizing computationally-universal one. There exist two types of universality in measurement-based quantum computation (MBQC): strict and computational universalities. It is well known that the former is stronger than the latter. In this paper, we give a method of transforming from a certain type of computationally-universal MBQC to the strictly-universal one. Our method simply replaces a single qubit in a resource state with a Pauli-Y eigenstate. We apply our method to show that hypergraph states can be made strictly universal with only Pauli measurements, while only computationally-universal hypergraph states were known so far.Catalytic Transformation from Computationally-Universal to Strictly-Universal Measurement-Based Quantum Computation Yuki Takeuchi===================================================================================================================Quantum computers solve several problems faster than classical computers with the best known classical algorithms <cit.>. Driven by this advantage, tremendous effort was devoted to realize quantum computers, and several quantum computing models were proposed such as quantum circuit model <cit.>, measurement-based quantum computation (MBQC) <cit.>, and adiabatic quantum computation <cit.>. Although these models have unique features, respectively, they are the same in light of the computational capability (i.e., what problems can be solved in polynomial time). More concretely, these models can perform “any" quantum computation, and hence they are called universal quantum computing models.There exist two types of universality in quantum computation <cit.>. One is strict universality, which is the strongest notion of universality. It means that any unitary operator can be implemented with an arbitrary high accuracy, and hence any quantum state can also be generated. However, a restricted class of unitary operators is sufficient to generate the output probability distribution of any quantum circuit with an arbitrary high accuracy <cit.>. Therefore, we can define a weaker notion of universality, which is so-called computational universality. 
To clarify the difference between these two notions, let us consider the quantum circuit model with n initialized input qubits |0^n⟩. In this model, the universality is determined by gate sets. {H, T, Λ(Z)} and {H, Λ(S)} are examples of strictly-universal gate sets. Here, H≡|+⟩⟨ 0|+|-⟩⟨ 1|, where |±⟩≡(|0⟩±|1⟩)/√(2), is the Hadamard gate, T≡|0⟩⟨ 0|+e^iπ/4|1⟩⟨ 1| is the T gate, Λ(U)≡|0⟩⟨ 0|⊗ I+|1⟩⟨ 1|⊗ U is the controlled-U gate for any single-qubit unitary operator U, I is the two-dimensional identity gate, Z≡ T^4 is the Pauli-Z gate, and S≡ T^2 is the S gate. On the other hand, real unitary operators are sufficient to construct the computationally-universal gate set {H, CCZ} <cit.>, where CCZ≡ I^⊗ 3-2|111⟩⟨ 111| is the controlled-controlled-Z (CCZ) gate. By definition, it is trivial that strictly-universal gate sets are also computationally universal, but the opposite does not hold. The computationally-universal gate set {H, CCZ} is insufficient to realize quantum states whose amplitudes include imaginary numbers. For example, the n-qubit quantum state |ψ_t⟩=(|0^n⟩+i|1^n⟩)/√(2) cannot be generated with the fidelity larger than 1/2. This is because any quantum state generated by applying H and CCZ gates to |0^n⟩ is written as |ϕ_r⟩=∑_z∈{0,1}^nc_z|z⟩ with real numbers {c_z}_z∈{0,1}^n satisfying ∑_z∈{0,1}^nc_z^2=1, and hence |⟨ψ_t|ϕ_r⟩|^2=(c_0^n^2+c_1^n^2)/2≤1/2.As mentioned above, MBQC is a universal quantum computing model <cit.>. It proceeds by adaptively measuring qubits of a resource state one by one. Its universality is determined by a given resource state (and available measurement bases). For the both kinds of universality, several resource states were proposed. Cluster states <cit.>, Affleck-Kennedy-Lieb-Tasaki (AKLT) states <cit.>, and parity-phase graph states <cit.>, which are weighted graph states <cit.>, are common examples of strictly-universal resource states. Meanwhile, for the computational universality, several hypergraph states were found, and they require only Pauli measurements <cit.>. As with the quantum circuit model, strictly-universal resource states are trivially computationally universal, but the opposite is not true. Despite the importance of understanding the hardness of realizing strictly-universal quantum computers, the gap between these two types of universality was less explored in MBQC.In this paper, we give a method of transforming to the strictly-universal MBQC from any computationally-universal one that can exactly implement H and CCZ. Our method is quite simple such that it just replaces a qubit in a resource state of the computationally-universal MBQC with a Pauli-Y eigenstate |+i⟩≡(|0⟩+i|1⟩)/√(2). Since the added |+i⟩ works like a catalyst in chemistry, we call our transformation catalytic transformation. In fact, |+i⟩ realizes the strictly-universal MBQC, while it is invariable during the MBQC (see Fig. <ref> (b)). As an advantage of our transformation, the required measurement bases are the same before and after the transformation. To devise our method, we first show that |1⟩ can be deterministically prepared by applying H and CCZ gates to |000⟩. Then, we also show that S is deterministically applicable to any quantum state |ψ⟩ by applying H and CCZ gates to |1⟩|+i⟩|ψ⟩. As an important point, |+i⟩ is not consumed when we implement the S gate, and hence it can be repeatedly used to apply a number of the S gates. 
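This catalytic action is easy to check numerically. In the reduced two-qubit picture (the |1⟩ ancilla turns each CCZ of the circuit into a CZ on the remaining two wires), the following sketch verifies that the H/CZ sequence used in the proof of Appendix A applies S to |ψ⟩ while returning the catalyst |+i⟩ unchanged:

```python
import numpy as np

# Check: CZ (H x I) CZ (H x H) CZ (I x H) maps |+i> (x) |psi>
# to S|psi> (x) |+i>, i.e., S is applied and the catalyst survives.
I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S = np.diag([1, 1j])
CZ = np.diag([1, 1, 1, -1])

U = CZ @ np.kron(H, I2) @ CZ @ np.kron(H, H) @ CZ @ np.kron(I2, H)

plus_i = np.array([1, 1j]) / np.sqrt(2)
rng = np.random.default_rng(0)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)                 # random single-qubit state

out = U @ np.kron(plus_i, psi)
target = np.kron(S @ psi, plus_i)          # S applied, catalyst intact
assert np.allclose(out, target)            # holds up to machine precision
```

The check passes for every |ψ⟩, consistent with the claim that the catalyst can be reused an arbitrary number of times.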
In summary, by using our method, we obtain the strictly-universal MBQC that can perform any quantum computation composed of {H,S,CCZ}.As a weakness of our method, the S gates cannot be applied in parallel because each S requires a single |+i⟩, but the transformed resource state includes only a single |+i⟩. Toward relaxing this weakness, we also propose a method of duplicating |+i⟩. Its construction is inspired by the |T⟩-catalyzed |CCZ⟩→2|T⟩ factory (C2T factory) <cit.>, which is a technique in quantum error correction. Here, |T⟩≡ T|+⟩ and |CCZ⟩≡ CCZ(|+⟩^⊗ 3) are magic states <cit.> for the T and CCZ gates, respectively. The C2T factory outputs |T⟩^⊗ 3 by applying Clifford gates to |CCZ⟩|T⟩. In keeping with terminologies in Refs. <cit.>, the third output qubit is called a catalyst. As an interesting point, the third output qubit can be used as an input of the next C2T factory, and hence we can reinterpret it as a method of generating |T⟩^⊗ 2 from |CCZ⟩. We propose a similar technique for the magic state |+i⟩ of the S gate. By using it, we generate |+i⟩^⊗ 2 by applying H and CCZ gates to |1⟩|0⟩|+i⟩, and the second output qubit can be similarly used as an input of the next duplication.To concretely reveal the usefulness of our method, we apply our transformation to the MBQC with the hypergraph state in Ref. <cit.>. The MBQC in Ref. <cit.> realizes the computationally-universal quantum computation with only Pauli-X and Z basis measurements. In any MBQC with hypergraph states, |+i⟩ can be prepared by a measurement in the Pauli-Y basis. Therefore, our transformation shows that there exist strictly-universal hypergraph states with measurements in the Pauli-X, Y, and Z bases, while as far as we know, only computationally-universal hypergraph states were known for Pauli measurements.Strictly-universal quantum computation with {H,S,CCZ}.—As a preliminary to our main result, we show that the gate set {H,S,CCZ} is sufficient for strictly-universal quantum computation. A set 𝒢 of quantum gates is called strictly universal if there exits a positive constant n_0 such that the subgroup of unitary operators generated by 𝒢 is dense in the special unitary group SU(2^n) for any natural number n≥ n_0 <cit.>. Simply speaking, by combining quantum gates in a strictly-universal gate set, any unitary operator can be constructed with an arbitrary high accuracy. In Ref. <cit.>, {H,Λ(S)} was shown to be a strictly-universal gate set with n_0=2. With this fact in mind, it is sufficient for our purpose to give a decomposition of Λ(S) in terms of H, S, and CCZ gates. We give the decomposition in Fig. <ref>, and the equality in Fig. <ref> can be shown as follows: The quantum circuit in the right-hand side of Fig. <ref> applies HZ^jkHSHZ^jkH=X^jkSX^jk to the third qubit |0⟩ when the state of the first and second qubits is |jk⟩ with j,k∈{0,1}. Here, X≡|1⟩⟨0|+|0⟩⟨ 1| is the Pauli-X gate. It means that |0⟩ becomes i|0⟩ if and only if j=k=1, and it does not change in other cases. It is trivial that for any j and k, the state |jk⟩ of the first and second qubits is invariable in this quantum circuit.Therefore, the quantum circuit in the right-hand side has the same function as the controlled-S gate on the first and second qubits.Main result.—Resource states for MBQC consist of three sections: an input section 𝒞_I, a body 𝒞_M, and an output section 𝒞_O <cit.>. For any natural number n, let V_n be any n-qubit unitary operator composed of {H,CCZ}. 
In this paper, we call the MBQC computationally universal if and only if for any n≥ n_0, V_n can be applied on 𝒞_O by measuring all qubits in 𝒞_I∪𝒞_M one by one in appropriate bases. Let |Ψ_n⟩ be a computationally-universal resource state with n input qubits |ψ_in(n)⟩≡(⊗_i=1^nU_in^(i))|0^n⟩, where the single-qubit unitary operator U_in^(i) is I or H for each 1≤ i≤ n. Our purpose is to transform |Ψ_n⟩ into a strictly-universal resource state that deterministically realizes any unitary operator composed of {H,S,CCZ} (up to a byproduct). To this end, we first expand the size from n to N=n+2 regardless of the number of S gates to be applied. Then, we replace a single qubit in 𝒞_I of |Ψ_N⟩ with |+i⟩ as shown in Fig. <ref> <cit.>. By using this transformed resource state, the strictly-universal quantum computation is performed on the first n input qubits |ψ_in(n)⟩ with the aid of the two ancillary qubits |ψ_in(1)⟩|+i⟩. The MBQC on the transformed resource state proceeds as follows: * Initialize |ψ_in(n)⟩|ψ_in(1)⟩|+i⟩ to |0^n+1⟩|+i⟩. This can be accomplished without the added |+i⟩ because the given resource state is computationally universal, and |ψ_in(n)⟩|ψ_in(1)⟩ is a tensor product of |0⟩'s and/or |+⟩'s. * Run the quantum circuit in Fig. <ref> (a) on the (n+1)-th, n-th, and (n-1)-th input qubits |000⟩. As a result, the (n+1)-th qubit becomes |1⟩, which will be used to implement the S gate in step 3 (for the proof, see Appendix A). This step can also be implemented without the added |+i⟩ because the given resource state is computationally universal. * Depending on which quantum gate we want to apply on the first n qubits, perform one of the following procedures: * When H or CCZ is applied, perform the corresponding measurements in the original computationally-universal MBQC. * When S is applied, run the quantum circuit in Fig. <ref> (b) by using the (n+1)-th and (n+2)-th input qubits |1⟩|+i⟩ (for the proof, see Appendix A). Note that |1⟩|+i⟩ is invariable in step 3, and hence the S gate can be applied at any time. In other words, |1⟩|+i⟩ can be utilized recursively. * Finally, a desired output state is generated on the first n qubits in 𝒞_O (up to a byproduct). As an advantage of our strategy, the set of required measurement bases does not change before and after our transformation. This is because we use only H and CCZ gates in the above algorithm. Toward improving parallelizability.—As discussed before, the S gates cannot be applied in parallel in our algorithm. Toward relaxing this weakness, we give a quantum circuit for duplicating |+i⟩ in Fig. <ref>. Its correctness can be shown as follows:
CCZ(I⊗ I⊗ H)CCZ(I⊗ H⊗ H)(|10⟩⊗|+i⟩)
= {I⊗[Λ(Z)(I⊗ H)Λ(Z)(H⊗ H)]}(|10⟩⊗|+i⟩)
= [I⊗(Λ(Z)Λ(X))](|1⟩⊗|+⟩⊗|+i⟩)
= (I⊗Λ(iY))(|1⟩⊗|+⟩⊗|+i⟩)
= |1⟩⊗|+i⟩^⊗ 2,
where Y≡ i|1⟩⟨ 0|-i|0⟩⟨ 1| is the Pauli-Y gate. Application to MBQC with hypergraph states.—Hypergraph states are generalizations of graph states <cit.>. Let G≡(V,E_2,E_3) be a triplet of the set V of m vertices, the set E_2 of edges connecting two vertices, and the set E_3 of hyperedges connecting three vertices. Note that, in general, hyperedges connecting more than three vertices are also allowed, but they are unnecessary in this section. An m-qubit hypergraph state |G⟩ corresponding to the hypergraph G is defined as
(∏_(j,k,l)∈ E_3CCZ_jkl)(∏_(j,k)∈ E_2Λ(Z)_jk)|+⟩^⊗ m,
where the subscript represents which qubits the quantum gate is applied to.
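Because CZ and CCZ are diagonal, such a state can be written down directly; the following sketch (an illustrative helper of ours, with qubit 0 taken as the most significant bit) builds |G⟩ as a dense state vector from (V,E_2,E_3):

```python
import numpy as np

# Build the m-qubit hypergraph state |G> = (prod CCZ)(prod CZ) |+>^{m}.
# Each (hyper)edge just flips the sign of the amplitudes whose bits are
# all 1 on the edge, since CZ and CCZ are diagonal.

def hypergraph_state(m, edges2, edges3):
    psi = np.full(2 ** m, 2 ** (-m / 2))         # |+>^{\otimes m}
    idx = np.arange(2 ** m)
    bit = lambda q: (idx >> (m - 1 - q)) & 1     # bit of qubit q per basis state
    for (j, k) in edges2:                        # CZ on (j, k)
        psi[(bit(j) & bit(k)) == 1] *= -1
    for (j, k, l) in edges3:                     # CCZ on (j, k, l)
        psi[(bit(j) & bit(k) & bit(l)) == 1] *= -1
    return psi
```

For example, hypergraph_state(3, [], [(0, 1, 2)]) returns CCZ|+++⟩, the three-qubit magic state mentioned earlier.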
The hypergraph state in Ref. <cit.> was shown to be computationally universal and is prepared by applying CZ gates on Θ(n^4d) |+⟩'s and Θ(n^3d) small hypergraph states in Fig. <ref> (a), where n and d are the number of input qubits and the depth under the gate set {H, CCZ}, respectively. From our argument in the previous sections, the hypergraph state in Ref. <cit.> can be made strictly universal by replacing an input qubit with a single |+i⟩. Such a replacement can be accomplished by modifying one of the small hypergraph states as in Fig. <ref> (b). By measuring the added qubit in the Pauli-Y basis, the third input state becomes |+i⟩ (up to a byproduct), due to gate teleportation. Since the original hypergraph state in Ref. <cit.> requires only Pauli-X and Z basis measurements for computational universality, the transformed hypergraph state achieves strict universality by using a single Pauli-Y basis measurement in addition to those Pauli measurements. Conclusion and discussion.—We have proposed a transformation from computationally-universal MBQC to strictly-universal MBQC that simply replaces a single input qubit with a catalyst |+i⟩. We hope that our result facilitates the discovery of novel strictly-universal resource states. By applying our transformation to the hypergraph state in Ref. <cit.>, we have constructed a strictly-universal hypergraph state. From Fig. <ref>, our result should imply that the gap between computational and strict universality is smaller than expected so far. In fact, our constructed strictly-universal hypergraph state has the same amount of magic (i.e., nonstabilizerness) as the computationally-universal hypergraph state in Ref. <cit.> when we quantify it by the stabilizer rank <cit.>. It would be interesting to widely explore the gap between computationally-universal and strictly-universal MBQC with Pauli measurements from the viewpoint of magic (see, e.g., Ref. <cit.>). In Ref. <cit.>, the strict universality of weighted graph states with Pauli-X and Z basis measurements was shown. On the other hand, our hypergraph state also requires a Pauli-Y measurement to achieve strict universality. In this sense, although our result reduces the gap between them, weighted graph states are still slightly superior to hypergraph states. However, with respect to verifiability (i.e., how easily the fidelity between the ideal state and an actually realized state can be estimated), hypergraph states are conversely superior to weighted graph states: only Pauli-X and Z basis measurements are sufficient for hypergraph states <cit.>, whereas non-Pauli measurements are required for weighted graph states <cit.>. It would be interesting to investigate more deeply the differences between these two generalizations of graph states. As a related work, the deformation from the star-lattice AKLT state, which is not expected to be universal, to a universal graph state was investigated in Ref. <cit.> to observe a phase transition in computational power. Our work may also be used to observe a computational phase transition between the two types of universality by gradually deforming an input qubit of a computationally-universal resource state from its original state to |+i⟩. As another future work, it would be interesting to generalize our result to other non-strictly-universal gate sets. We thank Seiichiro Tani and Seiseki Akibue for helpful discussions.
YT is supported by the MEXT Quantum Leap Flagship Program (MEXT Q-LEAP) Grant Number JPMXS0120319794, JST [Moonshot R&D – MILLENNIA Program] Grant Number JPMJMS2061, and the Grant-in-Aid for Scientific Research (A) No.JP22H00522 of JSPS.99 S97P. W. Shor, Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer, SIAM J. Comput. 26, 1484 (1997). WZ06P. Wocjan and S. Zhang, Several natural BQP-Complete problems, arXiv:quant-ph/0606179. S06D. Shepherd, Computation with Unitaries and One Pure Qubit, arXiv:quant-ph/0608132. NC10M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information 10th Anniversary Edition (Cambridge University Press, Cambridge, 2010). RB01R. Raussendorf and H. J. Briegel, A One-Way Quantum Computer, Phys. Rev. Lett. 86, 5188 (2001). RBB03R. Raussendorf, D. E. Browne, and H. J. Briegel, Measurement-based quantum computation on cluster states, Phys. Rev. A 68, 022312 (2003). ADKLLR07D. Aharonov, W. van Dam, J. Kempe, Z. Landau, S. Lloyd, and O. Regev, Adiabatic Quantum Computation is Equivalent to Standard Quantum Computation, SIAM J. Comput. 37, 166 (2007). A03D. Aharonov, A Simple Proof that Toffoli and Hadamard are Quantum Universal, arXiv:quant-ph/0301040. S02Y. Shi, Both Toffoli and Controlled-NOT need little help to do universal quantum computation, arXiv:quant-ph/0205115. FN1There exist infinitely many computationally-universal gate sets other than the commonly-used gate set {H,CCZ}. In this paper, we focus on computationally-universal gate sets that can exactly implement H and CCZ, and our result can be applied to any such gate set, e.g., {√(H), CCZ} and {H, CCZ}∪ R with R being any set of quantum gates whose elements are real numbers in the Pauli-Z basis. BR01H. J. Briegel and R. Raussendorf, Persistent Entanglement in Arrays of Interacting Particles, Phys. Rev. Lett. 86, 910 (2001). WAR11T.-C. Wei, I. Affleck, and R. Raussendorf, Affleck-Kennedy-Lieb-Tasaki State on a Honeycomb Lattice is a Universal Quantum Computational Resource, Phys. Rev. Lett. 106, 070501 (2011). KW19A. Kissinger and J. van de Wetering, Universal MBQC with generalised parity-phase interactions and Pauli measurements, Quantum 3, 134 (2019). DHHLB05W. Dür, L. Hartmann, M. Hein, M. Lewenstein, and H.-J. Briegel, Entanglement in Spin Chains and Lattices with Long-Range Ising-Type Interactions, Phys. Rev. Lett. 94, 097203 (2005). FN3Weighted graph states are generalizations of graph states. They are generated by applying ∏_jΛ(R_z(θ_j)) to |+⟩'s, where R_z(θ)≡|0⟩⟨ 0|+e^iθ|1⟩⟨ 1| is the Z rotation gate for any θ∈ℝ. When each of {θ_j}_j is equal to π, weighted graph states become ordinary graph states. MM16J. Miller and A. Miyake, Hierarchy of universal entanglement in 2D measurement-based quantum computation, npj Quantum Inf. 2, 16036 (2016). MM18J. Miller and A. Miyake, Latent Computational Complexity of Symmetry-Protected Topological Order with Fractional Symmetry, Phys. Rev. Lett. 120, 170503 (2018). GGM19M. Gachechiladze, O. Gühne, and A. Miyake, Changing the circuit-depth complexity of measurement-based quantum computation with hypergraph states, Phys. Rev. A 99, 052304 (2019). TMH19Y. Takeuchi, T. Morimae, and M. Hayashi, Quantum computational universality of hypergraph states with Pauli-X and Z basis measurements, Sci. Rep. 9, 13585 (2019). YFTTK20H. Yamasaki, K. Fukui, Y. Takeuchi, S. Tani, and M. 
Koashi, Polylog-overhead highly fault-tolerant measurement-based quantum computation: all-Gaussian implementation with Gottesman-Kitaev-Preskill code, arXiv:2006.05416. GF19C. Gidney and A. G. Fowler, Efficient magic state factories with a catalyzed |CCZ⟩ to 2|T⟩ transformation, Quantum 3, 135 (2019). BK05S. Bravyi and A. Kitaev, Universal quantum computation with ideal Clifford gates and noisy ancillas, Phys. Rev. A 71, 022316 (2005). JP99D. Jonathan and M. B. Plenio, Entanglement-Assisted Local Manipulation of Pure Quantum States, Phys. Rev. Lett. 83, 3566 (1999). C11E. T. Campbell, Catalysis and activation of magic states in fault-tolerant architectures, Phys. Rev. A 83, 032317 (2011). K97A. Y. Kitaev, Quantum computations: algorithms and error correction, Russ. Math. Surv. 52, 1191 (1997). FN2In the case of MBQC on the correlation space <cit.>, the input state is defined as an edge state in the correlation space. Therefore, our transformation becomes the replacement of the edge state. GE07D. Gross and J. Eisert, Novel Schemes for Measurement-Based Quantum Computation, Phys. Rev. Lett. 98, 220503 (2007). RHBM13M. Rossi, M. Huber, D. Bruß, and C. Macchiavello, Quantum hypergraph states, New J. Phys. 15, 113022 (2013). BSS16S. Bravyi, G. Smith, and J. A. Smolin, Trading Classical and Quantum Computational Resources, Phys. Rev. X 6, 021043 (2016). LW22Z.-W. Liu and A. Winter, Many-Body Quantum Magic, PRX Quantum 3, 020333 (2022). TM18Y. Takeuchi and T. Morimae, Verification of Many-Qubit States, Phys. Rev. X 8, 021060 (2018). ZH19H. Zhu and M. Hayashi, Efficient Verification of Hypergraph States, Phys. Rev. Applied 12, 054047 (2019). HT19M. Hayashi and Y. Takeuchi, Verifying commuting quantum computations via fidelity estimation of weighted graph states, New J. Phys. 21, 093060 (2019). DB14A. S. Darmawan and S. D. Bartlett, Graph states as ground states of two-body frustration-free Hamiltonians, New J. Phys. 16, 073013 (2014). § APPENDIX A: PROOF OF FIG. <REF>Proof. We first show that the output state of the quantum circuit in Fig. <ref> (a) is |100⟩. From[(I⊗ I⊗ H)CCZ]^4=(CCXCCZ)^2 = [(I^⊗ 2-|11⟩⟨ 11|)⊗ I+|11⟩⟨ 11|⊗ (-iY)]^2= (I^⊗ 2-|11⟩⟨ 11|)⊗ I+|11⟩⟨ 11|⊗ (-I)= Λ(Z)⊗ I,where CCX≡(I⊗ I⊗ H)CCZ(I⊗ I⊗ H) is the Toffoli gate, and Y≡ i|1⟩⟨ 0|-i|0⟩⟨ 1| is the Pauli-Y gate, the quantum circuit in the blue box can be considered as the controlled-Z (CZ) gate. Therefore, the output state of the quantum circuit in Fig. <ref> (a) is{(H⊗ H)[Λ(Z)(I⊗ H)]^2Λ(Z)(H⊗ H)}|00⟩⊗|0⟩ = [(H⊗ I)Λ(X)Λ(Z)Λ(X)(H⊗ I)]|00⟩⊗|0⟩= [(H⊗ I)Λ(-Z)(H⊗ I)]|00⟩⊗ |0⟩= [(H⊗ I)(Z⊗ I)(H⊗ I)]|00⟩⊗ |0⟩=|100⟩. Next, we show the Fig. <ref> (b). Since the first input qubit is |1⟩, all the CCZ gates become the CZ gates on the second and third qubits, and hence the first output qubit is trivially |1⟩. The unitary operation in the quantum circuit can be divided into the former and latter parts. The latter part enclosed by the dotted red rectangle can be treated as[(I⊗ H)Λ(Z)(I⊗ H)][(H⊗ I)Λ(Z)(H⊗ I)][(I⊗ H)Λ(Z)(I⊗ H)]=Λ(X)Λ̃(X)Λ(X)=SWAPon the second and third qubits, where Λ̃(X)=I⊗|0⟩⟨ 0|+X⊗|1⟩⟨ 1| is the controlled-X gate with swapped control and target qubits, and SWAP≡∑_i,j∈{0,1}|ji⟩⟨ ij| is the SWAP gate. Therefore, the remaining task is to show that the output state of the former part is S|ψ⟩⊗|+i⟩ for any single-qubit state |ψ⟩=α|0⟩+β|1⟩ with complex values α and β satisfying |α|^2+|β|^2=1. 
It is shown by calculating as follows:[Λ(Z)(H⊗ I)Λ(Z)(H⊗ H)Λ(Z)(I⊗ H)]|+i⟩|ψ⟩ = (Λ(Z)Λ̃(X)Λ(X))|+i⟩|ψ⟩= (Λ(Z)Λ̃(X))|0⟩(α|0⟩+β|1⟩)+i|1⟩(α|1⟩+β|0⟩)√(2)= Λ(Z)(α|0⟩+iβ|1⟩)|0⟩+i(α|0⟩-iβ|1⟩)|1⟩√(2)= S|ψ⟩⊗|+i⟩.▪ | http://arxiv.org/abs/2312.16433v1 | {
"authors": [
"Yuki Takeuchi"
],
"categories": [
"quant-ph"
],
"primary_category": "quant-ph",
"published": "20231227064217",
"title": "Catalytic Transformation from Computationally-Universal to Strictly-Universal Measurement-Based Quantum Computation"
} |
VirtualPainting: Addressing Sparsity with Virtual Points and Distance-Aware Data Augmentation for 3D Object Detection Sudip DhakalUniversity of North TexasDenton, Texas, USADominic Carrillo, Deyuan Qu, Michael Nutt, Qing Yang, Song FuUniversity of North TexasDenton, Texas, USAJanuary 14, 2024 ======================================================================================================================================================================================In recent times, there has been a notable surge in multimodal approaches that decorates raw LiDAR point clouds with camera-derived features to improve object detection performance. However, we found that these methods still grapple with the inherent sparsity of LiDAR point cloud data, primarily because fewer points are enriched with camera-derived features for sparsely distributed objects. We present an innovative approach that involves the generation of virtual LiDAR points using camera images and enhancing these virtual points with semantic labels obtained from image-based segmentation networks to tackle this issue and facilitate the detection of sparsely distributed objects, particularly those that are occluded or distant. Furthermore, we integrate a distance aware data augmentation (DADA) technique to enhance the model’s capability to recognize these sparsely distributed objects by generating specialized training samples. Our approach offers a versatile solution that can be seamlessly integrated into various 3D frameworks and 2D semantic segmentation methods, resulting in significantly improved overall detection accuracy. Evaluation on the KITTI and nuScenes datasets demonstrates substantial enhancements in both 3D and bird’s eye view (BEV) detection benchmarks.§ INTRODUCTION3D object detection plays a pivotal role in enhancing scene understanding for safe autonomous driving. In recent years, a large number of 3D object detection techniques have been implemented <cit.>. These algorithms primarily leverage information fromLiDAR and camera sensors to perceive their surroundings. LiDAR provides low-resolution shape and depth information <cit.>, while cameras capture dense images rich in color and textures. While multimodal based 3D object detection has made significant advancements recently, their performance still notably deteriorates when dealing with the sparse point cloud data. Recently, painting-based methods like PointPainting <cit.>have gained popularity in an attempt to address this issue. These methods decorate the LiDAR points with camera features. However, a fundamental challenge still persists in the case of objects lacking corresponding point clouds, such as distant and occluded objects. Despite the presence of camera features for such objects, there is no associated point cloud to complement these features. As a result, although they are beneficial for improving overall 3D detection performance, they still contend with sparse point clouds, as illustrated in Figure 1.§.§ Limitations of Prior ArtIn response to the inherent sparsity of LiDAR data, several methods have been introduced to generate pseudo or virtual points. These methods bolster the sparse point clouds by introducing supplementary points around the existing LiDAR points. For example, MVP <cit.> generates virtual points by completing the depth information of 2D instance points using data from the nearest 3D points. Similarly, SFD <cit.> creates virtual points based on depth completion networks <cit.>. 
These virtual points play a crucial role in enhancing the geometric representation of distant objects, showcasing their potential to significantly enhance the performance of 3D detection methods. However, current implementations have yet to fully harness the advantages of integrating semantic results from semantic networks with virtual points. The incorporation of semantic information into the augmented point cloud, which includes both the original and virtual points, not only enriches the dataset but also increases the model's overall robustness.Most recent fusion-based methods primarily concentrate on different fusion stages: early fusion, which involves combining LiDAR and camera data at an early stage; deep fusion, where features from both camera and LiDAR are combined by feature fusion; and late fusion, which combines the output candidates or results from both LiDAR and camera detection frameworks at a later stage. However, there is minimal emphasis on the quality of the training data, a crucial aspect in any detection framework. This issue is particularly evident in many fusion-based methods that lack sufficient sparse training samples for sparsely distributed object such as occluded and distant objects. The absence of comprehensive training data makes the trained model fragile and, as a result, incapable of effectively detecting distant objects during the testing phase. Consequently, fusion-based methods also face the challenge of inadequate data augmentation. The inherent disparities between 2D image data and 3D LiDAR data make it difficult to adapt several data augmentation techniques that are effective for the latter. This limitation poses a significant barrier and is a primary factor leading to the generally lower performance of multi-modal methods in comparison to their single-modal counterparts. §.§ Proposed SolutionTo address these issues, we proposea simple yet effective “VirtualPainting” framework. Our method addresses the problem of sparse LiDAR points by generating virtual points using depth completion networksPENET <cit.>. To elaborate, we initiate the generation of supplementary virtual points and seamlessly merge them with the original points, resulting in an augmented LiDAR point cloud dataset. This augmented dataset subsequently undergoes a “painting” process utilizing features derived from cameras. The camera-derived features are in the form of semantic scores or per-pixel class scores. The augmented LiDAR point cloud is concatenated with per pixel class score to obtain a feature-rich point cloud. The result is twofold: it not only yields a denser point cloud in the form of augmented LiDAR point clouds but also enables a seamless combination of camera features and the point clouds. The virtual points, generated via the depth completion network, are now linked with camera features, resulting in a more comprehensive data representation. In certain scenarios where camera features were present but lacked corresponding lidar sensor points, these camera features remained unincorporated. Additionally, we address the challenges of insufficient training samples for distant objects and the absence of adequate data augmentation techniques by integrating a method called Distance Aware Data Augmentation (DADA). In this approach, we intentionally generate sparse training samples from objects that are initially densely observed by applying a distant offset. 
Considering that real-world scenes frequently involve incomplete data due to occlusion, we also introduce randomness by selectively removing portions to simulate such occlusion. By integrating these training samples, our model becomes more resilient during the testing phase, especially in the context of detecting sparsely distributed objects, such as occluded or distant objects, which are frequently overlooked in many scenarios.In brief, our contributions are summarized as follows. * An innovative approach that augments virtual points obtained through the joint application of LiDAR and camera data with semantic labels derived from image-based semantic segmentation. * Integration of a distance-aware data augmentation method for improving the model’s ability to identify sparse and occluded objects by creating training samples. * A generic method that can incorporate any variation of 3D frameworks and 2D semantic segmentation for improving the overall detection accuracy. * Evaluation on the KITTI and nuScenes datasets shows major improvements in both 3D and BEV detection benchmarks, especially for distant objects.§ RELATED WORK§.§ Single ModalityExisting LiDAR-based methods can be categorized into four main groups based on their data representation: point-based, grid-based, point-voxel-based, and range-based. PointNet <cit.> and PointNet++ <cit.> are early pioneering works that apply neural networks directly to point clouds. PointRCNN <cit.> introduced a novel approach, directly generating 3D proposals from raw point clouds. VoxelNet <cit.> introduced the concept of a VFE (voxel feature encoding) layer to learn unified feature representations for 3D voxels. Building upon VoxelNet, CenterPoint <cit.> devised an anchor-free method using a center-based framework based on CenterNet <cit.>, achieving state-of-the-art performance. SECOND <cit.> harnessed sparse convolution <cit.> to alleviate the challenges of 3D convolution. PV-RCNN <cit.> bridged the gap between voxel-based and point-based methods to extract more discriminative features. Voxel-RCNN <cit.> emphasized that precise positioning of raw points might not be necessary, contributing to the efficiency of 3D object detection.PointPillars <cit.> innovatively extracted features from vertical columns (pillars) using PointNet <cit.>. Despite these advancements, all these approaches share a common challenge: the inherent sparsity of LiDAR point cloud data, which impacts their overall efficiency. §.§ Fusion-BasedThe inherent sparsity of LiDAR point cloud data has sparked research interest in multi-modal fusion-based methods. MV3D <cit.> and AVOD <cit.> create a multi-channel Bird's Eye View (BEV) image by projecting the raw point cloud into BEV. AVOD <cit.> takes both LiDAR point clouds and RGB images as input to generate features shared by the Region Proposal Network (RPN) and the refined network. MMF <cit.> benefits from multi-task learning and multi-sensor fusion. 3D-CVF <cit.> fuses features from multi-view images, while Sniffer Faster R-CNN <cit.> combines and refines the 2D and 3D proposals together at the final stage of detection. CLOCs <cit.> and Sniffer Faster R-CNN++ <cit.> take it one step further and refine the confidences of 3D candidates using 2D candidates in a learnable manner. These methods face limitations in utilizing image information due to the sparse correspondences between images and point clouds. Additionally, fusion-based methods encounter another challenge: a lack of sufficient data augmentation.
In this paper, we address both issues by capitalizing on the virtual points and using data augmentation techniques.§.§ Point Decoration FusionRecent developments in fusion-based methods have paved the way for innovative approaches like point decoration fusion-based methods, which enhance LiDAR points with camera features. PointPainting <cit.>, for instance, suggests augmenting each LiDAR point with the semantic scores derived from camera images. PointAugmenting <cit.> acknowledges the limitations of semantic scores and proposes enhancing LiDAR points with deep features extracted from a 2D object detector. FusionPainting <cit.> takes a step further by harnessing both 2D and 3D semantic networks to extract additional features from both camera images and the LiDAR point cloud. Nevertheless, it's important to note that these methods also grapple with the sparse nature of LiDAR point cloud data, as illustrated in Figure 1. Even though PointAugmenting addresses this issue with data augmentation techniques, the inadequacy of sparse training samples for sparse objects leads to their failure in detection.§ VIRTUALPAINTINGWe describe the architecture of our “VirtualPainting” framework in Fig. 2. To tackle the issue of sparsity of LiDAR point clouds and inadequate sparse training samples for these sparse objects, we introduce a multi-modality detector that enriches the original point cloud data through a series of different data enhancement processes. As shown in Fig. 2, the proposed framework consists of: (1) Semantic Segmentation Module: an image-based semantic segmentation module that computes the pixel-wise segmentation scores; (2) Depth Completion Network: an image-based depth completion network, “PENet”, that generates the virtual LiDAR point cloud; (3) VirtualPainting: virtual and original LiDAR points are painted with semantic segmentation scores; (4) Distance-Aware Data Augmentation (DADA): a distance-aware sampling strategy that creates sparse training samples from nearby dense objects; (5) 3D Detection Network: a LiDAR-based 3D object detection network. §.§ Image Based Semantic Network2D images captured by cameras are rich in texture, shape, and color information. This richness offers valuable complementary information for point clouds, ultimately enhancing 3D detection. To leverage this synergy, we employ a semantic segmentation network to generate pixel-wise semantic labels. We employ the BiSeNetv2 segmentation model for this purpose. This network takes multi-view images as its input and delivers pixel-wise classification labels for both foreground instances and the background. It's worth noting that our architecture is flexible, allowing for the incorporation of various semantic segmentation networks to generate semantic labels. §.§ PENet for Virtual Point GenerationThe geometry of nearby objects in LiDAR scans is often relatively complete, whereas for distant objects, it's quite the opposite. Additionally, there's a challenge of insufficient data augmentation due to the inherent disparities between 2D image data and 3D LiDAR data. Several data augmentation techniques that perform well with 3D LiDAR data are difficult to apply in multi-modal approaches. This obstacle significantly contributes to the usual underperformance of multi-modal methods when compared to their single-modal counterparts. To address these issues, we employ PENet to transform 2D images into 3D virtual point clouds.
This transformation unifies the representations of images and raw point clouds, allowing us to handle images much like raw point cloud data. We align the virtual points generated from the depth completion network with the original points to create an augmented point cloud. This approach collectively enhances the geometric information of sparse objects while also establishing an environment for a unified representation of both images and point clouds. §.§ Painting Virtual PointsThe current implementation of LiDAR point cloud painting methods has not harnessed the advantages of associating semantic results from the semantic network with virtual points. Incorporating semantic information into the augmented point cloud, which includes both the original and virtual points, not only enriches the dataset but also enhances the model's robustness. Let us refer to the original points generated from the LiDAR scan as the "raw point cloud", denoted as R, and the point cloud generated from the depth completion network as the "virtual points", denoted as V. Starting with a set of raw points R, we have the capability to transform it into a sparse depth map S using a known projection T_LiDAR→image. We also have an associated image denoted as I corresponding to R. By providing both I and S as inputs to a depth completion network, we obtain a densely populated depth map labeled as D. Utilizing a known projection T_image→LiDAR, we can then generate a set of virtual points denoted as V. Our VirtualPainting algorithm comprises three primary stages. In the initial stage, utilizing the virtual points acquired from the depth completion network, we align these virtual points with the original raw LiDAR points, effectively generating an augmented LiDAR point cloud denoted as A with N points. In the second stage, as previously mentioned, the segmentation network produces C-class scores. In the case of KITTI <cit.>, C equals 4 (representing car, pedestrian, cyclist, and background), while for nuScenes <cit.>, C is 11 (comprising 10 detection classes along with background). In the final stage, the augmented LiDAR points undergo projection onto the image, and the segmentation scores corresponding to the relevant pixel coordinates (h, w) are appended to the augmented LiDAR point, resulting in the creation of a painted LiDAR point. This transformation process involves a homogeneous transformation followed by projection into the image, as given in Algorithm 1.§.§ Distance Aware Data AugmentationAs mentioned earlier, the absence of comprehensive geometric information for sparse objects can significantly hamper detection performance. To overcome this challenge, we aim to enhance our understanding of the geometry of sparsely observed distant objects by generating training samples derived from densely observed nearby objects. While several established methods exist to address this challenge, such as random sampling or farthest point sampling, it's important to note that these techniques often result in an uneven distribution pattern within the LiDAR-scanned point cloud. In this context, we adopt a sampling strategy <cit.> that takes into account both LiDAR scanning mechanics and scene occlusion. Within the context of a nearby ground-truth box with position C_g and its associated inside points {P_gi}_i, we introduce a random distance offset Δα as follows: C_g := C_g + Δα, P_gi := P_gi + Δα.
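A minimal sketch of the offset step just defined is shown below. Shifting along the sensor-to-object ray (so the moved object stays consistent with LiDAR scanning geometry) and the `max_shift` range are assumptions rather than details from the paper; the spherical voxelization and point averaging described next are omitted.

```python
import numpy as np

def dada_distance_offset(box_center, inside_pts, max_shift=30.0, rng=None):
    """Push a nearby ground-truth box (and its interior points) farther away.

    Implements C_g := C_g + delta, P_gi := P_gi + delta for a single object;
    box_center is (3,) and inside_pts is (N, 3), both in the LiDAR frame.
    """
    rng = rng if rng is not None else np.random.default_rng()
    # Assumed choice: offset along the sensor-to-object direction so the
    # object stays on its original viewing ray.
    direction = box_center / (np.linalg.norm(box_center) + 1e-9)
    delta = rng.uniform(0.0, max_shift) * direction        # random distance offset
    return box_center + delta, inside_pts + delta
```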
Subsequently, we proceed to convert the points {P_gi}_i into a spherical coordinate system and voxelize them into spherical voxels, aligning with the LiDAR's angular resolution. Within each voxel, we compute the distances between the points. If the points are found to be in very close proximity, with their distance being almost negligible, falling below a predefined threshold λ, we choose to calculate the average of these points. This results in a set of sampled points that closely mimic the distribution pattern of real scanned points, as depicted in Figure 2. During the training process, similar to the GT-AUG (Shi, Wang, and Li 2019) approach, we incorporate these sampled points and bounding box information into the training samples to facilitate data augmentation. This augmentation technique has the potential to address the shortage of training samples for distant objects. Additionally, we randomly remove portions of dense LiDAR points to simulate occlusion, aiming to potentially resolve the scarcity of occluded samples during training. § 3D DETECTORIn the last phase of our architecture, the 3D detector receives the input in the form of the painted version of the augmented point cloud. As there are no alterations to the backbones or other architectural components, providing the painted point cloud as input to any 3D detector, such as PointRCNN, VoxelNet, PV-RCNN, or PointPillars, is very straightforward for obtaining the final detection results. § EXPERIMENTAL SETUP AND RESULTSWe evaluate the effectiveness of our proposed "VirtualPainting" on both the large-scale autonomous driving 3D object detection KITTI and nuScenes datasets. Initially, we provide an overview of our experimental setup, followed by presenting the evaluation results on both test and validation sets of each dataset.§.§ Semantic Network DetailsFor semantic segmentation in our KITTI dataset experiments, we use the BiSeNetv2 <cit.> network. This network underwent an initial pretraining phase on the CityScapes <cit.> dataset and was subsequently fine-tuned on the KITTI dataset using PyTorch <cit.>. For the simplicity of our implementation, we chose to ignore the cyclist and bike classes because there is a difference in the class definition of a cyclist between the KITTI semantic segmentation and object detection tasks. In object detection, a cyclist is defined as a combination of the rider and the bike, whereas in semantic segmentation, a cyclist is defined as solely the rider, with the bike being a separate class. Similarly, for nuScenes, we developed a custom network using the nuImages dataset, which comprises 100,000 images containing 2D bounding boxes and segmentation labels for all nuScenes classes. The segmentation network utilized a ResNet backbone to extract features at various strides, ranging from 8 to 64, and incorporated an FCN segmentation head for predicting nuScenes segmentation scores.§.§ PENetMuch like SFD, our approach relies on the virtual points produced by PENet. The PENet architecture is initially trained exclusively on the KITTI dataset, which encompasses both color images and aligned sparse depth maps. These depth maps are created by projecting 3D LiDAR points onto their corresponding image frames. The dataset comprises 86,000 frames designated for training, in addition to 7,000 frames allocated for validation, and a further 1,000 frames designated for testing purposes. Our PENet is trained on the training set. The images are standardized at a resolution of 1216 × 352.
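For reference, the sparse depth input just mentioned can be built by projecting the LiDAR sweep into the image plane, roughly as sketched below. The matrix names and last-write tie-breaking are assumptions; a full implementation would keep the minimum depth per pixel.

```python
import numpy as np

def lidar_to_sparse_depth(pts_lidar, lidar_to_cam, K, h=352, w=1216):
    """Project LiDAR points into the image plane to form a sparse depth map."""
    pts_h = np.hstack([pts_lidar, np.ones((len(pts_lidar), 1))])  # homogeneous, (N, 4)
    pts_cam = (lidar_to_cam @ pts_h.T).T[:, :3]
    front = pts_cam[:, 2] > 0                       # keep points in front of the camera
    uvz = (K @ pts_cam[front].T).T
    u = (uvz[:, 0] / uvz[:, 2]).astype(int)
    v = (uvz[:, 1] / uvz[:, 2]).astype(int)
    z = uvz[:, 2]
    depth = np.zeros((h, w), dtype=np.float32)      # 0 marks pixels with no return
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    # Simplification: later points overwrite earlier ones; keeping the minimum
    # depth per pixel would be the proper z-buffer behavior.
    depth[v[inside], u[inside]] = z[inside]
    return depth
```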
Typically, a sparse depth map contains approximately 5% valid pixels, while ground-truth dense depth maps have approximately 16% coverage of valid pixels <cit.>. §.§ LiDAR Network DetailsWe use the OpenPCDet <cit.> tool for the KITTI dataset, incorporating 3D detectors such as PV-RCNN, PointPillars, PointRCNN, and VoxelNet, with only minimal adjustments to the point cloud dimension. In order to accommodate segmentation scores from a semantic network, we expand the dimension by adding the total number of classes used in the segmentation network. Since our architecture remains unchanged beyond this point, it remains generic and can be readily applied to any 3D detector without requiring complex configuration modifications. Similarly, for the nuScenes dataset, we utilize an enhanced version of PointPillars, as detailed in <cit.>. These enhancements involve alterations to the pillar resolution, network architecture, and data augmentation techniques. Firstly, we reduce the pillar resolution from 0.25 meters to 0.2 meters to improve the localization of small objects. Secondly, we revise the network architecture to incorporate additional layers earlier in the network. Lastly, we adjust the global yaw augmentation from π to π/6 <cit.>.§.§ KITTI ResultsWe initially assess our model's performance using the KITTI dataset and contrast it with the current state-of-the-art methods. Table 1 shows the outcomes for the KITTI test BEV detection benchmark. Notably, there is a substantial enhancement in mean average precision (mAP) compared to both single- and multi-modality baseline methods like AVOD, MV3D, PointRCNN, PointPillars, and PV-RCNN. This improvement is particularly prominent in the pedestrian and cyclist categories, which are more challenging to detect, especially when it comes to sparse objects. Notably, the moderate and hard classes within the pedestrian and cyclist categories exhibit more significant improvements when compared to the easy class, as depicted in Table 1. We can clearly observe that for PointRCNN, our approach exhibits a notable enhancement of +4.09, +3.79 and +4.52, +2.89, specifically for the moderate and hard difficulty levels within the pedestrian and cyclist classes. This trend is consistent across various other models as well. Similarly, Table 2 provides a comparison of our method with point-painting based approaches. Although the increase in mean average precision is less pronounced here, there is still a consistent improvement.§.§ nuScenes ResultsAdditionally, we assess the performance of our model using the nuScenes dataset, as displayed in Table 3. Our approach surpasses the single-modality PointPillars-based model in all categories. Furthermore, the enhanced variant called PaintedPointPillars, which is based on the point-painting method for PointPillars, exhibits improvements in terms of nuScenes Detection Score (NDS) and Average Precision (AP) across all ten classes in the nuScenes dataset. Similarly, as seen in our previous results, this improvement is most noticeable in classes such as Pedestrian, Bicycle, and Motorcycle, where the likelihood of remaining undetected in occluded and distant sparse regions is higher.§ ABLATION STUDIES §.§ VirtualPainting is a generic and flexible methodAs depicted in Table 1 and Table 2, we assess the genericity of our approach by integrating it with established 3D detection frameworks. We carry out three sets of comparisons, each involving a single-modal method and its multi-modal counterpart.
The three LiDAR-only models under consideration are PointPillars, PointRCNN, and PV-RCNN. As illustrated in Table 2, VirtualPainting consistently demonstrates enhancements across all single-modal detection baselines. These findings suggest that VirtualPainting possesses general applicability and could potentially be extended to other 3D object detection frameworks. Likewise, Tables 5 and 6 demonstrate that several elements, including virtual points and semantic segmentation, can be seamlessly incorporated into the architecture without requiring intricate modifications, thus highlighting the flexibility of our approach. Similarly, we also evaluated the inference speed of our method on an NVIDIA RTX 3090 GPU. While the inference time is higher compared to the original PointPainting method, our method still remains faster than other single-modality methods, as indicated in Table 4.§.§ Where does the improvement come from?To gain a comprehensive understanding of how VirtualPainting leverages camera cues and augmentation techniques applied to LiDAR points to enhance 3D object detection models, we offer a thorough analysis encompassing both qualitative and quantitative aspects. Initially, we categorize objects into three groups based on their proximity to the ego-car: those within 30 meters, those falling within the 30 to 50-meter range, and those located beyond 50 meters.Figure 4 and Figure 5 illustrate the relative improvements achieved through multi-modal fusion within each of these distance groups. In essence, VirtualPainting consistently enhances accuracy across all distance ranges. Notably, it delivers significantly higher accuracy gains for long-range objects compared to short-range ones. This phenomenon may be attributed to the fact that long-range objects often exhibit sparse LiDAR point coverage, and the combination of augmentation techniques applied to these sparse points, rendering them denser, along with the inclusion of high-resolution camera semantic labels, effectively bridges the information gap. Likewise, as illustrated in Table 5 and Table 6, the incorporation of semantic segmentation significantly elevates precision compared to other elements, including the virtual point cloud and DADA. Although there is noticeable improvement when both virtual points and DADA are integrated, the primary contribution stems from the attachment of semantic labels to the augmented point cloud, which contains both virtual points and the original LiDAR points, for both the KITTI and nuScenes datasets.§ CONCLUSIONIn this paper, we propose a generic 3D object detector that combines both the LiDAR point cloud and the camera image to improve detection accuracy, especially for sparsely distributed objects. We address the sparsity and inadequate data augmentation problems of LiDAR point clouds through the combined application of camera and LiDAR data. Using a depth completion network, we generate a virtual point cloud to densify the LiDAR point cloud and adopt the point decoration mechanism to decorate the augmented LiDAR point cloud with image semantics, thus improving the detection of those objects that generally go undetected due to sparse LiDAR points. Besides, we design a distance-aware data augmentation technique to make the model robust to occluded and distant objects. Experimental results demonstrate that our approach can significantly improve detection accuracy, particularly for these specific objects. | http://arxiv.org/abs/2312.16141v1 | {
"authors": [
"Sudip Dhakal",
"Dominic Carrillo",
"Deyuan Qu",
"Michael Nutt",
"Qing Yang",
"Song Fu"
],
"categories": [
"cs.CV"
],
"primary_category": "cs.CV",
"published": "20231226180305",
"title": "VirtualPainting: Addressing Sparsity with Virtual Points and Distance-Aware Data Augmentation for 3D Object Detection"
} |
EnchantDance: Unveiling the Potential of Music-Driven Dance Movement January 14, 2024 ============================================================================================ The task of music-driven dance generation involves creating coherent dance movements that correspond to the given music. While existing methods can produce physically plausible dances, they often struggle to generalize to out-of-set data. The challenge arises from three aspects: 1) the high diversity of dance movements and significant differences in the distribution of music modalities, which make it difficult to generate music-aligned dance movements; 2) the lack of a large-scale music-dance dataset, which hinders the generation of generalized dance movements from music; 3) the protracted nature of dance movements, which poses a challenge to maintaining a consistent dance style. In this work, we introduce the EnchantDance framework, a state-of-the-art method for dance generation. Due to the redundancy of the original dance sequence along the time axis, EnchantDance first constructs a strong dance latent space and then trains a dance diffusion model on the dance latent space. To address the data gap, we construct a large-scale music-dance dataset, the ChoreoSpectrum3D Dataset, which includes four dance genres and has a total duration of 70.32 hours, making it the largest reported music-dance dataset to date. To enhance consistency between music genre and dance style, we pre-train a music genre prediction network using transfer learning and incorporate music genre as extra conditional information in the training of the dance diffusion model. Extensive experiments demonstrate that our proposed framework achieves state-of-the-art performance on dance quality, diversity, and consistency. § INTRODUCTIONMusic and dance, as expressive art forms, have captivated human emotions throughout history and find wide application in movies, games, and various industries in modern society <cit.>. The creation of high-quality 3D dance animation, however, remains a costly endeavor, demanding the involvement of experienced dancers and expensive motion-capture equipment. Moreover, the utilization of computational methods for automatic dance generation can alleviate the burdensome process of manual creation. Such methods <cit.> hold the potential to aid animators in the creation of novel dance sequences from music. Despite these advancements, several challenges remain in music-driven dance motion generation:(1) Weak correlation between music and dance: the diversity of dance movements, which shapes the high-dimensional distribution of body movements, and the rich genre and beat information of music result in a multiple mapping problem. To ensure the generalization of the model, a large-scale training dataset is often required. (2) Scarcity of large-scale data: compared to the text-to-motion dataset <cit.>, which can span dozens of hours, the music-to-dance field lacks sufficient data volume to generate high-fidelity and well-generalized dance movements.(3) Fragile consistency of dance genres: dance styles are characterized by distinct movement patterns, with specific constraints on the suitability and relevance of dance poses to a given genre and musical accompaniment. The absence of genre-specific information may result in the incorporation of multiple dance styles within a single piece, thereby diminishing the expressiveness and coherence of the performance (e.g., Latin dance motions in ballet music).
However, dance sequences are often long, which makes it difficult to maintain genre consistency throughout. Early methods <cit.> for music-driven dance generation typically employ motion graphs to splice dance segments and form complete dance movements. However, these methods are limited in creativity, as they can only recombine existing dance segments. Autoregressive models <cit.> have also been used for music-driven generative dance, but often suffer from limb drift and poor generalization. The diffusion model exhibits strong generative capacity and is well-suited for the generation of data characterized by high diversity. Typically, EDGE <cit.> employs a Transformer-based diffusion model that adheres to DDPM <cit.> in defining the diffusion process, thereby enabling direct prediction of the original dance sequence. The model exhibits the capability to generate physically plausible dances that are consistent with the input music.However, raw motion sequences exhibit redundancy along the time axis, and diffusion models operating on raw sequence data typically incur significant computational overhead during both training and inference stages <cit.>, rendering them inefficient. Moreover, raw motion data is susceptible to noise contamination, which may prompt strong diffusion models to learn cues for probability mapping from conditional inputs to noisy motion sequences, resulting in the production of artifacts.In this paper, we initially devise a variational autoencoder founded on the Transformer architecture to compress the dance motion sequence into the latent space, effectively eliminating redundancy in the original dance sequence. Subsequently, we propose a diffusion model predicated on the dance latent space to learn a robust mapping from the music conditional distribution to the dance latent vector. This approach not only facilitates the generation of reasonable and diverse dance motion sequences but also eliminates redundant motion information and noise while mitigating the issues of slow sampling and poor generalization.Existing 3D music and dance datasets are generally limited in duration, which adversely impacts the performance of models when applied to out-of-set data. The widely used AIST++ dataset <cit.> has a duration of only 5.19 hours. In comparison, the HumanML3D dataset <cit.> for the text-to-motion field has a duration of 28.59 hours, highlighting the need for a large-scale, high-quality dance dataset. To address this need, we present the ChoreoSpectrum3D Dataset, which covers four coarse-grained genres of dance types, with a total duration of 70.32 hours.Current genre-oriented dance generation frameworks frequently employ manually appended genre tags during the inference stage. Given the strong correlation between dance styles and music genres, GTN-Bailando <cit.> introduces a Genre Token Network that infers genres from music to enhance the consistency of dance generation. However, constrained by the scarcity of large-scale music-dance data, the model continues to exhibit poor generalization capability. We design a simple yet effective music genre prediction network leveraging transfer learning. Specifically, we incorporate a simple CNN block into the Audio Spectrogram Transformer (AST), pre-trained on ImageNet and AudioSet <cit.>, to facilitate music genre prediction. This approach enables music genre prediction with a minimal number of training parameters, obviating modifications to the entire model and enhancing generalization.
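A rough sketch of this transfer-learning design is given below, assuming a pre-trained AST backbone that returns per-patch embeddings; all layer sizes, the mean-pooling readout, and the `ast_backbone` interface are assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class GenreHead(nn.Module):
    """Frozen AST backbone + small trainable CNN block with a skip connection."""

    def __init__(self, ast_backbone, dim=768, n_genres=4):
        super().__init__()
        self.ast = ast_backbone
        for p in self.ast.parameters():
            p.requires_grad = False               # keep pre-trained weights fixed
        self.cnn = nn.Sequential(                 # lightweight trainable adapter
            nn.Conv1d(dim, dim, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(dim, dim, kernel_size=3, padding=1),
        )
        self.classifier = nn.Linear(dim, n_genres)

    def forward(self, spec):
        feats = self.ast(spec)                    # assumed to return (B, L, D) patch features
        x = self.cnn(feats.transpose(1, 2)).transpose(1, 2)
        x = x + feats                             # skip connection with the AST output
        return self.classifier(x.mean(dim=1))     # logits over genre classes
```

Trained with cross-entropy over one-hot genre labels, as the text describes, only the CNN block and classifier receive gradients, which is what keeps the number of trainable parameters small.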
Extensive experiments on the AIST++ and our ChoreoSpectrum3D datasets demonstrate the superiority of our method with respect to dance generation quality, diversity, and music-dance consistency. Concurrently, comprehensive out-of-set data evaluation has corroborated the generalization capability of our method and the efficacy of the dataset. The code, ChoreoSpectrum3D dataset, and demos can be found in the Supplementary Materials.§ BACKGROUND§.§ Dance Motion GenerationNumerous early studies <cit.> have employed motion retrieval paradigms to synthesize dance movements. This approach involves segmenting a given music piece and retrieving corresponding dance clips, which are then integrated to generate a complete dance sequence, but it tends to create unrealistic dances that lack creativity. Recently, generative models such as Generative Adversarial Networks (GANs) <cit.>, Variational Autoencoders (VAEs) <cit.>, and Diffusion models <cit.> have been successfully applied to various data modalities, including images <cit.>, text <cit.>, and audio <cit.>.Consequently, numerous studies have utilized these generative models to tackle the challenge of music-driven dance motion generation <cit.>. For instance, the Full-Attention Cross-Modal Transformer (FACT) framework <cit.> has been adopted to generate dance movements from music. Similarly, Bailando <cit.> constructs a choreographic codebook to create a 3D dance movement library and employs the Transformer framework to match and combine these dance units with music. Despite their ability to generate smooth dances that are consistent with the music, these methods are limited by the autoregressive approach, resulting in limb drift and poor generalization to out-of-set data. EDGE <cit.> employs a diffusion model and incorporates Jukebox music features, outperforming the aforementioned methods with respect to music and dance consistency. However, the implementation of a diffusion model on raw dance sequences impedes generation quality and generalization capability due to the presence of redundant information and noise in the original dance sequences. We initially compress the raw motion sequence using a variational autoencoder to eliminate redundant information, followed by the implementation of a diffusion model on the dance latent space. This approach significantly enhances the generalization capability of the model when applied to out-of-set data. Additionally, we have incorporated supplementary music genre information to ensure consistency between dance move styles and music genres. §.§ 3D Music-Dance DatasetThe AIST++ dataset <cit.> provides 3D music and dance data for 10 genres. However, there is a high degree of overlap between music and dance movements, with one piece of music corresponding to multiple dances. Additionally, the dataset only contains 5.19 hours of motion data. The GrooveNet dataset <cit.> utilizes motion capture equipment to obtain dance motion but is limited to electronic dance music and only lasts 23 minutes. The Dance with Melody dataset <cit.> contains four dance genres, but the motion duration is only 1.57 hours and the dance movements lack variety and are poorly matched to the music. The Music2Dance dataset <cit.> claims to contain two dance genres, modern dance and curtilage dance, with precise alignment with the music, but it only lasts for 1 hour and is not open source. The MMD dataset <cit.> contains 9.91 hours of dance motion sequences, but it is not fully available.
However, compared to the tens of hours of data in the public datasets HumanML3D <cit.> and HumanLong3D <cit.> in the text-to-motion field, it still lacks scale and is insufficient for fully developing this task. As a result, we construct the ChoreoSpectrum3D Dataset with a duration of approximately 70.32 hours. Additionally, it includes precise SMPL pose and joint parameters for dance movements, as well as aligned music data.§ CHOREOSPECTRUM3D DATASETWhile existing works <cit.> have reported music-dance datasets, the availability of large-scale choreography datasets remains limited. Currently, the widely utilized dataset for music-driven dance generation tasks is the AIST++ dataset <cit.>. This dataset adequately caters to the requirements of dance generation tasks. Nevertheless, the limited duration of the dataset poses a challenge, as the distribution space of dance moves remains considerably large. This limitation hinders the ability of existing models to generalize effectively and generate high-quality dance moves. It emphasizes the need for additional data or augmentation. Hence, we construct a choreography-oriented large-scale dataset spanning a duration of 70.32 hours. §.§ Data Acquisition and AnnotationMotion capture equipment can provide clear and consistent dance movements, but it is often prohibitively expensive. As an alternative, we source dance videos of professional performers from the internet and estimate human body parameters from these videos.§.§.§ Diverse Dance GenresTo align our dance data more closely with the standards of dance artists, under the guidance of professional dancers, we classify dances into four genre-based categories: pop (Breaking, Locking, Hip-hop, Popping, Urban, and Jazz), ballet, Latin, and House dance.§.§.§ Detailed Motion AnnotationWe employ the 3D human motion estimation model <cit.> to obtain the SMPL motion parameters of characters in videos. However, we observe that dance videos often contain invalid motion frames. To address this issue, we design an automatic filtering method to eliminate implausible motion frames. In an effort to preserve the fluidity of the dance sequence, we retain only those continuous dance clips comprising 200 or more valid frames.Concurrently, to enhance the versatility of our dataset, we have also generated detailed annotations with reference to the HumanML3D format, widely employed in the text-to-motion domain. This primarily encompasses the acquisition of forward kinematic joint coordinates, extraction of kinematic vector representations, and computation of sample mean and variance. Further details about the ChoreoSpectrum3D dataset can be found in the Supplementary Material.§.§.§ Detailed Music featuresWe not only adopt the Jukebox <cit.> music feature, which has previously exhibited robust performance on music-specific tasks <cit.>, but also incorporate audio spectrum features extracted by the public audio processing toolbox Librosa <cit.>, including mel-frequency cepstral coefficients (MFCC), MFCC delta, constant-Q chromagram, tempogram, and onset strength. §.§ Dataset Analysis We analyze datasets from multiple perspectives, including genres, joints, duration, and availability. A comparison between ChoreoSpectrum3D and existing 3D dance datasets is presented in Table <ref>. An example is shown in Figure <ref>.§ ENCHANTDANCE FRAMEWORKFormally, given a piece of music M.
To ensure the alignment between the generated dance motions and the emotions conveyed in the music, we pre-train the Genre Prediction Network to obtain genre categories ĝ∈ G, where G is the set of pre-defined genre categories. Subsequently, we concatenate the genre category ĝ with the music features M∈ℝ^T × C, where T denotes the number of frames, aligned between the music and the motion, and C represents the dimension of the feature. These concatenated features are then utilized as conditional information for the generation of dance movements X ∈ℝ^T × S, where S represents the dimension of the SMPL parameters. The architecture of the proposed network is depicted in Figure <ref> and comprises three primary modules: Context Embedding, DanceVAE, and Dance Diffusion. The Context Embedding module employs Jukebox <cit.> to extract music features. The DanceVAE extracts latent features of dance movements, followed by training the Dance Diffusion model in the latent space. §.§ Dance VAETo eliminate temporally redundant information in the original dance sequence, we develop the dance variational autoencoder (VAE) based on the Transformer architecture <cit.>, which comprises a Transformer encoder ℰ and a Transformer decoder 𝒟. The encoder constructs a low-dimensional, high-density latent space, while the decoder reconstructs the original dance sequence by upsampling the latent space.The input to the dance VAE is the SMPL parameters, and the output is the reconstructed SMPL parameters.Additionally, inspired by UNet networks, we establish a long skip connection between the two Transformer architectures to ensure temporal correlation of the motions. The training objective of the model is to minimize the reconstruction loss and the Kullback-Leibler (KL) loss: L = L_rec(x, 𝒟(ℰ(x))) + λ_KL L_KL, L_rec = ∑_n=1^N ‖x - 𝒟(ℰ(x))‖^2, L_KL = KL(𝒩(μ, σ^2) ‖ 𝒩(0, I)) The dance encoder embeds the frame-wise SMPL parameters x ∈ X into the latent space. The dance latent space is characterized by a Gaussian distribution with mean and variance parameters represented by μ and σ, respectively. The latent vector z is then sampled from the dance latent space using a reparameterization technique <cit.> as input to the dance decoder. The dance decoder employs a cross-attention mechanism to fuse dance feature information and reconstruct the dance pose x̂∈ℝ^L × S. The cross-attention module utilizes zero tokens of length L as the query matrix and the dance hidden vector as memory to ultimately reconstruct the dance sequence. §.§ Dance DiffusionOwing to the diversity and complexity of dances, the deployment of traditional generative networks is fraught with difficulty. The mode collapse issue associated with GANs becomes pronounced, while the generative capability of flow models is constrained by bijections, rendering them ill-equipped to effectively handle high-diversity action generation tasks. The diffusion model, by virtue of its stochastic properties, is better suited for modeling dance movements characterized by a highly diverse distribution. The original dance sequence is replete with temporally redundant information; constructing a diffusion model directly on it impedes both the quality and the speed of generation. Consequently, we implement the denoising process on the dance latent space.The diffusion model can be parametrically expressed as a Markov chain, with each step of the diffusion model equivalent to a transition process in the Markov chain.
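Before detailing the diffusion process, the interface of the Dance VAE described above can be sketched as follows. The 9 layers and 4 heads match the implementation details reported later; the pooling into a single latent token, the model width, and the exact loss wiring are assumptions.

```python
import torch
import torch.nn as nn

class DanceVAE(nn.Module):
    """Minimal Transformer VAE over per-frame SMPL parameters (a sketch)."""

    def __init__(self, smpl_dim=219, d_model=512):
        super().__init__()
        self.embed = nn.Linear(smpl_dim, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=9)
        self.to_mu = nn.Linear(d_model, d_model)
        self.to_logvar = nn.Linear(d_model, d_model)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=9)
        self.out = nn.Linear(d_model, smpl_dim)

    def encode(self, x):                                   # x: (B, T, smpl_dim)
        h = self.encoder(self.embed(x)).mean(dim=1)        # pool frames (assumed readout)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        return z, mu, logvar

    def decode(self, z, T):
        # Zero tokens of length T serve as queries; the latent acts as memory.
        queries = torch.zeros(z.size(0), T, z.size(-1), device=z.device)
        h = self.decoder(queries, z.unsqueeze(1))          # cross-attend to the latent
        return self.out(h)                                 # reconstructed SMPL frames
```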
Our objective is to synthesize a coherent dance sequence x^1:L that is stylistically consistent with the musical signal c. In contrast to the UNet network architecture <cit.> previously employed in the image domain, we utilize a Transformer model to construct a dance diffusion generation framework that is better suited for continuous data such as audio sequences and dance motion sequences.The forward process of diffusion models is equivalent to gradually adding Gaussian noise to the original data distribution: q(z_t^1: L| z_t-1^1: L)=𝒩(√(1-β_t) z_t-1^1: L,β_t I) where z_0^1: L is drawn from the latent distribution and 𝒩(·) denotes a Gaussian distribution. The parameters β_t are constant hyper-parameters for sampling, which are utilized to control the degree of noise addition. When the cumulative product α̅_T approaches 0, z_T^1: L can be approximated by a standard Gaussian distribution 𝒩(0, I). The reverse process of the diffusion model is equivalent to the gradual removal of noise from the Gaussian distribution. Each step is parameterized as follows: p(z_t-1^1: L| z_t^1: L)=𝒩(z_t-1^1: L; μ_θ(z_t^1: L, t, c), σ_θ(z_t^1:L,t)) where c represents the concatenated music features. σ_θ(z_t, t) is set to a time-related constant, equal to ((1-α̅_t-1)/(1-α̅_t)) β_t, where α_t=1-β_t and α̅_t=∏_i=1^t α_i.This training process is parameterized by an L2 loss, which is simplified by directly predicting the accumulated noise: L(θ)=𝔼_z_0, ϵ, t[‖ϵ-ϵ_θ(√(α̅_t) z_0+√(1-α̅_t)ϵ, t, c)‖^2] where t is sampled uniformly between 1 and N, ϵ∼𝒩(0, I), and ϵ_θ is the learned diffusion model. §.§ Genre Prediction NetworkExisting genre prediction methods are frequently hampered by a lack of generalization, attributable to the scarcity of large-scale music-dance datasets. To address this issue, we incorporate the large-scale ChoreoSpectrum3D dataset into the Audio Spectrogram Transformer (AST) network <cit.> for fine-tuning. The AST network itself is pre-trained on ImageNet <cit.> and fine-tuned on the AudioSet dataset <cit.>. To begin, we process the music waveform by transforming it into a 2D spectrogram. This spectrogram is then divided into a sequence of non-overlapping patch blocks. Consequently, we linearly project each patch block into a one-dimensional patch embedding. Furthermore, each patch embedding incorporates a learnable positional embedding.The resulting patch sequence is then fed into a Transformer encoder, which generates high-level features for subsequent classification. This part of the network employs the pre-trained weights and is fine-tuned when migrating to our dataset. Specifically, we introduce a simple CNN block to incorporate the output from the pre-trained model. Utilizing skip connections, we combine this output with the output of the pre-trained model. Finally, this combined output is passed through a final MLP classifier to predict the music genre category. The final music genre category is encoded using one-hot embedding, and the genre prediction network is optimized through supervised training using cross-entropy loss. § EXPERIMENTS§.§ Dance RepresentationWe represent dance motion sequences using two distinct forms: SMPL-based and HumanML3D-format. Both have been demonstrated to be effective when utilized in our network. We mainly use SMPL parameters to conduct experiments, and the HumanML3D-format experiments can be found in the Supplementary Material.
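Before the representation details, it may help to see how the pieces above combine: a single training iteration of the latent-space noise-prediction objective L(θ) can be sketched as follows. The β schedule matches the implementation details reported later; the frozen-VAE assumption, the `eps_model` interface, and tensor shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def diffusion_step(eps_model, vae, motion, music_cond, n_steps=1000):
    """One training step of the noise-prediction objective on the dance latent."""
    betas = torch.linspace(8.5e-4, 0.012, n_steps, device=motion.device)  # linear schedule
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)
    with torch.no_grad():
        z0, _, _ = vae.encode(motion)                        # latent from the (frozen) VAE
    t = torch.randint(0, n_steps, (z0.size(0),), device=z0.device)
    eps = torch.randn_like(z0)
    ab = alpha_bar[t].unsqueeze(-1)                          # broadcast over the latent dim
    z_t = ab.sqrt() * z0 + (1.0 - ab).sqrt() * eps           # forward noising of z0
    eps_pred = eps_model(z_t, t, music_cond)                 # Transformer denoiser
    return F.mse_loss(eps_pred, eps)                         # L(theta): predict the noise
```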
For the SMPL format, we represent dances as sequences of poses over the 24 SMPL joints, using a 3×3 rotation matrix per joint along with a root translation vector, i.e., X∈ℝ^24×9+3=ℝ^219.§.§ Evaluation MetricsConsistent with prior studies <cit.>, we assess the generated dance motions based on three key aspects: quality, diversity, and consistency.§.§.§ QualityThe quality of generated motion is evaluated using the Frechet Inception Distance (FID) <cit.> to calculate the distribution distance between generated and ground-truth motions, as done in prior works <cit.>. Two well-designed motion feature extractors <cit.> were used to measure FID: (1) a kinetic feature extractor that maps a motion sequence X to z_k ∈ℝ^72, representing kinetic aspects of the motion such as velocity and acceleration; (2) a geometric feature extractor that produces a boolean vector 𝐳_g ∈ℝ^33 representing geometric relations between specific body points in the motion sequence X ∈ℝ^T × N × 3. We denote the FID based on kinetic and geometric features as FID_k and FID_g^†, respectively.§.§.§ DiversityIt evaluates the variations among generated dances corresponding to music inputs, reflecting the model’s generalization ability and its dependency on the input music. The average feature distance is used as the measurement, with features extracted using the same classifier employed in measuring FID, as done in prior works <cit.>. Motion diversity in the kinetic and geometric feature spaces is denoted as Div_k and Div_g^†, respectively.§.§.§ ConsistencyTo evaluate the alignment between music and generated motions, the average temporal distance between each music beat and its closest dance beat is calculated as the Beat Align Score: 1/|B^m| ∑_t^m ∈ B^m exp{ -min_t^d ∈ B^d ‖t^d-t^m‖^2 / 2σ^2 } where B^d and B^m record the times of beats in the dance and music, respectively, while σ is the normalization parameter.§.§.§ Physical Foot Contact scoreEDGE <cit.> introduces a metric for assessing physical plausibility, the Physical Foot Contact score (PFC). This metric does not rely on explicit physical modeling but rather evaluates the rationality of physical motion by measuring the character’s acceleration and foot-ground contact. The underlying principle is that a character’s acceleration must result from static contact between the foot and the ground. §.§ Implementation DetailsThe Dance VAE employs the Transformer architecture, comprising 9 layers and 4 heads in both the encoder and decoder, as well as residual connections. Music representation is extracted using Jukebox <cit.> and downsampled to match the frame rates of motion data for the AIST++ and ChoreoSpectrum3D datasets, at 60 FPS and 20 FPS, respectively. The music representation, with an initial feature dimension of 4800, is encoded to 512 dimensions through a two-layer Transformer encoder. The model is trained using the AdamW optimizer with a fixed learning rate of 10^-4. During training, the number of diffusion steps is set to 1K, while during inference it is set to 50. The variances β_t are linearly scaled from 8.5 × 10^-4 to 0.012. §.§ Comparison and ResultsWe conduct comparative experiments with several existing state-of-the-art methods, including FACT, Bailando, and EDGE, on the AIST++ and ChoreoSpectrum3D datasets. Given the disparate data division methods employed in the AIST++ dataset, we uniformly adopt a sliding window of 240 frames with a step size of 40 frames to obtain the final samples, yielding a total of 20,785 data samples with an action FPS of 60.
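Since BAS figures prominently in the comparisons that follow, a direct implementation of the score defined above is sketched here. Dance beats are commonly taken as local minima of joint velocity; both beat arrays are assumed given, and the σ value shown is an assumption.

```python
import numpy as np

def beat_align_score(music_beats, dance_beats, sigma=3.0):
    """Beat Align Score: for every music beat, measure how close the nearest
    dance beat falls, under a Gaussian of width sigma (the normalization
    parameter). Both inputs are 1-D arrays of beat times (e.g., in frames).
    """
    dance_beats = np.asarray(dance_beats, dtype=float)
    if dance_beats.size == 0:
        return 0.0
    score = 0.0
    for tm in music_beats:
        d2 = np.min((dance_beats - tm) ** 2)        # squared gap to nearest dance beat
        score += np.exp(-d2 / (2.0 * sigma ** 2))
    return score / len(music_beats)
```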
For each method, we employ the identical data partition strategy. The quantification results on the AIST++ dataset are presented in Table <ref>. For the FACT model, we fine-tune the released checkpoint until convergence is achieved. For the EDGE model, which downsamples the original data to 30 FPS, we modify it to 60 FPS to maintain a fair comparison. For the Bailando model, we employ the default parameters from the open-source code. According to Table <ref>, EnchantDance outperforms the best baseline model by 18.26 (39%) and 15.56 (67%) on FID_k and FID_g^†, respectively. These two values represent the kinetic and hand-crafted geometric features of the dance, respectively, directly reflecting the improvement in dance quality.EnchantDance also achieves the best results on Div_k, indicating that our method is capable of generating physically diverse dances. Although our method is only 0.55 (7%) worse than the best method, EDGE, on Div_g^†, a comprehensive assessment of the improvement on FID_g^† reveals that our method can still keep pace in diversity while maintaining the best generation quality. Our method is slightly inferior to EDGE with respect to BAS.We also train and evaluate the SOTA methods on the ChoreoSpectrum3D Dataset, with the quantitative results presented in Table <ref>. These results largely corroborate the conclusions drawn above. According to Tables <ref> and <ref>, while EDGE does not perform as well as other methods on FID_k, it frequently holds a leading position on BAS. Figure <ref> displays the visualization results, which demonstrate that the dance movements generated by EDGE are often relatively large, thus facilitating the detection of dance beats and contributing to an improvement in BAS. The red box in Figure <ref> indicates the frames with poor performance. It can be observed that EDGE often has a foot that does not touch the ground, leading to its suboptimal performance on FID_k, which primarily measures the motion's physical plausibility. EDGE also introduces PFC to measure physical plausibility. As shown in Table <ref>, our results on PFC are superior to those of the EDGE method, further confirming that EnchantDance can generate physically plausible dances. For the FACT model, it can be seen that, due to the limitations of the autoregressive model, its limbs are in a stiff state in the second half of the dance. The original FACT paper only uses 120 frames of seed motion as input and predicts 20 frames of data, which is insufficient for generating realistic long-term dances. For the yellow part, it can be observed that the dance movements corresponding to the ground truth have not changed significantly and can thus be considered to be within the same dance beat. The corresponding frames of our method and the EDGE method exhibit similar action changes, providing evidence that both EnchantDance and EDGE are leading on BAS. §.§ In-the-Wild MusicThe ability to generate generalizable dance motions is essential for real-world applications. To assess generalization, we employ two cross-dataset evaluation schemes. * Using the model trained on the AIST++ dataset to generate dances for music from the ChoreoSpectrum3D dataset.* Using the model trained on the ChoreoSpectrum3D dataset to generate dances for music from the AIST++ dataset. For our experiments, we select 1000 music samples from each dataset. Specifically, for the AIST++ dataset, we select 100 samples for each of its ten dance genres.
For the ChoreoSpectrum3D dataset, we select 250 samples for each of its four dance genres. The results are shown in Table <ref> and Table <ref>. Our method outperforms other methods on out-of-set data, while the collapse of the Bailando model is more obvious. To concurrently validate the effectiveness of our dataset and method, we also test on a diverse assortment of popular songs sourced from YouTube. §.§ Ablation StudyThe diffusion models of EDGE <cit.> and EnchantDance naturally form an ablation study. EDGE predicts motions directly in the denoising process. In an effort to reduce motion redundancy, we embrace a latent diffusion approach. EDGE can incorporate explicit constraints such as the contact consistency loss to address data artifacts like floating and jittery motion noted in the AIST++ dataset (as also mentioned in "Listen, Denoise, Action" <cit.>), whereas EnchantDance not only reduces redundancy but also lessens the occurrence of data artifacts. As shown in Table <ref>, EnchantDance outperforms EDGE on several metrics. We also conducted two other ablation studies: 1) without utilizing the music genre feature, and 2) directly using the feature representation from the last layer of the AST model as the music genre feature concatenated with Jukebox features. Keeping the weights of the VAE unchanged, each model was trained on a single Tesla A100 GPU for 200 epochs, taking approximately 48 hours, and the evaluation results are shown in Table <ref>. The fine-tuned music genre features achieved top-1 performance across all metrics. Additionally, the visualization results are depicted in Fig. <ref>, where yellow boxes indicate ballet dances and red boxes indicate pop dances. The first row represents w/ genre, and the second row represents w/o genre; they each correspond to dance sequences generated from the same piece of music. We selected frames at identical timestamps for comparative analysis. It is evident that the iconic spinning movements of ballet are pronounced in the first row within the yellow box, and similarly, the characteristic side-step initiation movements of pop dance are prominent in the first row within the red box.§ DISCUSSIONMusic-to-dance involves generating motion from complex audio signals, which include features such as rhythm, tempo, pitch, and intensity. This differs from text-to-motion, which typically focuses on semantics, presenting a challenge in physical signal parsing. EnchantDance employs Jukebox to extract the basic structure and melody of music, and also draws on the LLM fine-tuning approach LoRA <cit.> to design a music genre prediction network that captures the contextual information. The effectiveness of our music genre prediction network is validated in Table <ref> of the supplementary material, with the baseline network designed with reference to GTN-Bailando <cit.>. Although our approach is technically similar to MLD, we found that applying latent diffusion to music-to-dance still surpasses the existing diffusion method EDGE, as shown in Tables <ref> and <ref>. This remains insightful for the future design of new generative models in the music-to-dance domain.Moreover, the ChoreoSpectrum3D dataset is currently the largest reported dataset and is 14 times the size of the AIST++ dataset. Faced with such a vast motion dataset, the EDGE method, which implements the diffusion process directly on the original motion, inevitably involves a significant amount of motion redundancy.
In contrast, our method first compresses the dance motions to obtain dense motion features. In Bailando, motion compression is similarly first performed using a VQ-VAE, and as shown in Table 3, the dance quality generated by Bailando also exceeds that of EDGE. This indicates that motion compression is necessary when dealing with large datasets. In Tables 5 and 6, the ChoreoSpectrum3D dataset is extensive enough to encompass the distribution of AIST++, resulting in data generated after training on the ChoreoSpectrum3D dataset falling outside the distribution of AIST++, thereby leading to particularly high FID scores. Conversely, data generated after training on the AIST++ dataset remains within the distribution of ChoreoSpectrum3D, which does not exhibit this issue. This is indicative of a problem with the current music-to-dance FID metric, which measures the distance between the data distribution on the test set and the overall dataset distribution.§ CONCLUSION AND FUTURE WORKIn this paper, we present a latent-based method combined with a genre prediction network for generating plausible dance sequences that conform to music. Compared to the existing SOTA methods, our EnchantDance method produces more realistic and diverse dances, with superior generalization performance on out-of-set data. Additionally, we have released a large-scale music-dance dataset containing 70.32 hours of well-paired data, which is currently the largest music-dance dataset available and is crucial for enhancing the generalization ability of the model and generating high-quality dance movements. While EnchantDance does not currently support generative dance editing, it has great potential for such capabilities, and future work will explore dance editing, such as the editing mode implemented in EDGE <cit.>.§ SUPPLEMENT EXPERIMENTS OF WILD MUSICTo further validate the effectiveness of our dataset and method, we conducted experiments on a diverse assortment of popular songs sourced from YouTube. These songs were used for our in-the-wild music demonstrations, following the same music as EDGE <cit.>.* Doja Cat - Woman* Luis Fonsi - Despacito ft. Daddy Yankee* ITZY - LOCO* Saweetie - My Type As dance movements generated for in-the-wild music have no ground truth for FID and Diversity evaluation, we conduct a quantitative evaluation on the BAS and PFC metrics. The evaluation results are presented in Table <ref>. As can be observed, the generalization ability of the model trained on the ChoreoSpectrum3D dataset significantly surpasses that of the model trained on the AIST++ dataset. Furthermore, we conduct a comparison with the EDGE model, which further verifies the superior generalization ability of our EnchantDance model. The visualization results are saved in the Demo Folder.§ EXPERIMENTS OF GENRE PREDICTION NETWORKThe music data in the AIST++ dataset <cit.> contains a large number of drums, resulting in a consistent overall music style. The dance genres in the AIST++ dataset include Break, Pop, Lock, Middle Hip-hop, LA-style Hip-hop, Waack, Krump, Street Jazz, Ballet Jazz, and House. Based on the suggestions of professional musicians and dancers, we classified Breaking, Locking, Hip-hop, Popping, Urban, and Jazz as Pop dances due to their similar corresponding music styles.In contrast to our ChoreoSpectrum3D dataset, the AIST++ dataset contains a large number of music and dance types within the same genre, Pop.
Additionally, the AIST++ dataset only contains 60 unique pieces of music, which is not conducive to training a music genre prediction network. In our ChoreoSpectrum3D dataset, dance genres are uniformly divided into Pop (Breaking, Locking, Hip-hop, Popping, Urban, and Jazz), Ballet, Latin, and House dance. Their corresponding music styles are also distinct from one another and suitable for music genre prediction.Therefore, we trained the music genre network on the ChoreoSpectrum3D dataset and used the music data from the AIST++ dataset as out-of-set test data.Our music genre prediction network is based on the large-scale Audio Spectrogram Transformer (AST) model <cit.>. We employ a simple and learnable CNN block to fuse the prior knowledge of the large-scale model and fine-tune it on the ChoreoSpectrum3D dataset. The input to the AST model consists of 10 seconds of audio, which is consistent with the music data in our ChoreoSpectrum3D dataset, thus requiring no additional processing. To compare with our genre prediction network, we constructed a baseline model: an audio classification network built by concatenating four ResNet blocks with a kernel size of 3 and a channel count of 50. Each method was trained for a total of 100 epochs using the Adam optimizer with a learning rate of 0.0001. For each method, we conducted ten training runs with different random seeds. We used the mean and standard deviation of the accuracy rate as evaluation metrics. The quantitative evaluation results are presented in Table <ref>. Our method achieves effective prediction results on both in-set and out-of-set data.§ IMPLEMENTATION DETAILS ON FACT AND BAILANDOIn our study, we utilize the official implementation of the FACT model <cit.>. Upon inspection, it was observed that the published checkpoints for the FACT model were not fully trained. Consequently, we re-trained the model from the released checkpoint until convergence was achieved. For the AIST++ dataset, we employed the same methodology as described in the original paper. Specifically, a seed motion of 120 frames was used, with 20 frames of dance movements being predicted at each iteration. For the ChoreoSpectrum3D dataset, we used a seed motion of 100 frames and predicted 20 frames of dance movements at each iteration.We also utilize the official implementation of Bailando <cit.>. For the AIST++ dataset, due to the different data segmentation strategies, we retrained the model without using the released checkpoint. Similarly, for the ChoreoSpectrum3D dataset, we also performed retraining. The final dance representations are keypoints.For music conditional features, FACT uses the publicly available audio processing toolbox Librosa <cit.> to extract music features including: a 1-dim envelope, 20-dim MFCC, 12-dim chroma, 1-dim one-hot peaks, and 1-dim one-hot beats, resulting in a 35-dim music feature. The music features of Bailando are also extracted by the toolbox Librosa, including mel-frequency cepstral coefficients (MFCC), MFCC delta, constant-Q chromagram, tempogram, and onset strength, which are 438-dim in total.§ CHOREOSPECTRUM3D DATASETWe have built the largest music and dance dataset so far, including four genres: Pop, Ballet, Latin, and House dance. The examples are shown in the figure below. | http://arxiv.org/abs/2312.15946v1 | {
"authors": [
"Bo Han",
"Yi Ren",
"Hao Peng",
"Teng Zhang",
"Zeyu Ling",
"Xiang Yin",
"Feilin Han"
],
"categories": [
"cs.SD",
"cs.GR",
"eess.AS"
],
"primary_category": "cs.SD",
"published": "20231226081910",
"title": "EnchantDance: Unveiling the Potential of Music-Driven Dance Movement"
} |
Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation Zhuohang Dang, Minnan Luo, Chengyou Jia, Guang Dai, Xiaojun Chang, Jingdong Wang ========================================================================================= Cross-modal retrieval relies on well-matched large-scale datasets that are laborious to collect in practice. Recently, to alleviate expensive data collection, co-occurring pairs from the Internet have been automatically harvested for training. However, this inevitably includes mismatched pairs, i.e., noisy correspondences, undermining supervision reliability and degrading performance. Current methods leverage deep neural networks' memorization effect to address noisy correspondences, but they overconfidently focus on similarity-guided training with hard negatives and suffer from self-reinforcing errors. In light of the above, we introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM). Specifically, by viewing sample matching as a classification task within the batch, we generate classification logits for the given sample. Instead of a single similarity score, we refine sample filtration through energy uncertainty and estimate the model's sensitivity to selected clean samples using swapped classification entropy, in view of the overall prediction distribution. Additionally, we propose cross-modal biased complementary learning to leverage negative matches overlooked in hard-negative training, further improving model optimization stability and curbing self-reinforcing errors. Extensive experiments on challenging benchmarks affirm the efficacy and efficiency of SREM.

§ INTRODUCTION

Cross-modal matching aims to retrieve relevant samples across different modalities, which has become a focal research area due to the prevalence of multimedia data. Contemporary methods achieve semantic alignment using modal-specific encoders <cit.>. They project data into a unified feature space, where matched data from different modalities are drawn together, while mismatched ones are pushed apart. To alleviate the laborious collection of well-matched data, recent datasets <cit.> automatically collect co-occurring sample pairs from the Internet for training. However, they contain around 20% mismatched pairs <cit.>, namely noisy correspondences. Encouraging these mismatched pairs to be similar will significantly degrade the matching performance. Recent advancements <cit.> have tackled noisy correspondences through deep neural network (DNN) memorization. This effect enables clean samples to exhibit higher similarities than noisy ones after the initial few epochs <cit.>. Specifically, after warmup, these methods further refine similarity prediction with the following alternate steps: 1) using similarity scores to identify clean samples; 2) deriving soft margins proportional to similarity scores for robust matching of selected clean samples. The soft margins are employed in a hinge-based ranking loss, where a larger margin intensifies the model's sensitivity towards differentiating the given sample from its negatives. However, Figure 1(a) shows that such an approach is susceptible to self-reinforcing errors. The primary vulnerability arises from the fact that the aforementioned two steps, clean sample selection and corresponding sensitivity estimation, rely heavily on the model's similarity prediction. This leads to a critical issue where confident but incorrect similarity predictions are amplified during subsequent training, forming a loop of self-reinforcing errors <cit.>.
Furthermore, the hinge-based ranking loss focuses solely on the query's positive and hard negative samples, overlooking a wealth of negative information. Figure 1(b) shows that this narrow focus can result in suboptimal model optimization, potentially aggravating self-reinforcing errors. In light of the above, we propose a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM). Specifically, SREM encompasses three core modules: 1) We introduce a novel energy-guided sample filtration to complement conventional similarity-based sample filtration. We first produce classification logits for a sample by viewing sample matching as a classification task within the batch. We then use energy scores derived from the classification logits to gauge the model's uncertainty during sample selection. As a result, this strategy ensures the selected clean samples maintain both high similarity and low uncertainty, paving the way for more precise data division. 2) We propose a Swapped Gradient Weighting (SGW) strategy. SGW assesses the model's sensitivity towards individual samples by leveraging swapped classification entropy, ensuring robust matching of selected clean samples. Samples with lower entropy suggest higher prediction confidence, so the model should be more sensitive to them and let them contribute more to optimization <cit.>. In contrast to a single similarity score, classification entropy considers the model's prediction distribution over both clean and negative samples, ensuring robustness. 3) We introduce a novel Cross-Modal Biased Complementary Learning (CMBCL) objective for leveraging negative samples overlooked in the hinge-based ranking loss. We perceive these overlooked negative matches as “complementary labels" that essentially signal non-matching samples, guiding the model to distance positive samples from all negatives and thus circumventing potential self-reinforcing errors. Extensive experiments highlight the substantial improvement achieved by SREM, surpassing state-of-the-art methods by more than 1% in average recall. Moreover, SREM also boasts a reduction in training time of more than 40%, attesting to its efficiency. In addition to empirical validations, we theoretically prove the efficacy of CMBCL, as it converges to an optimal classifier equivalent to one trained with true labels. We also highlight its generality, demonstrating that CMBCL encompasses the previously strong competitor RCL <cit.> as a special case.

§ RELATED WORK

§.§ Cross Modal Retrieval

Cross-modal matching aims to project images and texts into a unified feature space where matched data from different modalities are similar while mismatched data are dissimilar. This matching can be performed globally <cit.>, by matching images and text from a comprehensive perspective, or locally <cit.>, connecting specific regions within images to words in sentences for a more granular alignment. Contrary to previous approaches that presuppose well-matched training data, the prohibitive collection costs have fostered the emergence of new paradigms like noisy correspondences, a prevalent issue in domains such as person re-id <cit.>, graph matching <cit.>, and multi-view learning <cit.>. Current methods in cross-modal matching <cit.> primarily employ multi-step frameworks: they first estimate the distribution of instance-level loss/similarity across the entire dataset. Then they compute the posterior probability as the pseudo-label for each sample, which is further filtered by a threshold, and clean samples are used for training.
To eliminate the additional computation overhead caused by similarity distribution estimation, DECL <cit.> uses similarity with evidential learning to dynamically filter out noisy correspondences within each batch. However, similarity-guided training in previous methods leads to self-reinforcing errors. In contrast, our SREM addresses overconfidence in similarity scores through overall prediction distributions, effectively mitigating such errors and notably enhancing performance.

§.§ Complementary Label Learning

Unlike conventional classification tasks, samples in complementary label learning (CLL) are assigned complementary labels that indicate classes they do not belong to. To effectively use this weak supervision, <cit.> assume a uniform distribution of complementary labels and prove that an optimal classifier can be learned from complementary labels alone. Differently, some works <cit.> consider the unknown distribution of complementary labels. By estimating label transition probabilities, they inferred the distribution of complementary labels and subsequently refined them for training. In noisy correspondence learning, RCL <cit.> extends CLL to introduce a novel contrastive learning framework that exclusively leverages negative information, mitigating the potential negative effects of mismatched samples. However, the neglect of powerful positive supervision leads to suboptimal results for RCL. On the contrary, beyond using positive supervision in the ranking loss, we additionally leverage the dissimilarity of negative samples to utilize negative information more effectively, thereby achieving a more robust training regime against noisy correspondence.

§ METHODOLOGY

§.§ Problem Definition

In line with previous works, we use image-text retrieval as a proxy task to explore noisy correspondence in cross-modal matching, which consists of two sub-tasks: image-to-text (i2t) and text-to-image (t2i) retrieval. Typically, we are provided with a training dataset 𝒟={(I_i,T_i,m_i)}_i=1^N, where N is the data size and (I_i,T_i,m_i) is the i-th image-text pair (I_i,T_i) with label m_i∈{0,1} indicating whether they are matched. In noisy correspondence, an unknown portion of pairs in 𝒟 is mismatched, i.e., the image and text are not matched but are labeled as matched.

§.§ Model Overview

In this section, we present our SREM in detail, whose overview is shown in <Ref>. For simplicity, we take image-to-text retrieval as a showcase to introduce the pipeline of SREM, while text-to-image retrieval is conducted in a symmetric manner. Initially, the feature encoder generates similarity logits from the input pair. Then, we employ three elaborately designed modules to mitigate the self-reinforcing errors during training. Given the disparities in prediction distribution, we utilize energy uncertainty to segregate clean samples, denoted as 𝒟_clean, from noisy correspondences, 𝒟_noisy. To enhance SREM's robustness, we introduce the swapped gradient weighting and cross-modal biased complementary learning frameworks. The former proposes a gradient-rescaled ranking loss L_w, while the latter effectively leverages the negative matches overlooked in L_w as complementary labels. We will detail each component and the corresponding optimization objective in what follows.

§.§ Feature Encoder

Initially, the feature encoder projects both visual and textual data into a unified feature space using modal-specific encoders f and g, respectively.
Within the unified feature space, a function h computes the similarity logit as F_ij = h(f(I_i),g(T_j)) (h(I_i,T_j) for short), where the corresponding similarity score is defined as S_ij=σ(F_ij). Here, σ(·) denotes the sigmoid activation function.

§.§ Energy-Guided Sample Filtration

Our objective is to circumvent the pitfalls of previous methods that overconfidently divide samples by similarity prediction, thereby introducing potential sample selection risk. Take the similarity scores [0.85, 0.80, 0.82] as an example: the first score represents the given sample pair, while the others correspond to its negative samples. Even though the given sample pair exhibits a high similarity score, it is not significantly different from the negative samples, suggesting a possible mismatch. Hence, selecting such a sample pair as “clean” based solely on similarity can be risky. To address this issue, by considering the overall prediction distribution, we aim to explore sample selection uncertainty to complement similarity-based sample filtration. Given a batch size B, we first generate the classification logits F_i of the visual input I_i by viewing sample matching as a classification task within the batch. F_i is formulated as F_i={F_i1,⋯,F_iB} with a corresponding label y_i=i. Due to DNNs' memorization effect, the model initially becomes adept at recognizing clean samples, leading to a unimodal distribution at y_i. In contrast, the model struggles to differentiate noisy correspondences from their negatives, giving rise to a more uniform distribution. In view of such differences, we turn to energy uncertainty in logit space, which is a widely accepted metric in the uncertainty learning literature <cit.>. Specifically, the energy uncertainty corresponding to the visual input I_i can be calculated by: Energy(I_i)=-log∑_b=1^B e^F_ib. Intuitively, a more uniformly distributed prediction (i.e., a noisy correspondence) leads to higher estimated energy uncertainty <cit.>. Therefore, we select the clean samples by applying a threshold τ and the maximum similarity constraint <cit.>, i.e., 𝒟_clean = {i | Energy(I_i)<τ and y_i=argmax_j F_ij}, while 𝒟_noisy refers to mismatched samples. In this sense, the selected samples maintain both low uncertainty and high similarity, paving the way for more precise sample division. Moreover, we conceive an energy-bounded loss L_u^I to reduce the energy uncertainty of clean samples while enhancing that of noisy samples, enlarging the margin between matched and mismatched samples, i.e., L_u^I = 𝔼_i∼𝒟_clean[Energy(I_i)-m_clean]_+^2 + 𝔼_i∼𝒟_noisy[m_noisy-Energy(I_i)]_+^2, where [x]_+=max(x,0); m_clean and m_noisy are separate margins that penalize the clean (noisy) samples with energy uncertainty higher (lower) than the given margin.

§.§ Swapped Gradient Weighting

After sample filtration, it is risky to directly train the model on 𝒟_clean, as it potentially contains some false positives <cit.>. To ensure robust training, it is crucial to devise strategies that allow the model to adaptively maintain varied sensitivities to samples within 𝒟_clean. Instead of an overconfident single similarity score, we introduce classification entropy to estimate the sensitivity to each clean sample. Visual input I_i's classification distribution is defined as P_i=softmax(F_i), and the corresponding normalized classification entropy e(P_i) is formulated as: e(P_i)=-∑_j=1^B(P_ijlog P_ij)/log B. Here log B is the maximum entropy, used to scale e(P_i) into [0,1] for numerical stability.
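To make these quantities concrete, a minimal PyTorch sketch of the energy uncertainty, the energy-bounded loss, and the normalized classification entropy is given below. This is our own illustration rather than the authors' released code; the tensor shapes, the threshold τ, and the margins are assumptions.

```python
import torch

def energy_uncertainty(logits):
    # Energy(I_i) = -log sum_b exp(F_ib): one score per sample in the batch
    return -torch.logsumexp(logits, dim=-1)

def clean_mask(logits, tau):
    # D_clean: low energy uncertainty AND the matched (diagonal) logit is maximal
    b = logits.size(0)
    self_is_max = logits.argmax(dim=-1) == torch.arange(b, device=logits.device)
    return (energy_uncertainty(logits) < tau) & self_is_max

def energy_bounded_loss(logits, is_clean, m_clean, m_noisy):
    # L_u: push clean energies below m_clean and noisy energies above m_noisy
    # (assumes both subsets are non-empty in the batch)
    e = energy_uncertainty(logits)
    clean_term = torch.relu(e[is_clean] - m_clean).pow(2)
    noisy_term = torch.relu(m_noisy - e[~is_clean]).pow(2)
    return clean_term.mean() + noisy_term.mean()

def normalized_entropy(logits):
    # e(P_i) = -sum_j P_ij log P_ij / log B, scaled into [0, 1]
    p = torch.softmax(logits, dim=-1)
    ent = -(p * torch.log(p.clamp_min(1e-12))).sum(dim=-1)
    return ent / torch.log(torch.tensor(float(logits.size(-1))))
```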
In this sense, a low e(P_i) highlights the model's ability to recognize matched samples while suppressing similarity scores to other negative samples. Consequently, the model should be more sensitive to samples with lower e(P_i) in optimization <cit.>. In light of the above, let w_i^I denote the entropy-based model sensitivity to visual input I_i in i2t retrieval, formulated by: w_i^I = 1 - e(P_i)1(α-S_ii+σ(h(I_i,T_ϕ(i))) > 0), where α>0 is the expected margin between positive and negative matches; ϕ(i)=argmax_j≠ i F_ij and T_ϕ(i) is the hard negative text of I_i, i.e., the negative text most similar to I_i within the batch. Moreover, we employ the indicator function 1(·) to evaluate whether a sample and its hard negative achieve the expected discrimination α. This design avoids unnecessary gradients on samples exhibiting satisfactory discrimination, reducing the risk of overfitting. Besides, we further employ a swapped prediction strategy on the calculated e(P_i), which is widely used in cross-modal tasks for improving robustness <cit.>. Its key idea is to use the weights derived from one modality for the other modality, promoting cross-modal consistency in the learning process. For example, we use w_i^T derived from t2i classification entropy for i2t retrieval training, and vice versa. Specifically, we apply w_i^T with the hinge-based ranking loss, defined as: L_w^i2t=𝔼_i∼𝒟_clean[α-w_i^T S_ii+σ(h(I_i,T_ϕ(i)))]_+. As a result, the derivative of L_w^i2t with respect to the model parameters θ is given by the chain rule ∂ L_w^i2t/∂θ=(∂ L_w^i2t/∂ S)(∂ S/∂θ) with -∂ L_w^i2t/∂ S_ij = { w_i^T, if j=i; -1, if j=ϕ(i); 0, otherwise }. <Ref> implies that clean samples exhibiting more certain distributions will retain larger gradients and that, consequently, the model is more sensitive to them. Compared to sample-reweighting methods <cit.>, our SGW strategy further suppresses similarity scores to hard negatives, since -∂ L_w^i2t/∂ S_iϕ(i)=-1. Thus, <Ref> can effectively adjust the model's sensitivity to different samples in optimization, enhancing matching robustness.

§.§ Cross-Modal Biased Complementary Learning

Evidently, <Ref> highlights that L_w^i2t overlooks numerous negative similarities, defined as: {S_ij | j≠ i; and, if i ∈𝒟_clean, j ≠ϕ(i)}. These overlooked negative similarities maintain zero gradients and are ignored in model optimization. However, in classification terms, these overlooked similarities indicate the samples that do not match the given sample, i.e., complementary labels. As shown in <Ref>, harnessing these complementary labels can enhance the stability of model optimization. In this sense, we construct an auxiliary dataset 𝒟_neg={(i, 𝒴̅_i)}_i=1^B within each batch. Here i is the index of the given image I_i within the batch, and 𝒴̅_i is the corresponding set of complementary labels, formulated as: 𝒴̅_i={0, 1,⋯,B-1}∖{y_i}. Furthermore, we explore non-uniformly distributed complementary labels to improve the model's generality, due to the following facts: 1) Ideal uniformly distributed complementary labels do not necessarily hold in real-world data, particularly in instance-level classification.
2) Non-uniform complementary labels permit the model to focus more on the harder negatives, thereby preventing informative supervision from being overwhelmed by redundant negative samples. Inspired by <cit.>, we prefer to choose negative texts that have higher similarity to I_i as the complementary label, enabling the model to focus more on challenging and informative negative counterparts. Notably, we directly use similarity to estimate the selection probability of complementary labels, due to the fact that complementary labels are leveraged to suppress negative information and do not involve self-reinforcing errors. Specifically, we employ MS <cit.> to gauge the likelihood of selecting T_j as I_i's complementary label. MS considers both self and relative similarities, defined as: P̅_ij^i2t= e^β(S_ij-b)/(1+∑_k ∈𝒴̅_i e^β(S_ik-b)), where β and b are two hyperparameters of Binomial deviance <cit.>, controlling the smoothness of the selection distribution. Note that, since the hard negatives selected from 𝒟_clean have already been considered in L_w^i2t, we exclude these samples to prevent their over-representation in the model training process, which is formulated by: P̅_iϕ(i)^i2t=-∞, ∀ i∈𝒟_clean. We then rectify the complementary labels using the overall selection probability <cit.>, i.e., S^' = softmax(P̅^i2t)^T S. Ultimately, the cross-modal biased complementary learning objective L_c^i2t on 𝒟_neg is formulated as: L_c^i2t=-𝔼_(i,𝒴̅_i)∼𝒟_neg𝔼_j∼𝒴̅_i[log(1-S^'_ij)].

Theoretical Analyses We provide theoretical evidence to better elucidate the effectiveness of our CMBCL. Specifically, we demonstrate CMBCL's efficacy in <Ref> as: Given sufficient data with complementary labels, minimizing <Ref> can yield an optimal classifier equivalent to that trained with the true labels. Additionally, CMBCL, by considering the distribution of complementary labels, exhibits superior generalizability. In detail, CMBCL generalizes the previous strong competitor RCL <cit.> as a special case.

§.§ Model Optimization

To ensure consistent performance across modalities, we employ SREM for bidirectional matching, encompassing both image-to-text and text-to-image tasks, formulated by: min_θ L=0.5(L_w^i2t+L_w^t2i)+λ_1(L_u^I+L_u^T)+λ_2(L_c^i2t+L_c^t2i), where L_u^T, L_w^t2i, and L_c^t2i represent the objectives when symmetrically applying energy-guided sample filtration, SGW, and CMBCL for text-to-image retrieval. λ_1, λ_2∈[0,1] are hyperparameters to adjust the effect of energy uncertainty estimation and negative information utilization.

§ EXPERIMENTS

§.§ Experimental Setting

Datasets Following previous works, we evaluate our SREM using three image-text retrieval datasets: COCO <cit.>, Flickr30K <cit.>, and CC152K <cit.>. The first two are well-annotated, while the third is harvested from the Internet. We provide an overview of the dataset details as follows:

* COCO and Flickr30K contain 123287 and 31783 images with 5 corresponding captions per image, respectively. Following <cit.>, we maintain 5K/5K and 5K/5K image-text pairs for validation/test, leaving the remainder for training.
* CC152K is a subset of Conceptual Captions <cit.> containing 152K image-text pairs. We use 150K pairs for training, 1K for validation, and another 1K for testing.

Evaluation Metrics Following previous work <cit.>, we evaluate SREM with the recall rate at K (R@K), which measures the proportion of relevant items found within the top K results of a ranked list.
By querying both images and texts, we report the corresponding results of R@1, R@5, and R@10, which are further summed to evaluate the overall performance, i.e., R_sum.

Implementation Details As a plug-and-play module, our SREM can be seamlessly applied in various image-text retrieval methods to improve their robustness against noisy correspondences. Here, we adopt the same backbone, SGRAF <cit.>, with the same training settings as <cit.> for fair comparisons. Specifically, we warm up the model for 5 epochs with L_c^i2t and L_c^t2i to achieve initial convergence, followed by a 50-epoch training process. We employ a batch size of 128 and an Adam <cit.> optimizer with a learning rate of 2e-4 that is decayed by 0.1 after 25 epochs.

§.§ Comparison with State-Of-The-Art

We compare the proposed SREM against current state-of-the-art (SOTA) methods to demonstrate its effectiveness, including the general image-text retrieval methods SCAN <cit.>, VSRN <cit.>, IMRAM <cit.>, SGR, SAF <cit.>, and the noisy-correspondence-robust methods NCR <cit.>, DECL <cit.>, MSCN <cit.>, BiCro <cit.>, and RCL <cit.>.

§.§.§ Results on Synthetic Noise of Flickr30K and MS-COCO

As in previous works, we emulate noisy correspondences by randomly shuffling the training images and captions for specific noise ratios. We report results with noise ratios of 20%, 40%, and 60% for a comprehensive comparison with current SOTAs, such as MSCN and BiCro. <Ref> details the results on Flickr30K and MS-COCO under different noise ratios, where the results of MS-COCO are averaged over 5 folds of 1K test images as in previous works. We find that the strong noise-robust competitors, i.e., MSCN and BiCro, achieve markedly better results than general image-text retrieval methods, highlighting the necessity of designing models that can effectively withstand noise. However, they introduce strong priors, i.e., 3% additional clean samples for MSCN and an extra model ensemble for BiCro, resulting in costly data collection and computation overhead, respectively. More troublingly, as the noise ratio increases, the performance of these methods deteriorates drastically due to self-reinforcing errors. In contrast, our SREM, devoid of any such priors, is more effective and stable, improving R_sum by more than 1% on average.

§.§.§ Results on Real-World Noise of CC152K

CC152K, automatically harvested from the Internet, inherently contains approximately 20% noisy correspondences. It can thereby be used to evaluate SREM's ability to handle real-world noise. We train and evaluate SREM without introducing any additional synthetic noise. <Ref> shows that SREM performs commendably even without any priors. Specifically, it outperforms the strongest competitors MSCN and BiCro by an average of 1% in R_sum. Besides, SREM consistently and significantly triumphs over all baselines in all results, except for R@1 of retrieving images. These results demonstrate SREM's appealing efficacy in real-world scenarios.

§.§ Ablation Studies

This section conducts ablation studies to comprehensively evaluate the performance of each component in SREM.

§.§.§ Component Analyses

<Ref> shows that the vanilla-trained model exhibits suboptimal performance, illustrating its susceptibility to disturbances caused by noisy correspondences. The energy-guided sample filtration significantly enhances the performance by more than 10% on R@1. When using swapped gradient weighting, we observe performance boosts in all results, except R@5 for text retrieval.
Furthermore, leveraging unused negative information as complementary labels considerably improves performance, as evidenced by an increase of more than 1% in R_sum. The rectification of complementary labels further enhances performance, validating the efficacy of considering complementary label distributions. These results underline the significant role of complementary labels in fortifying retrieval robustness. The best performance is achieved with all proposed components, demonstrating their efficacy.

§.§.§ Visualization on Energy Uncertainty

<Ref> visualizes energy uncertainty during training. As training progresses, the energy uncertainty of clean samples becomes lower while that of noisy correspondences increases, manifesting a clear polarizing trend. These observations validate the efficacy of energy uncertainty estimation for noisy correspondences. Therefore, the energy uncertainty derived from the overall prediction distribution can naturally be used to differentiate between noisy and clean pairs, further boosting robustness against noisy correspondences.

§.§.§ Visualization on Self-Reinforcing Errors

We track the training progress to validate the efficacy of our SREM in alleviating self-reinforcing errors. Specifically, we measure the performance at each epoch, as well as the noisy gradient ratio relative to all positive gradients (the proportion of false positive gradients created by enhancing mismatched samples' similarity in L_w). We also provide the results of MSCN and BiCro for more comprehensive and fair comparisons. As shown in <Ref>, since CMBCL during warmup avoids self-reinforcing errors, SREM starts with the lowest noisy gradient ratio. During training, with its carefully designed components, SREM effectively suppresses self-reinforcing errors, exhibiting a significantly lower and more stable noisy gradient ratio, i.e., less than 7%. In contrast, MSCN and BiCro start with higher noisy gradient ratios that rapidly increase during training due to their similarity-based training with hard negatives. As a result, SREM achieves better performance with stable optimization, while MSCN and BiCro exhibit unsatisfactory results, whose performance gradually drops as the noisy gradient ratio increases. These results highlight the efficacy of SREM in alleviating self-reinforcing errors.

§.§.§ Efficiency Analyses

It is noteworthy that SREM also maintains superior efficiency. Specifically, <Ref> records the training overhead per epoch on CC152K using an NVIDIA Tesla A40 48G. The training time of MSCN and BiCro contains two parts, as they first pre-compute similarity across the entire dataset and then conduct sample filtration before training. These steps incur additional computation and storage overhead. Moreover, MSCN computes meta gradients for model optimization and BiCro rectifies soft correspondences via numerous anchor samples, both of which are computationally expensive and thus further diminish efficiency. Differently, our SREM not only eliminates the pre-computation but also employs computationally efficient techniques, i.e., energy uncertainty, entropy, and complementary learning. Consequently, SREM reduces the training time by more than 40%, highlighting its efficiency and potential applicability to large-scale datasets.

§.§.§ Detected Noisy Correspondences in Real-World Scenario

<Ref> shows some real-world noisy correspondences in CC152K detected by our SREM, with their corresponding energy uncertainty.
Specifically, SREM is not limited to recognizing only obvious noisy pairs containing completely irrelevant information. It can also identify hard mismatched pairs with subtle semantic misalignment, e.g., the missing elements of the phone, hands, steak, and fountain, as well as the incongruence between concepts like “building" and “industry". These results qualitatively demonstrate SREM's efficacy, revealing its promise for handling real-world applications.

§ CONCLUSION

This paper presents a novel framework, SREM, to address the challenges of noisy correspondences in cross-modal matching. Using per-sample classification logits, SREM ingeniously employs energy uncertainty to filter out noisy correspondences, paving the way for more precise data division. It then applies swapped classification entropy to recalibrate gradients, offering a more nuanced approach to assessing the model's sensitivity in sample matching compared to single similarity scores. Moreover, the CMBCL framework within SREM harnesses previously overlooked negative information, ensuring stable model optimization. Both theoretical evidence and extensive experiments on challenging benchmarks corroborate SREM's superiority in efficacy, efficiency, and generality. We hope our SREM will drive improvements in both the efficacy and efficiency of noisy correspondence learning, providing new insights into building more robust cross-modal information retrieval systems.

§ ACKNOWLEDGEMENT

This work is supported by the National Key Research and Development Program of China (No. 2022YFB3102600), National Nature Science Foundation of China (No. 62192781, No. 62272374, No. 62202367, No. 62250009, No. 62137002), Project of China Knowledge Center for Engineering Science and Technology, Project of Chinese Academy of Engineering “The Online and Offline Mixed Educational Service System for `The Belt and Road' Training in MOOC China”, and the K. C. Wong Education Foundation. | http://arxiv.org/abs/2312.16478v1 | {
"authors": [
"Zhuohang Dang",
"Minnan Luo",
"Chengyou Jia",
"Guang Dai",
"Xiaojun Chang",
"Jingdong Wang"
],
"categories": [
"cs.LG"
],
"primary_category": "cs.LG",
"published": "20231227090343",
"title": "Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation"
} |
A Quantum Approach to solve N-Queens Problem Santhosh G S 1, Piyush Joshi 2, Ayan Barui 3, and Prasanta K. Panigrahi 3,* 1 Sri Sivasubramaniya Nadar College of Engineering, Rajiv Gandhi Salai (OMR), Kalavakkam, 603110, Tamil Nadu, India 2 Indian Institute of Space Science and Technology, Valiamala, Thiruvananthapuram, 695547, Kerala, India 3 Indian Institute of Science Education and Research Kolkata, Mohanpur, 741246, West Bengal, India ============================================================================================================================================================== In this work, we have introduced two innovative quantum algorithms, the Direct Column Algorithm and the Quantum Backtracking Algorithm, to solve the N-Queens problem, which involves the arrangement of N queens on an N × N chessboard such that they are not under attack from each other on the same row, column, or diagonal. These algorithms utilize controlled W-states and dynamic circuits to efficiently address this NP-Complete computational problem. The Direct Column Algorithm strategically reduces the search space, simplifying the solution process even though its circuit complexity grows exponentially with the problem size, while the Quantum Backtracking Algorithm emulates classical backtracking techniques within a quantum framework, which opens the possibility of solving complex problems like satellite communication, routing, and VLSI testing. N-Queens, Quantum Algorithms, Controlled W-State

§ INTRODUCTION

The field of computer science presents various computational problems, which encompass tasks, questions, or challenges that demand computer-based or algorithmic solutions. These problems exhibit a wide spectrum of complexity, scope, and applications, making them fundamental in the realm of computer science <cit.>. Computational problems are often categorized into different complexity classes, based on the computational resources required for their efficient solutions. One prominent class of problems is NP-Complete (Non-Deterministic Polynomial Complete) problems. In this context, 'P' stands for Polynomial time complexity, indicating that problems in class 'P' can be solved in polynomial time. Conversely, 'NP' stands for Non-deterministic Polynomial, and it represents a class of problems for which a proposed solution can be verified in polynomial time. However, it remains an open question whether all problems in 'NP' can also be solved in polynomial time (i.e., whether P equals NP). Consider NP-Complete problems, which form a subset of the 'NP' class. NP-Complete problems share a common characteristic: if a single problem from this class is solved efficiently (in polynomial time), it implies that all problems within this category can be solved efficiently as well. These problems have a rich history in the field of computer science, posing significant challenges and opportunities for exploration. In this paper, we will focus on our efforts to address a well-known NP-Complete problem, the notorious N-Queens problem <cit.>.
We will detail our approach and findings related to this intriguing challenge. The paper's structure is outlined as follows: Section <ref> defines and elaborates on the intricacies of the N-Queens problem. In Section <ref>, we briefly explain the approach and the solution presented in the previous paper. Section <ref> encompasses the novel improvements that have been integrated into the previous work. In Section <ref>, we explore two novel algorithms, each employing varied strategies to tackle this problem. Finally, we conclude the paper by summarizing our findings and presenting future discussion points.

§ N-QUEENS PROBLEM

The N-Queens problem is a classic example of an NP-Complete problem, known for its challenging nature. Since the 1850s, first emerging as a mathematical recreation, the problem has seen numerous published attempts <cit.>. Pauls first established the existence of at least one solution for the generalized problem when N>3, and multiple proven bounds suggest that the number of solutions increases exponentially with increasing N <cit.>. On the other hand, the problem is well-suited for search-based algorithms. For instance, the backtracking search algorithm <cit.>, which employs a recursive approach, can generate all potential solution sets for an N × N board. Imagine a world where the chessboard's size is not limited to the traditional 8×8 squares; it can be of any dimension as long as it remains a square. We will refer to it generally as the N× N Chessboard. In this world, the only chess pieces available are Queens, meaning that Queens are the sole occupants of the chessboard. With this setup in mind, let us consider a fundamental question: how many Queens can be placed on an N× N Chessboard such that none of the Queens attack each other? The answer is N Queens. On an N× N chessboard, a maximum of N Queens can be placed in such a way that they do not threaten each other. However, not all arrangements of Queens satisfy this condition. Only specific arrangements allow all the Queens to coexist harmoniously. This problem, known as the N-Queens problem, challenges us to find all the valid combinations of placing N Queens on an N× N chessboard. In practice, however, the backtracking algorithm faces challenges in finding substantially distinct solutions within the solution space due to its high time complexity of 𝒪(N!) <cit.>. This complexity is what continues to make the N-Queens problem a subject of study and exploration <cit.>. From Table <ref>, it becomes evident that there is no straightforward way to model the relationship between the N× N chessboard and the corresponding number of solutions. In the case of the N-Queens Problem, it has been demonstrated that finding all the solutions is beyond the #P-class <cit.>. This, in turn, confirms the absence of a closed-form solution for the problem and makes computer searches particularly helpful. However, classical computational methods are very limited in addressing such challenges due to the need to process large datasets and explore extensive solution spaces. In recent years, quantum technology has shown rapid advancements toward computational quantum advantage on different numerical tasks <cit.>, particularly optimization problems (demonstrated recently even on near-term devices). To date, numerous algorithms have been developed to solve the N-Queens problem, each with varying time complexities. While we won't delve into every algorithm, let's briefly explore a few notable ones.
The first algorithm is the Backtracking algorithm, which has a worst-case time complexity of O(N!) <cit.>. It is a classic approach for solving the N-Queens problem. The next algorithm, introduced by Sosic and Gu in 2010, is a linear-time algorithm. It employs a pattern-based approach to produce at least one unique solution for all values of N greater than 3. The time complexity of the Sosic and Gu algorithm is O(N), a significant improvement over the Backtracking algorithm's time complexity <cit.>. Over the years, many efforts have been made to solve the N-Queens problem in the quantum regime. A quantum-inspired differential evolution algorithm for solving the N-Queens problem was proposed by Draa et al. <cit.>. Recent developments include the work of Codognet, where the QUBO model was used to solve the N-Queens problem on the D-Wave quantum annealer <cit.>. We might wonder why there is such persistent effort to solve the challenging N-Queens problem. The reason lies in the diverse applications of the N-Queens problem and its variations across various domains. These applications encompass not only neural networks <cit.>, algorithm benchmarking <cit.>, and parallel memory storage schemes <cit.>, but also extend to fields like VLSI (Very-Large-Scale Integration) design and testing <cit.>, AI and robotics <cit.>, and more. The versatility of the N-Queens problem makes it a valuable resource for addressing complex real-world challenges and advancing the state of the art in numerous areas of science and technology.

§ PREVIOUS WORK

Our solution builds upon the foundations laid by the work of Jha et al. <cit.>, and we would like to acknowledge and credit this work throughout our paper. We will use the notations and representations introduced in <cit.>. Additionally, we will introduce two new solutions inspired by the ideas presented in <cit.>. The authors of this work leveraged quantum properties, such as superposition and entanglement, to tackle the N-Queens problem. In their approach, they proposed a novel way of representing the positions of Queens on a chessboard using an N× N matrix. The squares are numbered from 1 in the top-left corner to N squared in the bottom-right corner, with numbering proceeding from left to right and top to bottom. This concept can be translated into a Quantum Circuit comprising N× N qubits, where each qubit is assigned a number from 1 to N× N. If a specific position on the chessboard is occupied by a Queen, the corresponding qubit is encoded in the excited state |1⟩. All other qubits remain in the ground state |0⟩. This is shown in Figure <ref>. Let's discuss their approach to solving the N-Queens problem. They decomposed this complex problem into three simpler criteria. A set of Queen positions that satisfies all three criteria together is a potential solution to the N-Queens problem.

Row Criteria: The row criterion is satisfied when there is exactly one queen at any position in every row of the chessboard. In quantum circuit terms, this means that exactly one qubit in every row should be in the excited state |1⟩.

Column Criteria: The column criterion is met when there is exactly one queen at any position in every column of the chessboard. In the quantum circuit, this translates to exactly one qubit in every column being in the excited state |1⟩.

Diagonal Criteria: The diagonal criterion presents the most challenging aspect of the problem. The chessboard contains a total of (4N-2) diagonals.
On each of these diagonals, there should be at most one queen, and it's acceptable for many diagonals to have no queens at all. In quantum circuit terms, this implies that on every diagonal, there should be a maximum of one qubit in the excited state |1⟩. Notably, a considerable number of diagonals might not have any qubit in the excited state |1⟩ at any of their positions. Let us now delve into how the authors of <cit.> have addressed these criteria in detail in the following subsections:

§.§ Solving Row Criteria

We begin by exploring all the possible setups for positioning queens on a chessboard, regardless of the number of queens. Each square on the chessboard can either have a queen placed on it or remain empty, resulting in two potential options for each square. Given that there are N × N squares on an N × N chessboard, there are 2^N^2 possible queen placements in total: |00…0⟩ → 1/√(2^N^2)∑_x=0^2^N^2-1|x⟩. Let us introduce a condition: on an N × N chessboard, we aim to place exactly N queens. With this requirement in mind, the search scope narrows down to a combinatorial challenge with “N^2 choose N” potential combinations. In a clever approach, the authors have devised a method to configure the quantum circuit, allowing the outcomes of measurements to span all the potential combinations within the search domain that satisfy the row condition without conflicts. For each group of N qubits, representing a single row on the chessboard, the authors have introduced the concept of implementing what they refer to as an N-Qubit W-State, represented in Eq. (<ref>): |W_N⟩=1/√(N)(|100…0⟩ + |010…0⟩ + … + |000…1⟩). Thus, |R⟩=|W_N⟩ = 1/√(N)(|x_1⟩ + |x_2⟩ + … + |x_N⟩) = 1/√(N)∑_i=1^N|x_i⟩, where the vectors |x_i⟩ (i ∈{1,2,…,N}) range from |100…0⟩ to |000…1⟩ and enumerate all possible single-queen configurations of a particular row. When this process is applied to every row on the chessboard, it effectively ensures that the arrangement adheres to the row criterion without any conflicts. Taking |R_j⟩ as the state representing the j-th row configurations, the tensor product of N of these states, represented as |Ψ_R⟩, is as follows: |Ψ_R⟩ = |R_1⟩⊗|R_2⟩⊗⋯⊗|R_N⟩, where |Ψ_R⟩ represents all the possible combinations of N^2-qubit vectors, each of which fulfills the row criterion. This reduces the subspace, taking it from the general subspace of 2^N^2 states down to N^N. However, it's essential to note that the reference paper <cit.> doesn't provide a precise step-by-step procedure for generating a W-State. Instead, the authors illustrate the algorithm using a 4 × 4 chessboard as an example. In this particular case, they employ a specialized technique to craft a W-State tailored to 4 qubits. It's crucial to understand that this technique cannot be universally extended to accommodate any arbitrary number of qubits. Despite this limitation, the algorithm retains its appeal due to its unique ability to set up the quantum circuit directly in a way that ensures the row criteria are met without the need for complicated post-processing.

§.§ Solving Column Criteria

After successfully getting all the configurations of placing N queens on an N × N chessboard stored in |Ψ_R⟩ that fulfill the row criterion, our exploration is now limited to N^N combinations. To enforce the column criterion, a group of (N-1) ancillary qubits is introduced. These qubits act as indicators, signaling that if all these ancillary qubits are in the excited state |1⟩, the specific combination adheres to the column criterion.
Let's point out the process step by step:

* Apply a Hadamard (H) gate to each ancillary qubit. This transforms the ancillary qubits from their original Z-basis to the X-basis.
* Consider qubit q_i,j, which represents the quantum state |0⟩ or |1⟩ belonging to the i-th row and j-th column. For each j-th qubit present in all the rows, apply a controlled phase (CP) gate controlled by the j-th qubit and applied to the j-th ancillary qubit. Repeat this process iteratively until j reaches (N-1), excluding the last qubit of every row. This process is depicted in Figure <ref>.
* It's implicit that if every one of the (N-1) columns contains just a single qubit in the excited state |1⟩, the remaining queen naturally resides in the last column.
* Implement another Hadamard (H) gate operation on all the ancillary qubits. The outcome of this step ensures that if a column contains multiple qubits in the excited state |1⟩ or none at all, the associated ancillary qubit remains in the ground state |0⟩.

It does not matter if there are multiple qubits in the same column, because the column criterion is fulfilled only if all the ancillary qubits are in the excited state |1⟩. In effect, only when both the row and column criteria are met do all the ancillary qubits transition to the excited state |1⟩. This signifies the fulfillment of the required conditions. In this manner, the quantum circuit proficiently examines the interplay between the row and column criteria, pinpointing valid combinations that satisfy both sets of requirements and efficiently navigating the complex constraints of the N-Queens problem.

§.§ Solving Diagonal Criteria

After successfully fulfilling both the row and column criteria, our search space is further narrowed down to N! distinct states. For an N× N chessboard, the process involves using (N-1) ancillary qubits to verify the column criterion and N(N-1)/2 ancillary qubits for the diagonal criterion. The central concept here revolves around scrutinizing every possible pair of rows to identify diagonally placed excited qubits. We need to include only N(N-1)/2 ancillary qubits because fewer diagonal checks are required once the row and column criteria are already satisfied. Here's how the process unfolds:

* Initialize each ancillary qubit to the excited state |1⟩.
* The diagonal checks are executed using Toffoli gates, with each diagonal check corresponding to one Toffoli gate. For every pair of rows, select the relevant ancillary qubit associated with that pair.
* For each pair of rows, identify the pairs of qubits situated diagonally across the chosen rows. Apply a Toffoli gate to the ancillary qubit representing the diagonal check for this pair of rows, controlled by the two qubits of each diagonal pair. Repeat this procedure for all pairs of diagonal qubits stemming from the selected rows.
* Iterate through every pair of rows, performing the same Toffoli gate operation on the corresponding ancillary qubit representing the current diagonal check. If any diagonally placed pair of qubits is excited, it will transition the related ancillary qubit back to the ground state |0⟩.

Following these operations, if all the ancillary qubits designated for the diagonal criterion remain in the excited state |1⟩, then the states that meet this condition are the ones that satisfy the diagonal criterion.
In summary, if both the (N-1) ancillary qubits for the column criterion and the N(N-1)/2 ancillary qubits for the diagonal criterion remain in the excited state |1⟩, the resulting states (excluding the ancillary qubits themselves) represent the solutions to the N-Queens problem. These states signify the valid positions for placing N queens on an N× N chessboard, ensuring that no two queens threaten each other.

§ IMPROVEMENTS TO THE PREVIOUS SOLUTION

In this work, we have improved upon the complexity of the existing algorithm by including efficient preparation of W_N-states and introducing the use of dynamic circuits.

§.§ Generalized W_N States

The concept of using W_N states to initialize a quantum circuit in order to satisfy the row criterion was previously introduced. This was exemplified with a 4× 4 chessboard, which necessitates the creation of a 4-qubit W-state, a relatively straightforward task. However, the methods employed for preparing 4-qubit W-states are not easily adaptable for generating W-states for varying numbers of qubits. Subsequently, it was realized that creating W-states for 2^K qubits, where K is a whole number, is comparatively simpler than devising methods for arbitrary whole numbers. As a solution, a more comprehensive approach was found in Cruz's work <cit.>, detailing a generalized method for preparing N-qubit W-states, where N can be any whole number. To make the algorithm more practical and extensible, the aforementioned method for generating W-states for any number of qubits was integrated. Furthermore, there was an advancement in the process of filtering states that satisfy both row and column criteria. In the previous paper, this involved transitioning the ancillary qubits from the Z-axis to align with the X-axis, then executing controlled phase flip operations on these ancillary qubits, controlled by the qubits of the corresponding column. The ancillary qubits would then be switched back to the Z-axis configuration. This improvement in the approach to satisfying the column criteria refined the algorithm and enhanced its practical applicability.

§.§ Removing the Phase operation from the Column criteria

An alternative approach involves using a controlled bit flip gate instead of a controlled phase flip gate. This change eliminates the need to switch between the X-basis and Z-basis configurations. This alteration decreases the number of gates by 2 for each ancillary qubit dedicated to the column criterion. Additionally, the controlled bit flip gate is simpler to implement and offers improved noise resilience on real quantum computers. In the Qiskit framework, we implemented the controlled bit flip gate using the CX gate (CNOT gate), and the controlled phase flip gate using the CZ gate.

§.§ Introducing Dynamic Circuits

When the previous paper was proposed, dynamic circuit features were not yet introduced in any of the quantum simulation frameworks. However, today, almost every quantum computing simulation framework supports dynamic circuits. They offer a powerful way to leverage quantum computing capabilities for various applications, allowing exploration of quantum states and behaviors that might not be achievable using static circuits. However, working with dynamic circuits does present challenges, including efficient parameter optimization and managing complexity as circuits become more adaptive.
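To ground the generalized W-state preparation and the gate-based checks described above, a minimal Qiskit sketch is given below. It uses one standard linear-depth W-state construction; the exact circuit of Cruz et al. and our implementation may differ, and the row-major register layout, register names, and helper functions are our own illustrative assumptions.

```python
import numpy as np
from qiskit import QuantumCircuit, QuantumRegister

def prepare_w_state(qc, row):
    """Linear-depth W-state over the qubits of one board row."""
    n = len(row)
    qc.x(row[0])                                  # start from |100...0>
    for k in range(n - 1):
        theta = 2 * np.arccos(np.sqrt(1.0 / (n - k)))
        qc.cry(theta, row[k], row[k + 1])         # split the excitation amplitude
        qc.cx(row[k + 1], row[k])                 # shift the excitation along the row

def column_checks(qc, rows, anc):
    """CX variant of the column check: ancilla j flips once per queen in
    column j, so (given one queen per row) it ends in |1> exactly when
    that column holds a single queen."""
    n = len(rows)
    for j in range(n - 1):                        # the last column is implied
        for i in range(n):
            qc.cx(rows[i][j], anc[j])

def diagonal_checks(qc, rows, anc):
    """Toffoli-based diagonal check: one ancilla per pair of rows, started
    in |1> and flipped to |0> if that pair's queens share a diagonal."""
    n = len(rows)
    idx = 0
    for r1 in range(n):
        for r2 in range(r1 + 1, n):
            qc.x(anc[idx])                        # initialize this check to |1>
            d = r2 - r1                           # diagonal offset between the rows
            for c in range(n):
                for c2 in (c - d, c + d):
                    if 0 <= c2 < n:
                        qc.ccx(rows[r1][c], rows[r2][c2], anc[idx])
            idx += 1

n = 4
board = QuantumRegister(n * n, "q")
col_anc = QuantumRegister(n - 1, "c")
diag_anc = QuantumRegister(n * (n - 1) // 2, "d")
qc = QuantumCircuit(board, col_anc, diag_anc)
rows = [[board[i * n + j] for j in range(n)] for i in range(n)]
for row in rows:
    prepare_w_state(qc, row)
column_checks(qc, rows, col_anc)
diagonal_checks(qc, rows, diag_anc)
```

Measuring all ancillas then post-selects the basis states in which every ancilla reads 1, i.e., the N-Queens solutions.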
Table <ref> presents a summary of the introduction years and versions for various quantum computing frameworks that support dynamic circuits. Here, we have utilized dynamic circuits in a more clever manner. We've designed the circuits such that only those states which satisfy the column criteria proceed to construct the gates needed to check the diagonal criteria. As a result, only N! states are used to build the circuit for verifying the diagonal criteria. All other states lead to the result |0⟩, since no measurement is performed for those trivial states. By incorporating dynamic circuits, we significantly reduce both the time required and the complexity of the gates. This reduction stems from the elimination of time and operations spent on trivial states. This addition serves as a substantial improvement by effectively harnessing the advantages of dynamic circuits.

§ NOVEL ALGORITHMS

We have introduced two new and innovative algorithms for solving the N-Queens problem. They employ the same representation as introduced in the previous paper. We have provided comprehensive explanations for both of these algorithms in the following sections, along with relevant illustrations and examples. Sections <ref> and <ref> delve into the details of these novel algorithms, showcasing their unique approaches to addressing the N-Queens problem.

§.§ Direct Column Algorithm

This quantum algorithm addresses both the row criteria and column criteria concurrently. It starts by configuring the quantum circuit with combinations that satisfy both these conditions. This strategic initialization results in a substantial decrease in the search space, ultimately reaching N! combinations. Remarkably, this significant reduction is achieved without the requirement for additional ancillary qubits to verify the column conditions, a necessity in previous algorithms. Following this accomplishment, the next step entails applying the same procedure for assessing the diagonal criteria, using ancillary qubits to check the diagonals, with dynamic circuits yielding only the possible configuration states. However, it is important to note a key drawback associated with this approach: as the number of input queens increases, the complexity of the quantum circuit grows exponentially. This expansion in complexity is directly linked to the quantity of input queens and can pose challenges in terms of scalability and implementation. While this approach provides a remarkable reduction in the search space and simplifies the solution process by not requiring ancillary qubits for column checks, the trade-off lies in the exponential growth of circuit complexity. This algorithm shines for smaller instances of the N-Queens problem, but its feasibility diminishes as the problem size increases due to the rapidly escalating quantum circuit complexity.

§.§ Quantum Backtracking Algorithm

The inspiration from the well-known classical backtracking algorithm is evident in the naming of this algorithm. This quantum version closely emulates the principles of the backtracking algorithm but within the context of a quantum circuit. However, the results obtained here encompass not just a set of solutions but also non-solution states. Filtering these non-solution states requires only a single ancillary qubit. If this ancillary qubit is in the excited state |1⟩, the corresponding combination is included in the solution set.
If progression to the subsequent row becomes impossible because all qubits have been disregarded, the algorithm employs backtracking by returning to a qubit in the same row that was previously considered and involved in creating a collective W-State. The potential solution combinations are those wherein a Toffoli gate can activate a qubit from the last row. To validate this, a single ancillary qubit is utilized, and N controlled-NOT gates are applied. Each gate is controlled by a qubit from the last row. To manage the arrangement of gates, both a classical stack and a quantum stack are required. The use of classical computing resources becomes necessary for this computational aspect.

§ CONCLUSION

In this work, we have demonstrated the mapping of the N-Queens problem onto a quantum computer. Improvements are made over previous works by defining the exact structure of the W-state preparation circuit and by including dynamic circuits to further reduce circuit complexity. Only valid solution states are measured, while other trivial states are cancelled out by the dynamic circuit. Two novel algorithms are presented that are inspired by the present work, one being the controlled W-state approach and the other based on existing techniques like the backtracking algorithm, which introduces a novel approach that holds value beyond its immediate application. Although it incorporates classical computational elements, it underscores the potential of using quantum algorithms to solve the N-Queens problem. Hence, future work on showing decreased time complexity for this problem using qubit mapping is a justified prediction. Overall, this algorithm delineates a distinctive approach influenced by classical backtracking techniques, forming a bridge between classical and quantum computing realms, which will pave the way for the development of more efficient ways to solve the N-Queens problem, hence offering valuable insights for further exploration in this dynamic field. | http://arxiv.org/abs/2312.16312v1 | {
"authors": [
"Santhosh G S",
"Piyush Joshi",
"Ayan Barui",
"Prasanta K. Panigrahi"
],
"categories": [
"quant-ph"
],
"primary_category": "quant-ph",
"published": "20231226194205",
"title": "A Quantum Approach to solve N-Queens Problem"
} |
In this paper, we propose an efficient and accurate streaming speech recognition model based on the FastConformer architecture. We adapt the FastConformer architecture for streaming applications through: (1) constraining both the look-ahead and past contexts in the encoder, and (2) introducing an activation caching mechanism to enable the non-autoregressive encoder to operate autoregressively during inference. The proposed model is carefully designed to eliminate the accuracy disparity between training and inference time, which is common for many streaming models. Furthermore, our proposed encoder works with various decoder configurations, including Connectionist Temporal Classification (CTC) and RNN-Transducer (RNNT) decoders. Additionally, we introduce a hybrid CTC/RNNT architecture which utilizes a shared encoder with both a CTC and an RNNT decoder to boost accuracy and save computation. We evaluate the proposed model on the LibriSpeech dataset and a large-scale multi-domain dataset and demonstrate that it can achieve better accuracy with lower latency and inference time compared to a conventional buffered streaming model baseline. We also show that training a model with multiple latencies can achieve better accuracy than single-latency models while enabling us to support multiple latencies with a single model. Our experiments also show that the hybrid architecture not only speeds up the convergence of the CTC decoder but also improves the accuracy of streaming models compared to single-decoder models. Streaming ASR, FastConformer, Conformer, CTC, RNNT

§ INTRODUCTION

Many of the traditional end-to-end streaming automatic speech recognition (ASR) models use auto-regressive RNN-based architectures <cit.>, as we don't have access to all the future speech in streaming mode. Offline ASR models can potentially use the global context, while streaming ASR models need to use a limited future context, which degrades their accuracy compared to offline models. In some streaming approaches, offline models are used for streaming, which is another source of accuracy degradation, as there is an inconsistency between offline training and streaming inference. The accuracy gap between streaming and offline models can be reduced by using large overlapping buffers where left and right contexts are added to each chunk of audio; however, this requires significant redundant computations for the overlapping segments. In this paper, we propose an efficient and accurate streaming model based on the FastConformer <cit.> architecture, which is a more efficient variant of Conformer <cit.>. Our proposed approach works with both the Conformer and FastConformer architectures, but we performed our experiments with just FastConformer as it is more than 2X faster than Conformer. We also introduce a hybrid CTC/RNNT architecture with two decoders, CTC <cit.> and RNNT <cit.>, sharing an encoder. It not only saves computation, as a single model is trained instead of two separate models, but also improves the accuracy and convergence speed of the CTC decoder. We propose a caching mechanism by converting the FastConformer's non-autoregressive encoder into an autoregressive recurrent model during inference, using a cache for activations computed from previous timesteps.
A cache stores the intermediate activations, which are reused in future steps. Caching removes the need for any buffer or overlapping chunks, thereby avoiding unnecessary duplicate computation. It drastically reduces the computation cost when compared to traditional buffer-based methods. Note that the model is still trained efficiently in non-autoregressive mode, similar to offline models. The model also has limited right and left contexts during training to maintain consistent conditions between training and streaming inference. This consistency helps to significantly reduce the accuracy gap between offline inference and streaming inference. Additionally, as the changes are limited to the encoder architecture, the proposed approach works for both FastConformer-CTC and FastConformer-Transducer (FastConformer-T) models. We evaluate the proposed streaming model on the LibriSpeech dataset and a large multi-domain dataset and show that it outperforms buffered streaming approaches in terms of accuracy, latency, and inference time. We also study the effect of the right context on the trade-off between latency and accuracy. In another experiment, we evaluate a model trained with multiple latencies, which can support several latencies within a single model; we show that it can achieve better accuracy than models trained with a single latency. Additionally, we show that our hybrid architecture can achieve better accuracy than single-decoder models with less compute. All the code and models used in the paper, including the training and inference scripts, are open-sourced in the NeMo <cit.> toolkit[https://github.com/NVIDIA/NeMo]. § RELATED WORKS There are a number of approaches that use limited future context in streaming models. The time-restricted methods in <cit.> use masking in each layer to allow a limited look-ahead for each output token. However, these methods are not computationally efficient, since the computations for look-ahead tokens are discarded and must be recomputed for future steps. Another approach is based on splitting the input audio into several chunks. Each output token corresponding to a chunk has access to all input tokens in the current chunk as well as a limited number of previous chunks. This approach is more efficient and accurate than the time-restricted method <cit.>. Some memory-based approaches <cit.> use contextual memory to summarize older chunks into a vector to be used in the subsequent chunks. For example, a streaming Transformer model in <cit.> with an attention-based encoder-decoder (AED) architecture uses a context embedding to maintain a memory state between consecutive chunks. Generally, these techniques are computationally efficient for inference, but they usually break the parallel nature of training, resulting in less robust and efficient training. A number of previous works have adopted Conformer for streaming ASR <cit.>. In <cit.>, the authors developed a unified model which can work in both streaming and non-streaming modes. Yao et al. <cit.> proposed a streaming Conformer that uses a Transformer decoder. Their model supports dynamic look-ahead by training the model with different look-ahead sizes. § CACHE-AWARE STREAMING FASTCONFORMER In our proposed cache-aware streaming FastConformer, the left and right contexts of each audio step are controlled and limited. This enables consistent behavior during both training and inference.
The proposed model is trained in an efficient non-autoregressive manner, but inference is done in an autoregressive, recurrent way. The original FastConformer encoder consists of self-attention, convolution, and linear layers. The linear layers and 1D convolutions with a kernel size of one do not need any context, because their output at each step depends only on that step. However, self-attention and convolutions with a kernel size larger than one do need context, and we need to limit the context of these specific layers to control the context size of the whole model. §.§ Model training We modify the FastConformer model as follows to adapt it to the streaming scenario. We avoid using normalization in the mel-spectrogram feature extraction step, as the normalization procedure relies on statistics that depend on the entire input audio. We make all the convolution layers, including those in the downsampling layers, fully causal. For this purpose, we use padding of size k-1 on the left of the input sequence, where k is the convolution kernel size, and no padding on the right side. From now on, we will drop the “downsampled" prefix and simply refer to the “downsampled input" as “input". We replace all the batch normalization <cit.> layers with layer normalization <cit.>, as the former computes mean and variance statistics from the entire input sequence, whereas the latter normalizes each step of the input sequence independently. There are three approaches to limit the size of the context for self-attention layers: Zero look-ahead: Zero look-ahead means that each step in the sequence has access only to previous tokens (either all past tokens or only a subset of them). This is crucial for low-latency applications. Therefore, all modules need to be causal, including the self-attention layers. We use masking to ignore the contribution of all future tokens in the attention score computation. This results in small latency and inference time but lower prediction accuracy. Regular look-ahead: It has been shown that having access to some future time steps, i.e., a limited look-ahead, can significantly improve the accuracy of an ASR model <cit.>. The simplest approach is to allow a small look-ahead in each self-attention layer <cit.>. Since layers are stacked, the effective look-ahead gets multiplied by the number of self-attention layers as we move deeper in the network, as shown in Figure <ref>(a). For example, in a model with N self-attention layers, where each one has a look-ahead of M, the effective look-ahead of each output token over the input sequence is M × N. The effective look-ahead directly impacts the final latency, since the model needs to wait for M × N time steps before it can make any prediction. The past context in this approach can be any number of tokens, but allowing a larger past context will increase the computation time of each streaming step. Chunk-aware look-ahead: There are two disadvantages to the regular look-ahead. First, the effective overall look-ahead depends on the number of layers having non-zero look-ahead. Thus, the latency can be significant if we use look-ahead in each layer. Even for a reasonably large latency budget, we can only use a small look-ahead size in each layer (which we denote M). For example, for a model with 17 layers, a frame rate of 10 ms, and a subsampling factor of 4, choosing the look-ahead size M=2 results in a latency of 10×4×17×2=1360 milliseconds.
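Both the zero and regular look-ahead schemes amount to a choice of self-attention mask. The sketch below illustrates how such masks can be built (our simplified PyTorch illustration, not NeMo's actual implementation; the chunk-aware mask discussed next differs only in how the allowed positions are computed):

```python
import torch
from typing import Optional

def lookahead_mask(seq_len: int, lookahead: int, past: Optional[int] = None) -> torch.Tensor:
    """Boolean self-attention mask (True = attention allowed).
    lookahead=0 gives the fully causal (zero look-ahead) scheme;
    lookahead=M gives regular look-ahead with M future tokens per layer."""
    q = torch.arange(seq_len).unsqueeze(1)  # query positions, shape (T, 1)
    k = torch.arange(seq_len).unsqueeze(0)  # key positions, shape (1, T)
    allowed = k <= q + lookahead            # at most `lookahead` future tokens
    if past is not None:                    # optionally limit the past context too
        allowed = allowed & (k >= q - past)
    return allowed

causal = lookahead_mask(8, lookahead=0)     # zero look-ahead (fully causal)
regular = lookahead_mask(8, lookahead=2)    # regular look-ahead with M = 2
```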
For this reason, M cannot be much larger for practical applications <cit.>. The other disadvantage of regular look-ahead is the unnecessary re-computation of some tokens during streaming inference. For example, to compute f_k in Figure <ref>(a), the self-attention operation is applied to the blue tokens with a query size of M+1: 1 for the current timestep k and M for the future tokens (more details in Section <ref>). This generates the gray-shaded token f_k+1 along with the desired output f_k shown in yellow. However, we drop f_k+1 generated in this step, as it is not correct due to its dependency on h_k+3, which is not available yet. Therefore, we need to recompute f_k+1, along with other such tokens, across different layers. Chunk-aware look-ahead <cit.> addresses both of the above issues. It splits the input audio into chunks of size C. Tokens in a chunk have access to all other tokens in the same chunk as well as to those belonging to a limited number of previous chunks. In contrast to the effective look-ahead growing with depth in regular look-ahead, there is no such dependency in chunk-aware look-ahead. Due to chunking, the output predictions of the encoder for all the tokens in each chunk are valid, and there is no need to recompute any activation for the future tokens. This results in zero duplication in the compute and makes the inference efficient. While the look-ahead of each token is the same for regular look-ahead by construction, it varies in the range [0, C-1] for the chunk-aware case. The leftmost token in a chunk has the maximum look-ahead, with access to all the future tokens in the chunk, whereas the last token has the least look-ahead, with access to zero future tokens. The average look-ahead of a token in chunk-aware look-ahead is larger than in regular look-ahead, which leads to better accuracy within the same latency budget. §.§ Hybrid architecture We use a hybrid architecture with two decoders, a CTC decoder and an RNNT decoder, to train our models. Both decoders share a single encoder. After training is done, either of the two decoders can be used for inference. The hybrid architecture has the following advantages over single-decoder models: 1) there is no need to train two separate models, which saved significant compute in our experiments, as we performed all experiments for both the CTC and RNNT decoders; 2) it significantly speeds up the convergence of the CTC decoder, which is generally slower than that of RNNT decoders; and 3) it improves the accuracy of both decoders, likely due to the joint training. During training, the losses of the CTC decoder (l_ctc) and the RNNT decoder (l_rnnt) are mixed by a weighted summation as follows: l_total = α l_ctc + l_rnnt, where l_total is the total loss to be optimized, and α is a hyperparameter controlling the balance between these two losses. §.§ Inference with caching In streaming inference, we process the input in chunks. Using a larger chunk size results in higher latency but requires fewer calls to the forward pass through the model. We use chunk size C=M+1, where M is the look-ahead. However, the chunks overlap with a stride of 1 in regular look-ahead, compared with a stride of M+1 and no overlap for chunk-aware look-ahead. The straightforward approach to processing chunks is to pass each chunk along with its effective past context. However, this approach is very inefficient, as there is a huge overlap in the computation of the past context. We propose a caching approach to avoid these recomputations and achieve zero duplication in streaming inference.
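The core of the idea, for a single causal depthwise convolution, can be sketched as follows (a simplified illustration on our part, not the actual NeMo code, assuming a kernel size K > 1; which layers need caches, and how they are updated, is detailed next):

```python
import torch
import torch.nn.functional as F

def streaming_depthwise_conv(chunk: torch.Tensor, weight: torch.Tensor,
                             cache: torch.Tensor):
    """chunk:  (channels, chunk_len) activations of the current streaming step
    weight: (channels, 1, K) depthwise kernel, K > 1
    cache:  (channels, K-1) last inputs from previous chunks (zeros at start)
    Returns the chunk output and the updated cache."""
    x = torch.cat([cache, chunk], dim=-1)        # prepend the cached context
    y = F.conv1d(x.unsqueeze(0), weight, groups=weight.shape[0]).squeeze(0)
    new_cache = x[:, -(weight.shape[-1] - 1):]   # keep only the last K-1 inputs
    return y, new_cache
```

With a zero-initialized cache, the chunk-by-chunk outputs match those of the causal convolution applied to the full sequence at once.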
Normalization, feedforward, and pointwise convolution layers do not need caching, as they do not require any context. However, self-attention and depthwise convolutions with a kernel size greater than 1 do depend on past context. Therefore, caching intermediate activations from the processing of previous chunks leads to a more efficient inference. For each causal 1D depthwise convolution with kernel size K, we use a cache of size C_conv = K-1. This cache contains the activations of the last C_conv steps from the previous chunks. Initially, the cache is filled with zeros for the first chunk. It gets updated at each streaming step, as shown in Figure <ref>. The cache is filled with the outputs g_k-3, g_k-2, g_k-1 of the layer below from the previous streaming step. In the current step, the outputs g_k, g_k+1, g_k+2 from the layer below overwrite the previous values in that part of the cache. The updated cache therefore contains g_k+1, g_k+2, g_k+3 to be used in the next streaming step. Given a batch size of B and a model with L depthwise convolution layers, each having a hidden size of D, we require a cache matrix of size L × B × D × C_conv. Each layer updates the cache matrix by storing the necessary activations at each streaming step. Unlike the fixed-length cache for convolution layers, the cache size for a self-attention layer grows from zero up to the past context size. For self-attention layers with a left context of L_c, the cache is empty in the first streaming step. With every streaming step, a chunk's worth of activations from the input of the self-attention layer is added to the cache, and any extra old values are dropped. Eventually, the cache grows to its full size and contains only the last L_c activations. For example, in Figure <ref>, the cache for self-attention contains only three values h_k-3, h_k-2, h_k-1, since initially the cache was empty and was updated with a chunk of three elements in the previous streaming step. At the end of this step, h_k, h_k+1, h_k+2 are added to the cache, which would make the cache size 6. Therefore, the two oldest values h_k-3, h_k-2 are dropped to maintain the maximum cache size of L_c, here four, to be used in the next streaming step, as shown on the right. For a batch size of B, a model with L self-attention layers, each having a hidden size of D, requires a cache matrix of size L × B × C_mha × D, where 0 ≤ C_mha ≤ L_c. The downsampling module uses strided convolutions and could also benefit from caching. However, due to the small kernel size (typically 3), the cache would be small, and it can be ignored. Instead, we can simply concatenate the last 2 log(D_r)+1 mel-spectrogram feature frames to each chunk, where D_r is the downsampling rate. The decoder of FastConformer-CTC is stateless, while the RNNT decoder consists of RNN layers with states. Therefore, for FastConformer-T, all the hidden states of the RNN layers need to be stored after each streaming step. In the next step, these cached states are used to initialize all RNN layers. By maintaining such caches, the prediction of the network is exactly the same as when the entire audio is processed in a single pass. § EXPERIMENTS We evaluated our proposed streaming approach with the hybrid FastConformer architecture <cit.>. All the results are reported for both the CTC and RNNT decoders, which are denoted as FastConformer-CTC and FastConformer-T, respectively. The parameter α of the hybrid loss is set to 0.3, as it showed the best performance in our experiments.
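In code, the hybrid objective reduces to the weighted sum given above (a minimal sketch; the loss values below are placeholders for the outputs of the two decoder branches):

```python
import torch

def hybrid_loss(l_ctc: torch.Tensor, l_rnnt: torch.Tensor,
                alpha: float = 0.3) -> torch.Tensor:
    """Total loss of the hybrid model: l_total = alpha * l_ctc + l_rnnt.
    l_ctc and l_rnnt are the scalar losses of the CTC and RNNT decoder
    branches, which share a single encoder; alpha = 0.3 worked best here."""
    return alpha * l_ctc + l_rnnt

# usage with placeholder loss values from the two decoder branches
l_total = hybrid_loss(torch.tensor(2.1), torch.tensor(1.4))
```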
We performed all the experiments on models with ≈114M parameters, following the same configuration as in <cit.>. Experiments are done on two datasets: 1) LibriSpeech (LS) with speed perturbation of 10% <cit.>, and 2) NeMo ASRSET 3.0. NeMo ASRSET is a large multi-domain dataset, a collection of publicly available speech datasets with a total size of 26K hours. All models are trained for at least 200 epochs with an effective batch size of 2048 for LibriSpeech and 4096 for NeMo ASRSET 3.0. SentencePiece <cit.> with byte-pair encoding (BPE) is used as the tokenizer, with a vocabulary size of 1024; the tokenizers are trained on the train set of each training dataset. We trained the models with the AdamW optimizer <cit.> with a weight decay of 0.001 and the Noam scheduler <cit.> with a coefficient of 5.0. We used checkpoint averaging of the best five checkpoints, based on the WER of the validation sets, to obtain the final models. Mixed-precision training with FP16 <cit.> is used for most of the experiments to speed up the training process. All the average latencies in this paper refer to the algorithmic latency induced by the encoder (EIL) introduced in <cit.>. It is calculated as the average time needed for each word to be predicted by the model, ignoring the inference time of the neural network. We used FastEmit <cit.> for the RNNT loss, with λ of 0.005, to prevent the model from delaying its predictions. FastEmit proved to be very effective and crucial for improving the accuracy of the streaming models, for both the RNNT and CTC decoders. This positive cross-decoder effect on the CTC decoder is another advantage of the hybrid architecture. §.§ Streaming vs offline models In this experiment, we compare different cache-aware streaming models with offline models and buffered streaming. All models are trained on LS, and the results of the evaluations on the test-other set of LS are reported in Table <ref>. The offline models are trained with unlimited context over the entire audio. We evaluated and report the performance of these models in both full-context and buffered streaming modes. We use the buffered streaming solution as a baseline, which can be used for streaming inference with models trained in full-context (offline) mode. In this approach, the input is passed chunk by chunk, but in order to get reasonable results at the borders, we add some past and future audio as context to each chunk. The total audio, including the chunk and its contexts, is stored in a buffer. The contexts result in re-computation and wasted compute. In the experiments for buffered streaming, we used a chunk size of 1 second for CTC and 2 seconds for RNNT, with buffer sizes of 4 seconds. The results for the regular and chunk-aware streaming models are selected from the models with an average latency of 1360 ms. Cache-aware models show significantly better accuracy with lower latency while using less computation compared with the buffered approach. While some contexts are added to each chunk in buffered streaming, no such duplication is needed for chunk-aware streaming models. This makes the cache-aware streaming models significantly faster than buffered streaming models. The speed gap can be significantly larger with a larger buffer size or a smaller chunk size. As can be seen, the accuracy of buffered streaming for RNNT models is not as good as for the CTC decoders, even though they use a larger chunk size.
Additionally, in our experiments the performance of buffered RNNT was not robust to the buffer and chunk size parameters, while cache-aware models were more robust and showed better accuracy with lower latency. Moreover, our streaming models show smaller accuracy degradation from the offline model compared to buffered streaming. The accuracy of our streaming model is exactly the same when evaluated in offline and streaming modes, as training and evaluation have the same limited contexts; for the buffered approach, there is an inconsistency between the contexts available during training and inference. Due to the caching mechanism, the total computation for offline inference and streaming inference is also the same for the chunk-aware approach. §.§ Effect of look-ahead size on accuracy We evaluated the effect of different look-ahead sizes on the accuracy of the proposed streaming models on the LibriSpeech dataset. The WERs on the test-other set of LS for six different look-ahead lengths are shown in Table <ref> for the regular and chunk-aware approaches. One disadvantage of regular look-ahead compared to chunk-aware look-ahead is that not every look-ahead size is feasible. For FastConformer models, which use 8× downsampling and a mel-spectrogram window shift of 10 ms, even one token of look-ahead per layer translates into 8×10×17=1360 ms of look-ahead, considering that all the evaluated models have 17 layers. The results show that chunk-aware look-ahead is better than regular look-ahead in terms of accuracy at the same latency. Additionally, it can be seen that a larger look-ahead significantly improves the accuracy of both approaches, which shows the importance of look-ahead for better accuracy at the expense of latency. The average latency in each case is half of the look-ahead size. In the same Table <ref>, we also report the accuracy of the same chunk-aware models trained with a non-hybrid architecture, to show the effectiveness of the hybrid architecture for streaming models. As can be seen, the hybrid variants demonstrate better accuracy compared to the non-hybrid ones. §.§ Large-scale multi-domain training To evaluate the effectiveness of our proposed approach, we evaluated the chunk-aware model on a large multi-domain dataset (NeMo ASRSET 3.0). More details on this dataset can be found in <cit.>. The accuracy of both the cache-aware FastConformer-CTC and FastConformer-T, evaluated on a collection of evaluation sets, is reported in Table <ref>. As expected, the results are similar to the LibriSpeech experiments: higher latency results in higher accuracy, and RNNT-based models are better than their CTC equivalents. As can be seen, the cache-aware streaming models outperform the buffered streaming models on all benchmarks. §.§ Multiple look-ahead training One disadvantage of cache-aware streaming compared to buffered streaming is that each model is trained for a specific latency, so supporting multiple latencies requires training multiple models. To address this shortcoming, we propose to train the streaming model with multiple latencies: for each batch on each GPU, we randomly select a chunk size, which makes the model support different latencies (see the sketch below). To evaluate the proposed approach, we trained a chunk-aware model with multiple latencies and compared the averaged accuracy on all benchmarks to that of models trained with a single latency. The benchmarks are the same as the ones used in Table <ref>.
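A minimal sketch of the per-batch latency sampling just described (the candidate configurations listed are hypothetical placeholders, not the exact sizes we trained with):

```python
import random

# candidate (chunk_size, left_chunks) configurations -- placeholder values
LATENCY_CONFIGS = [(2, 16), (4, 8), (8, 4), (16, 2)]

def sample_latency_config():
    """Draw one look-ahead configuration at random for each batch on each GPU,
    so that a single model learns to operate at several latencies."""
    return random.choice(LATENCY_CONFIGS)

chunk_size, left_chunks = sample_latency_config()
# the attention masks and caches for this batch are then built with these sizes
```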
The multi-look-ahead model may need more training steps to achieve the same accuracy as a model trained with a single latency. The results are reported in Table <ref> for both the CTC and RNNT decoders. The multi-look-ahead model even shows better accuracy than single-look-ahead models, while just one model is trained for multiple latencies. Training on multiple look-aheads has helped the model become more robust and even achieve better accuracy in some cases. § CONCLUSION We proposed a streaming ASR model based on FastConformer in which the non-autoregressive encoder is converted into an autoregressive recurrent model during inference. This is done by using an activation cache to keep the intermediate activations that are reused in future steps. The caching drastically reduces the computation cost compared to traditional buffer-based methods, while the model is still trained in non-autoregressive mode. We evaluated our proposed model on LibriSpeech and a large multi-domain dataset and showed that the proposed model outperforms buffered streaming in terms of accuracy, inference time, and latency. We also introduced a hybrid CTC/RNNT architecture to train the streaming models, which not only saved compute but also improved the accuracy. Additionally, our experiments showed that a model trained with multiple latencies can achieve even better accuracy than models trained with a single latency. | http://arxiv.org/abs/2312.17279v1 | {
"authors": [
"Vahid Noroozi",
"Somshubra Majumdar",
"Ankur Kumar",
"Jagadeesh Balam",
"Boris Ginsburg"
],
"categories": [
"cs.CL",
"eess.AS"
],
"primary_category": "cs.CL",
"published": "20231227210426",
"title": "Stateful FastConformer with Cache-based Inference for Streaming Automatic Speech Recognition"
} |
Facultad de Ciencias, Universidad de la República, Igua 4225, Montevideo 11400, Uruguay Transient dynamics in viscoelastic fluids exhibit notable differences from their Newtonian counterparts. In this work we study the changes in the formation and decay of vortices in viscoelastic fluids due to degradation caused by shear stress. To this end, we performed long-duration experiments with solutions of polyacrylamide confined between coaxial cylinders. The azimuthal velocity, acquired through digital particle velocimetry, shows oscillations before reaching a steady state. The vortex development is characterized by the overshoot, which represents the difference between the maximum velocity and the stationary velocity. Similarly, during the vortex decay, the azimuthal velocity changes direction, and the relevant parameter is the undershoot, representing the maximum velocity of the transient reverse flow. The experimental results show that the overshoot and undershoot are closely related to the changes in the viscoelastic properties caused by mechanical degradation. This effect opens new perspectives in the characterization of viscoelastic fluids. The effect of mechanical degradation on vortex formation and decay in viscoelastic fluids Renzo Guido, Luis G. Sarasua, Arturo C. Martí January 14, 2024 ========================================================================================== § INTRODUCTION Flows induced by viscoelastic fluids hold significant academic and practical importance due to their involvement in numerous industrial and technological applications <cit.>. Unlike Newtonian fluids, the rheological properties of polymer solutions are determined by a larger number of parameters <cit.>. Furthermore, polymeric fluids are subject to the phenomenon of degradation, an irreversible transformation resulting from chemical or physical mechanisms <cit.>. Mechanical degradation, specifically, occurs when the applied force exceeds the polymer chain's limit. Consequently, the chain breaks down, leading to the formation of new polymers with reduced molecular weights. These alterations in the polymer structure engender changes in the rheological properties of viscoelastic fluids. In this study, we investigate the impact of mechanical stress on the transient dynamics of polymeric fluids. To accomplish this, we conducted long-duration experiments utilizing polyacrylamide solutions confined between coaxial cylinders, with a specific focus on analyzing the transient effects during the initiation and decay of vortical flows. The transient dynamics of viscoelastic flows exhibit interesting contrasts with their Newtonian counterparts, behaving sometimes as expected and sometimes in a counter-intuitive way <cit.>. In our previous research <cit.>, experimental results in cylindrical geometries revealed an oscillatory behavior of the azimuthal velocity, acquired through digital particle velocimetry, before reaching the stationary state. Inspired by control theory <cit.>, this phenomenon is characterized by means of the overshoot parameter, also known as the maximum peak, which quantifies the relative difference between the maximum velocity and the stationary velocity.
Similarly, during the vortex decay, the azimuthal velocity undergoes a reversal in direction, and the pertinent parameter is referred to as the undershoot, denoting the maximum negative velocity of the transient reverse flow. The experimental results reported in <cit.> revealed that the overshoot and undershoot variations are influenced by the elastic component and the solvent viscosity. In the present work we show that the transient vortex dynamics in viscoelastic fluids can be related to mechanical degradation, and we present a model linking these phenomena. The rest of the paper is organized as follows. In the next section we present the experimental setup. Section <ref> introduces the experimental results, which are discussed in Section <ref>. Finally, in Section <ref> we present the conclusions. § EXPERIMENTAL SETUP AND WORKING FLUID The experimental setup, schematized in Fig. <ref>, comprises two concentric cylinders. The outer cylinder (D = 83.30 ± 0.05 mm) remains stationary, while the inner cylinder (d = 9.50 ± 0.05 mm) is capable of rotation. The rotational velocity Ω of the inner cylinder is controlled by a direct-current (dc) motor, capable of achieving velocities of up to 6.5 rad/s. The height of the fluid column is denoted as h_1 = 90 mm, and the laser sheet, positioned horizontally, is located at h_2 = 70 mm. The bottom lid is fixed, and the upper surface of the fluid is free, exposed to atmospheric pressure. The fluid employed in this study was a shear-thinning solution of polyacrylamide (92560-50G Sigma-Aldrich) in a mixture of water (70%) and glycerin (30%). The water, glycerin, and polyacrylamide were mixed for 14 days with a magnetic stirrer Thermo Scientific (RT Basic-12) until the mixture was uniform. It is important to highlight that the properties of this mixture are not solely dependent on its composition but are also influenced by the agitation time of the polyacrylamide. §.§ Degradation process and fluid characterization To investigate the effects of fluid degradation on its transient dynamics in our experiments, the fluid underwent additional agitation in the magnetic stirrer for 21 days. During this time, at days 5, 14, and 21, the fluid was placed in the cylindrical container to perform the start-up and decay rotating-flow experiments. Samples of the fluid were taken at each stage for characterization, to measure the changes in the fluid properties due to degradation. The rheological properties of the samples were assessed using an Anton Paar Physica MCR 301 rheometer at a temperature of 20^∘C. Table <ref> presents a comprehensive summary of the fluid characteristics. The sample subjected to a degradation period of 5 days was not characterized; however, subsequent analysis demonstrates that its response closely resembles that of the original sample. To characterize the rheological behaviour of each sample, the Carreau model, expressed as η(γ̇) = η_0[1+(λγ̇)^2]^(n-1)/2, was employed. By fitting the data to this model, we obtained the characteristic time λ and the zero-shear-rate viscosity η_0 for each sample (a sketch of such a fit is given below). The Deborah number, defined as De = λ U / R, where R represents the radius of the inner cylinder and U denotes the velocity at its surface (thus De = λΩ), was computed. Additionally, the elasticity number, defined as E = De/Re = λν_0/R^2, which is independent of the angular velocity, was calculated.
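For illustration, the fit can be reproduced with a standard least-squares routine (a sketch using synthetic points generated from the model itself, not our measured rheometry data):

```python
import numpy as np
from scipy.optimize import curve_fit

def carreau(gamma_dot, eta0, lam, n):
    """Carreau-type model from the text: eta = eta0*(1 + (lam*gamma_dot)^2)^((n-1)/2)."""
    return eta0 * (1.0 + (lam * gamma_dot) ** 2) ** ((n - 1.0) / 2.0)

shear_rate = np.logspace(-1, 3, 30)               # 1/s, synthetic sweep
viscosity = carreau(shear_rate, 0.5, 2.0, 0.6)    # fabricated "measurements"
(eta0, lam, n), _ = curve_fit(carreau, shear_rate, viscosity, p0=(1.0, 1.0, 0.5))
# eta0 is the zero-shear-rate viscosity; lam is the characteristic time used in De
```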
Notably, both λ and η_0 exhibited a decrease with mechanical degradation, attributed to the reduction in the average chain length, consequently resulting in diminished elasticity numbers. The acquisition of experimental velocity fields involved the introduction of neutrally buoyant, nearly spherical polyamide particles (Dantec Dynamics, Denmark) into the viscoelastic fluid. These particles, with an average diameter of 50 μm, possessed a Stokes number of approximately 10^-6, indicating their ability to closely trace the flow motion. The recording of the flow dynamics was accomplished with a digital camera (Pixelink, model PL-B7760) operating at a frame rate of 15 fps, facilitated by a mirror setup. The illumination source employed was a green laser (LaserGlow, model LSR-0532-PFH-005500-05n), with the particles reflecting its light as they followed the fluid motion. For the measurement of the velocity field in Cartesian coordinates, the Digital Particle Image Velocimetry (DPIV) technique was employed. To determine the azimuthal component v_θ, each grid point was categorized based on its distance from the rotation axis, partitioned into rings with a thickness of 2.05 mm. At each time instance, the average azimuthal velocity and its corresponding standard deviation were computed, yielding v_θ = v_θ(r,t), as depicted in Fig. <ref>. The procedure in each experiment is as follows. The fluid is placed in the cylindrical container and allowed to stand for a reasonable period of time. The motor that controls the inner cylinder is started and quickly reaches a steady rotational speed denoted as Ω. This sudden change in rotational velocity serves as a step input. Subsequently, the fluid undergoes the development of a vortex flow, characterized by a peak azimuthal velocity, followed by a gradual decay towards a stationary velocity field. After the fluid has developed the vortex flow, we allow a short interval to elapse. Then, the motor is stopped and the inner cylinder ceases its rotation abruptly, leading to the decay of the vortex flow. We observe that the angular velocity of the fluid decreases, but when it reaches zero, instead of stopping completely, the fluid changes its direction of rotation and for a few moments rotates in the direction opposite to the initial one. The time evolution of the velocity during this whole process is shown in Fig. <ref>. This experimental procedure was systematically repeated, with each subsequent measurement featuring an increment in Ω, ranging from 1.9 to 5.6 rad/s. § RESULTS Consistent with our prior investigations, we conducted an analysis of the transient behavior in terms of the overshoot and undershoot parameters. The overshoot, or maximum peak, was quantified using the expression M_p = (v_θmax - v_θst)/v_θst. It represents the relative difference between the maximum azimuthal velocity attained during the development of the vortex flow and the steady-state azimuthal velocity. Additionally, we assessed the undershoot, or maximum negative peak, denoted as M_np = -v_θmin/v_θst. This parameter captures the maximum negative value of the azimuthal velocity relative to the steady-state azimuthal velocity. Importantly, these parameters can be computed for each radial position and across the various angular velocities, Ω, employed in our experiments. In Fig. <ref>, the relationships between the maximum, minimum, and steady-state azimuthal velocities and the rotational velocity of the inner cylinder, Ω, are depicted.
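Once v_θ(t) is extracted at a given radius, both parameters reduce to a few lines (a sketch; here the stationary velocity v_st is assumed to be estimated separately from the plateau of the start-up stage):

```python
import numpy as np

def overshoot(v_startup: np.ndarray, v_st: float) -> float:
    """M_p = (v_max - v_st)/v_st from the start-up stage of the flow."""
    return (np.max(v_startup) - v_st) / v_st

def undershoot(v_decay: np.ndarray, v_st: float) -> float:
    """M_np = -v_min/v_st from the decay stage (v_min < 0 for the
    transient reverse flow observed after the motor stops)."""
    return -np.min(v_decay) / v_st
```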
The velocities exhibit an overall increase in magnitude as Ω increases. Additionally, the dimensionless parameters derived from the velocity data show that the overshoot parameter, M_p, becomes larger with increasing Ω, indicating a greater deviation from the steady-state azimuthal velocity. In contrast, the undershoot parameter, M_np, exhibits a negative dependence on Ω, with its magnitude decreasing as the rotational velocity increases. These findings provide valuable insights into the complex flow behavior observed in the experimental system. Next, we analyze the maximum, stationary, and minimum velocities for the fluid samples with four different degrees of degradation, displayed in Figs. <ref>, <ref>, and <ref>, respectively. The maximum velocity appears to be the same despite the degradation of the viscoelastic fluid, which can be interpreted as all samples initially exhibiting the same response to the abrupt start of rotation of the inner cylinder. This behavior is noteworthy because the maximum peak is due to the elastic effects of the fluid, and although the elasticity number decreases with degradation, the maximum velocity does not change considerably. On the other hand, in Figs. <ref> and <ref> a notable difference can be seen in both the stationary and minimum velocities for the different degrees of degradation. The change in the fluid response after 14 and 21 days of degradation is evident: increasing degradation increases the stationary velocity. Nevertheless, the fluid response experiences almost no change after only five days of degradation. For the minimum velocity, we found that the original fluid had a larger minimum velocity (in absolute value) compared to the others, despite the fact that it had a lower stationary velocity. § DISCUSSION To gain an understanding of vortex formation and decay, we calculated the previously defined dimensionless parameters as functions of the radial coordinate r and the rotational speed of the inner cylinder Ω. The formation of the vortex is summarized by the overshoot parameter, which we show as a function of these parameters in Fig. <ref>. As can be seen in the figure, the degradation shows a negative relation with the overshoot, and the original fluid and the 5-day degradation are almost identical. As expected, M_p increases with Ω and r. On the other hand, the decay of the vortex is characterized by M_np, plotted as a function of the same parameters and different degrees of degradation in Fig. <ref>. Again, the degradation presents a negative relation with this dimensionless parameter, and in this case M_np decreases with the distance r. Next, we study the parameters that characterize the transient dynamics of the vortex in relation to the changes in the rheological properties caused by the degradation process. As shown in Fig. <ref>, the decay of the characteristic time, λ, can be adequately described by an exponential function λ(t) = A + B exp(-t/τ_λ), with τ_λ = 14.4 days. The constant term A in this expression indicates that, in agreement with the literature <cit.>, the polymers are not totally degraded when the agitation is extended under a constant mixing regime. Instead, the polymer length distribution attains a stationary profile whose most probable value is non-zero.
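The exponential fit just quoted is straightforward to reproduce (a sketch: the λ values below are illustrative placeholders, not the measured ones; the initial guess for τ is set near the reported 14.4 days):

```python
import numpy as np
from scipy.optimize import curve_fit

def lam_decay(t, A, B, tau):
    """lambda(t) = A + B*exp(-t/tau); the nonzero offset A reflects the fact
    that the polymers are not fully degraded by continued stirring."""
    return A + B * np.exp(-t / tau)

t_days = np.array([0.0, 5.0, 14.0, 21.0])    # characterization days
lam_s = np.array([2.0, 1.6, 1.0, 0.8])       # placeholder lambda values (s)
(A, B, tau), _ = curve_fit(lam_decay, t_days, lam_s, p0=(0.5, 1.5, 14.4))
```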
This nonzero offset ensures that the elastic behavior does not disappear for long agitation times. On the other hand, we used the overshoot parameter to inspect the evolution of the flow. It should be noted that the overshoot depends on the radial coordinate and the rotational velocity. We therefore considered the temporal behavior of the overshoot for different values of r and Ω. To do this, we introduced a normalized overshoot M_p', defined as M_p' = M_p(t)/M_p(0), where M_p(0) represents the value of the overshoot at the initial stage. The right panel of Fig. <ref> shows the values of the averaged M_p' as a function of time. As evidenced by this figure, the decay of M_p' for different values of r and Ω is very similar. Since, by definition, M_p = M_p(0) M_p' and M_p(0) is a function of r and Ω, from the above considerations it follows that the overshoot can be written, for the considered range of parameters, as: M_p(r,Ω,t) = f(r,Ω) M_p'(t). This expression is a consequence of the fact that the evolution of M_p' is almost the same for all the considered experiments. On the other hand, the evolution of the overshoot M_p' cannot be described by a simple exponential function, as is the case for the characteristic time. Thus, we fitted its variation with a function of the form M_p' = C + D exp(-(t/τ_M)^a). The best fit was obtained with a = 4 and τ_M = 13.8 days. Remarkably, the decay characteristic times τ_λ and τ_M obtained from the fits of λ and M_p' are very similar (the difference is less than 4%). After obtaining the decay in time of M_p' and λ, we related these two quantities. The result is shown in the left panel of Fig. <ref>. As can be seen, the value of M_p' is weakly dependent on λ for high values of λ, but it decreases quickly for small values of λ. This explains why the values of M_p exhibit a slow decay at initial times while decreasing faster at later times. We also considered the relationship between M_p' and the elasticity number E, which compares De and Re. As mentioned previously, in the present system E = νλ/R^2. Similarly to the procedure applied to the evolution of λ, we obtained the decay of ν from the rheology and then the values of E. In the right panel of Fig. <ref> we show the variation of M_p' with E. From this figure, it can be concluded that the overshoot is useful as a parameter to characterize the properties of the polymeric fluid. § CONCLUSION We considered the transient dynamics of viscoelastic fluids, specifically the formation and decay of vortices in cylindrical containers, in relation to mechanical degradation. The experimental setup consists of two concentric cylinders: the outer one remains stationary, while the inner one can be set in motion or stopped abruptly, so that the fluid inside forms a vortex or the vortex decays, respectively. The fluid dynamics differs from that of a Newtonian fluid, since the azimuthal velocity exhibits oscillations. These oscillations are characterized by the overshoot during formation and by the undershoot during decay; both parameters allow this phenomenon to be characterized. We focused on the relationship between the rheological characteristics of each sample, affected by the mechanical degradation resulting from long stirring periods of several days, which lead to considerable alterations in the rheological properties, and the changes in the overshoot and undershoot. From the experimental results, we observed a decrease in the elastic properties due to degradation.
The drop in the elasticity number of the fluid is due to changes in the mean length of the polyacrylamide chains, which correlates positively with the Deborah number. It should be noted that, although degradation affects the viscoelastic properties of the samples, the maximum azimuthal velocity does not change substantially with degradation time, and the changes in the overshoot are due to the changes in the stationary-state velocity. We obtained that the decay of the characteristic time, λ, can be adequately described by an exponential function λ(t) = A + B exp(-t/τ_λ), with τ_λ = 14.4 days. The presence of the constant term A in this expression indicates that, in agreement with the literature <cit.>, the polymers are not totally degraded by agitation. As a consequence, the relaxation time does not tend to zero as t → ∞. We also obtained that the overshoot parameter M_p can be written as the product of a function of time, which we called the normalized overshoot, and an amplitude that is a function of r and Ω. In light of these results, we determined a relationship between the overshoot and the elasticity number. A potential application of these results would be to characterize viscoelastic fluid properties by means of overshoot measurements. | http://arxiv.org/abs/2312.16345v1 | {
"authors": [
"Renzo Guido",
"Luis G. Sarasua",
"Arturo C. Marti"
],
"categories": [
"physics.flu-dyn"
],
"primary_category": "physics.flu-dyn",
"published": "20231226221159",
"title": "The effect of mechanical degradation on vortex formation and decay in viscoelastic fluids"
} |
Temperature-dependent photoluminescence dynamics of CsPbBr_3 and CsPb(Cl,Br)_3 perovskite nanocrystals in a glass matrix Evgeniya V. Kulebyakina,^a Mikhail L. Skorikov,^a Elena V. Kolobkova,^b,c Maria S. Kuznetsova,^d Matvei N. Bataev,^d Dmitri R. Yakovlev,^a,e,f Vasilii V. Belykh^∗,e Lead halide perovskite nanocrystals (NCs) in a glass matrix combine excellent optical properties with stability against the environment. Spectral and temporal characteristics of the photoluminescence from CsPbBr_3 and CsPb(Cl,Br)_3 NCs in a fluorophosphate glass matrix are measured in the temperature range from 6 to 270 K in order to reveal the factors limiting their quantum yield and recombination dynamics. At low temperatures, the recombination dynamics are characterized by three times on the order of 1 ns, 10 ns, and 1 μs. The relative contributions of the corresponding processes and their times are strongly temperature dependent. The emission intensity decreases with growing temperature, and this effect is stronger in smaller NCs, highlighting the role of the surface states. Model considerations accounting for the NC energy structure and the presence of surface states trapping electrons and holes are provided to describe the experimental observations. We conclude that the photoluminescence dynamics at low temperatures are dominated by carrier recombination and relaxation to shallow traps. At temperatures exceeding 100 K, the dynamics are affected by the activation of carriers to the excited states. § ^a P.N. Lebedev Physical Institute, Russian Academy of Sciences, 119991 Moscow, Russia ^b St. Petersburg State Institute of Technology (Technical University), 190013 St. Petersburg, Russia ^c ITMO University, 199034 St. Petersburg, Russia ^d Spin Optics Laboratory, St. Petersburg State University, 198504 St. Petersburg, Russia ^e Experimentelle Physik 2, Technische Universität Dortmund, 44221 Dortmund, Germany ^f Ioffe Institute, Russian Academy of Sciences, 194021 St. Petersburg, Russia ^∗ Present address: Innolume GmbH, 44263 Dortmund, Germany; E-mail: [email protected] § INTRODUCTION Lead halide perovskite semiconductors have been known for more than a century <cit.>. However, only recently did they emerge as a promising platform for photovoltaic applications <cit.>, which stimulated research in different directions, revealing many advantages of perovskites. Among them are defect tolerance <cit.>, high quantum yield <cit.>, high efficiency of spin orientation by light <cit.>, ease of synthesis, and the possibility to form nanocrystals (NCs) <cit.>. In particular, perovskite NCs can be very promising for light-emitting devices <cit.>. They have a relatively narrow emission spectrum with a central wavelength controlled by the anion composition and NC size. However, colloidal lead halide perovskite NCs have rather poor stability when exposed to air, humidity, elevated temperatures, or intense light. A promising approach to solving this problem is to synthesize all-inorganic lead halide perovskite NCs embedded in glass <cit.>.
Especially suitable is the fluorophosphate glass matrix <cit.>, which offers high chemical resistance to harmful environmental conditions and the ability to introduce high concentrations of halides, also ensuring a high quantum yield of the NCs. For the further development of perovskite-based light-emitting devices, it is crucial to gain physical insight into the processes that limit the quantum yield and radiative recombination rate in perovskite NCs in a glass matrix and to clarify how these parameters depend on the NC size and composition. To this end, it is straightforward to study the spectra and dynamics of the photoluminescence (PL) from NCs as a function of temperature. The temperature-dependent properties of the continuous-wave PL spectra of CsPbBr_3 NCs in a glass matrix at T = 40-240 K were investigated in Ref. <cit.>, where, in particular, an activation-like decrease in the PL intensity and a broadening of the PL line with temperature were observed. In this paper, we present systematic studies of the PL spectra and recombination dynamics down to liquid-helium temperatures for CsPbBr_3 and CsPb(Cl,Br)_3 NCs of different sizes embedded in a fluorophosphate glass matrix. On the basis of a model involving surface traps, we discuss the factors responsible for the decrease in the PL with increasing temperature and for the observed complex PL dynamics. We find that an increase in the NC size enhances their spectral homogeneity and makes the PL intensity less sensitive to temperature. At the same time, with the introduction of Cl, the temperature quenching of the PL becomes more pronounced. § PHOTOLUMINESCENCE UNDER CONTINUOUS-WAVE EXCITATION We show the results for three samples with CsPbBr_3 NCs of different sizes and for one sample with CsPb(Cl,Br)_3 NCs. A summary of the NC parameters is presented in Table <ref>. The PL spectra at a temperature of 6 K for the samples under study are shown in Fig. <ref>a. The energy of the PL maximum is controlled by the band gap of the material, the quantum confinement energy of electrons and holes in the NCs, and the Coulomb interaction energy between the electron and hole (the exciton binding energy). In particular, for the set of samples with CsPbBr_3 NCs, different PL peak energies correspond to different NC sizes D. A decrease in the NC size from 16 to 9 nm for CsPbBr_3 NCs leads to a shift of the PL peak energy from 2.36 to 2.44 eV as a result of stronger quantum confinement. Meanwhile, the introduction of Cl further shifts the PL peak energy to 2.73 eV, as seen from the PL spectrum of CsPb(Cl,Br)_3 NCs with D = 8 nm. It is also apparent that the full width at half maximum (FWHM) of the PL line increases with decreasing NC size. This is related to the fact that the low-temperature PL linewidth is determined by inhomogeneous broadening arising from the spread in the NC sizes. For small values of D, the PL peak energy is more sensitive to variations of D, which results in larger inhomogeneous broadening; see Eqs. (<ref>) and (<ref>) in the discussion section. The linewidth varies from 12 meV for CsPbBr_3 NCs with D = 16 nm up to 55 meV for NCs with D = 9 nm. The energy structure of the NCs becomes more apparent from the PL excitation spectra. Figures <ref>d–<ref>f show the PL intensity for the samples with different NC sizes as a function of the detection energy and excitation energy. The PL has a maximum when the detection energy corresponds to the NC ground state and the excitation energy corresponds to either the ground state or one of the excited states.
For each state, the excitation energy corresponding to the PL maximum increases linearly with the detection energy, as shown by the dashed lines. This is related to the distribution over NC sizes within the ensemble. Different sizes along the PL distribution corresponding to the excitation into the first excited state are marked in Fig. <ref>d. For the sample with the largest average NC size and the smallest inhomogeneous broadening, up to five energy levels are resolved in the PL excitation spectrum (Fig. <ref>g). These levels correspond to states with different orbital quantum numbers, and their energies are well reproduced by the model of a spherical NC with infinite energy barriers presented in the Supplementary Information. The difference between the position of the first excited state determined from the PL excitation spectrum and the exciton energy of 2.32 eV in bulk CsPbBr_3 <cit.> is used to determine the average NC size for a given sample. This method is insensitive to the Stokes shift that reduces the PL peak energy. However, we assume the same exciton binding energy for different NC sizes and quantum-confined states, because its variation is small compared to the quantum confinement energy. With increasing temperature, the PL spectrum is transformed, as shown in Fig. <ref>b for CsPb(Cl,Br)_3 NCs. The energy of the PL peak shifts and the line broadens. Note that the PL spectra in Fig. <ref>b are normalized to their maximal intensity at the given temperature, while the integrated intensity varies with temperature (see below). It is noteworthy that the PL peak energy increases with temperature, in contrast to conventional semiconductors like GaAs. At high temperatures, this increase changes to a slight decrease. The transmission spectrum of the same sample experiences a similar evolution (Fig. <ref>c): the transmission minimum, corresponding to the exciton resonance, shifts to higher energy and broadens with increasing temperature. The temperature dependence of the integrated PL intensity for different samples is shown in Fig. <ref>a. Interestingly, below 40 K the intensity increases with temperature. Then it stays approximately constant in the range of 40–140 K and decreases at still higher temperatures. We attribute the intensity variation with temperature to carrier activation to deep-level surface traps, where the carriers can recombine nonradiatively. This is discussed in detail in the following sections and in the Supplementary Information. We note that the rate of the high-temperature decrease in the PL intensity depends on the NC size and composition. We characterize this rate by the temperature T_1/2 at which the integrated PL intensity decreases to 50% of its maximal value. Figure <ref>c shows the dependence of T_1/2 on the NC diameter D. For the series of CsPbBr_3 samples, T_1/2 increases with D from 180 K for D = 9 nm to 250 K for D = 16 nm. The fact that nonradiative recombination is more pronounced in smaller NCs validates the assumption about the surface origin of the nonradiative centers in these samples. One can also see that the high-temperature decrease in the PL intensity in CsPb(Cl,Br)_3 NCs is significantly faster than in CsPbBr_3 NCs of similar size. This may result from an increased density of traps in NCs containing Cl. The temperature dependence of the PL FWHM for different samples is shown in Fig. <ref>b. The steady increase in the FWHM with T is related to the carrier activation from the ground state to excited states via phonon absorption.
The corresponding activation energies as a function of the quantum confinement energy in the NCs are shown in Fig. <ref>d. § PHOTOLUMINESCENCE DYNAMICS Next, we study the PL dynamics of the perovskite NCs under pulsed excitation. Figure <ref> shows the spectrally integrated PL dynamics of CsPb(Cl,Br)_3 NCs at different temperatures in different time ranges. The dynamics are characterized by several components whose amplitudes and decay times depend on temperature. The fastest component, with a decay time of about 0.3 ns, dominates at low temperatures and disappears at higher temperatures (Fig. <ref>a). The next component is characterized by a decay time of about 10 ns at T = 25 K (Fig. <ref>b). It is weak at low temperatures and dominates at higher temperatures. The decay time of the slowest component is as long as 10 μs at low temperatures (Fig. <ref>c) and drastically shortens with increasing temperature, while its relative amplitude increases. For each temperature, a multiexponential fit was carried out to determine the characteristic decay times. The decay times of the three components as a function of temperature are shown by symbols in Figs. <ref>d,e for CsPb(Cl,Br)_3 and CsPbBr_3 NCs, respectively. It is interesting that the decay time of the slowest component first drastically decreases and then slightly increases with temperature. Qualitatively similar multicomponent PL dynamics with similar temperature dependences of the decay times are observed for the other CsPbBr_3 NC samples. § DISCUSSION Here we outline a systematic picture of the PL properties of the perovskite NCs, i.e., of the broadening and thermal quenching of the PL lines and of the PL dynamics. The most straightforward and expected observation is the increase in the PL peak energy and in the inhomogeneous broadening of the PL line with a decrease in the average size of the NCs (Fig. <ref>). The dependence of the PL energy on the NC size can be evaluated from the model of a spherically symmetric quantum dot with abrupt infinite potential barriers presented in the Supplementary Information, section S1. In this model, the ground-state energy E_1 with respect to the band gap E_g^bulk equals E_1 = 2π^2ħ^2/μ D^2, where μ is the reduced mass of the electron and hole, 1/μ = 1/m_e + 1/m_h. We take the effective masses of electrons and holes in CsPbBr_3 perovskites to be equal, 0.3 m_0 <cit.>. Here we neglect the excitonic effect. Note that, in this model, the separation between the ground and the first excited states is also equal to E_1, and this fact is used to determine the average NC size from the PL excitation spectrum. Then, the inhomogeneous broadening, which determines the PL linewidth Γ_inh at low temperatures and arises from the spread of the NC sizes ΔD, can be evaluated using Eq. (<ref>) as Γ_inh = |∂E_1/∂D| ΔD: Γ_inh = 4π^2ħ^2 ΔD/μ D^3. So, the narrowest PL line is obviously obtained in the largest NCs, emitting at the lowest energies; for 16-nm CsPbBr_3 NCs, the FWHM is as small as 12 meV. It is interesting that the inhomogeneous broadening of the PL line in CsPb(Cl,Br)_3 NCs is a factor of two smaller than that in CsPbBr_3 NCs of similar size. The PL linewidth increases with temperature (Fig. <ref>b). This is explained by the dephasing of the exciton ground state in the NCs caused by phonon-assisted transitions of electrons and holes to higher energy levels.
This behavior is commonly described by the relation Γ(T) = Γ_inh + γ_ph/[exp(E_a/k_BT) - 1]. Here, γ_ph characterizes the strength of the electron–phonon coupling, the energy E_a that determines the Bose–Einstein occupancy factor (exp(E_a/k_BT)-1)^-1 is the energy of the polar longitudinal-optical (LO) phonon mode, and we neglect the contribution from acoustic phonons. This relation is well established for systems with a continuous energy spectrum, where scattering by LO phonons simply limits the lifetime of the radiative excitonic states (see, e.g., <cit.>). This relation also describes systems with a discrete energy spectrum, such as epitaxial and colloidal quantum dots <cit.>. Although no real transitions can generally be induced by LO phonons in this case, dephasing caused by virtual transitions involving excited states (i.e., by the off-diagonal terms of the electron–phonon coupling Hamiltonian) leads to behavior numerically well described by Eq. (<ref>) <cit.>. The perovskite crystal lattice features a great number of optical phonon modes, and it is not a priori clear if this simple formula can be valid in this case. However, a number of studies indicate that only one of the optical-phonon modes of this lattice possesses a large dipole moment and, therefore, is dominant in the Fröhlich carrier–phonon interaction <cit.>. For CsPbBr_3, the energy of this mode is about E_LO^eff = 20 meV <cit.>. Correspondingly, it was found that the temperature dependence of the PL linewidth in bulk and NC perovskite materials can be described by Eq. (<ref>) with E_a of the same order of magnitude (e.g., <cit.>). The PL linewidth in our samples [Fig. <ref>(b)] can also be well fitted with Eq. (<ref>). However, the activation energy E_a is not simply a constant that can be associated with the energy of a specific phonon mode. Only for the largest CsPbBr_3 NCs is E_a close to E_LO^eff, while it is noticeably larger for smaller NCs. The dependence of E_a on the electron–hole pair quantum-confinement energy E_1 in the NCs [the latter is calculated by Eq. (<ref>)] is shown in Fig. <ref>d and, for the series of CsPbBr_3 samples, follows the linear relation E_a = 0.4 E_1. We note that, in a quantum dot with infinite potential barriers, the carrier ground-state energy with respect to the bottom of the potential is almost equal to the separation between the ground and first excited states (see Supplementary Information, section S1). Then, for an electron–hole pair, the energy of the first excited state (when either of the carriers is on the next energy level) is equal to 1.5 E_1 (see Supplementary Information, section S1). Therefore, the value of E_a is close to the separation between the ground and first excited states of an electron–hole pair, which equals 0.5 E_1. The significance and, to that end, the universality of this relation (note that the point corresponding to the CsPb(Cl,Br)_3 sample deviates considerably) is the subject of further investigation. However, it may be speculated that this behavior occurs because there actually exist contributions to the dephasing rate from more than just one phonon mode, and the mode resonant with the energy of transitions from the ground to the first excited state has the greatest impact on Γ(T). Hence, this transition energy enters Eq. (<ref>). The decrease in the PL intensity with increasing temperature (Fig.
The decrease in the PL intensity with increasing temperature (Fig. <ref>a) is a general feature that is observed in many semiconductor structures, including perovskite NCs <cit.> and single crystals <cit.>. It is often attributed to the activation of nonradiative recombination with temperature. However, the latter process is accompanied by a corresponding shortening of the PL decay time. This is not the case in our experiments. For example, for CsPb(Cl,Br)_3 NCs the PL intensity drops by more than an order of magnitude as the temperature is increased from 100 to 270 K (Fig. <ref>a). Meanwhile, the short decay time decreases by only a factor of 3, and the long decay time even increases (Fig. <ref>d).

To understand the observed quenching, first of all we note the dominant role of the NC surface, as the quenching rate becomes higher for smaller NCs. Then we take into account that the rate of nonradiative recombination in NCs in an inhomogeneous ensemble may vary from one NC to another. The PL dynamics in NCs with the slowest nonradiative recombination will still be determined by radiative processes, while the PL in NCs with the fastest nonradiative recombination will decay so rapidly that we cannot detect it. Under reasonable assumptions, discussed below (see also Supplementary Information), most of the NCs will fall into one of these two categories, with the boundary between them shifting with the temperature. Therefore, an increase in temperature will just lead to an increase in the number of NCs that recombine predominantly nonradiatively and do not contribute to PL and, thus, will have no effect on the PL dynamics. This may take place in the following two scenarios, which imply the existence of deep-level surface traps with relatively high potential barriers. As the temperature is increased, one of the carriers may be captured by such a trap. (i) In the first scenario, the carrier captured by a surface trap recombines nonradiatively with the carrier remaining in the core. (ii) In the second scenario, the NCs contain resident carriers that are captured by the surface at higher temperatures and assist the Auger nonradiative recombination of the photoexcited electron–hole pair, which is very efficient for NCs with a sharp confining potential <cit.>. A similar explanation was proposed for the temperature quenching of the quantum yield in CdSe/CdS colloidal NCs in ref. <cit.>, where the PL decay time showed an increase with temperature.

In both scenarios, a NC remains radiative if the rate γ_nr exp(-E_tr/k_BT) of carrier capture by deep-level traps, characterized by a potential barrier E_tr and a high-temperature nonradiative recombination rate γ_nr, is much lower than the radiative recombination rate γ_r; the NC becomes nonradiative when γ_nr exp(-E_tr/k_BT) ≫ γ_r. In the intermediate regime, when neither of the strong inequalities is satisfied, the NC contributes to PL and its PL decay time shortens with an increase in temperature. However, if γ_nr ≫ γ_r, the intermediate regime γ_nr exp(-E_tr/k_BT) ∼ γ_r for a given NC takes place only in a narrow temperature range. The smooth decrease in the PL intensity with temperature occurs because of the spread in the trap potential barrier heights E_tr <cit.>. As the temperature is increased, NCs with larger values of E_tr ∼ T ln(γ_nr/γ_r) become switched off.
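The switch-off criterion can be made quantitative with a one-line estimate, T ≈ E_tr/[k_B ln(γ_nr/γ_r)]. A minimal sketch follows; the barrier height and the rate ratio used below are illustrative (the actual fitted values are given in the next paragraph and in section S2).

import numpy as np

kB = 0.0862  # Boltzmann constant in meV/K

def switch_off_temperature(E_tr_meV, rate_ratio):
    # Temperature at which gamma_nr * exp(-E_tr/(kB*T)) ~ gamma_r,
    # i.e. T ~ E_tr / (kB * ln(gamma_nr/gamma_r))
    return E_tr_meV / (kB * np.log(rate_ratio))

# e.g. a 180-meV barrier with an assumed ratio gamma_nr/gamma_r ~ 1e4
print(switch_off_temperature(180.0, 1e4))  # ~230 K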
The reduction of the PL intensity with temperature is fitted with the model presented in Supplementary Information, section S2 (solid lines in Fig. <ref>a), assuming a Gaussian distribution of trap barrier energies with mean value E̅_tr = 180 meV and spread Γ_tr = 70 meV, while the surface trap density is σ = 0.024 nm^-2 for CsPbBr_3 NCs and 0.077 nm^-2 for CsPb(Cl,Br)_3 NCs. It is shown that the nonradiative recombination rate γ_nr decreases as the third power of the NC diameter. The fit shows good agreement with the experimental data, better than the Arrhenius-like equation that is often used in the literature to fit temperature dependences of the PL intensity (see Supplementary Information) <cit.>.

To explain the complex PL dynamics of the NCs, we take into account the following considerations. In the low-temperature regime, the excited states do not contribute to the PL dynamics, since they are separated from the ground state by several tens of meV. Nevertheless, even at low temperatures the dynamics is rather complicated, which cannot be explained by recombination from the ground state only. The presence of a long component with a lifetime strongly dependent on T (Fig. <ref>d,e) suggests the existence of dark exciton states. A long-lived component demonstrating similar behaviour with temperature was found for CsPbBr_3 and CsPbI_3 NCs and was attributed to the activation from a dark exciton state lying several meV below the bright state <cit.>. The dark state may be attributed to the spin-forbidden exciton state <cit.>. However, it cannot explain the existence of the intermediate component with a decay time of about 10 ns. A dark exciton state was indeed observed in the study of charge-carrier spin dynamics in the same CsPb(Cl,Br)_3 sample as presently investigated <cit.>. Based on an extremely small value of the electron–hole exchange splitting and the fact that the PL lifetime was independent of the magnetic field, this state was attributed to the spatially indirect exciton which forms when one of the carriers is captured by a potential trap at the NC surface. Building upon this interpretation, we assume that shallow-level surface trap states for the electron and for the hole, separated by energy E_s from the respective particle's ground state (Fig. <ref>f), exist in the NC. Thus, the following electron–hole pair states should be considered at low temperatures: (i) both the electron and hole in the ground state of the NC, (ii) both in the respective trap states, and (iii) one carrier in the ground state and the other in the trap state. Since the electron–hole wavefunction overlap is small in the second and third cases, we assume that only state (i) is radiative, and the shortest PL component corresponds to radiative recombination and to the relaxation from this state to the state (iii) with the rate γ_r+γ_sg ≈ γ_r. The intermediate PL component corresponds to the relaxation of either carrier from the NC ground to the surface trap state with the rate γ_sg, while the second carrier is already in the trap. Note that theory predicts two PL components with very close decay times in the intermediate range (red curves in Fig. <ref>d,e), which can hardly be separated in the experiment. Finally, the long component corresponds to carrier activation from the trap to the NC ground state with the time γ_sg^-1 exp(E_s/k_BT) (which leads to recombination whenever both carriers happen to be activated simultaneously). We also mention that the initial increase in the integrated PL intensity with T (Fig. <ref>a) may be related to the nonradiative recombination of carriers in these shallow traps.
Indeed, at low temperatures carriers spend most of their time in trap states (having more chances to recombine nonradiatively) and are activated to the NC ground state at higher temperatures.

Another pronounced feature of the PL dynamics is the increase in the decay time at high temperatures. This is a common feature for systems with a continuous density of states, such as quantum wells <cit.>. It is related to the filling with temperature of the reservoir of nonradiative exciton states with momenta beyond the light cone, while only excitons with momenta within the light cone and, thus, with low energy can recombine radiatively. We note that an increase in the PL lifetime with temperature was also observed in nanocrystals <cit.>. Much like in other systems, we attribute this increase to the activation of carriers to excited states, where their recombination is inhibited. As the ground-state occupancy decreases with temperature, so does the recombination rate.

Based on the above considerations, we build a simple rate-equation model of the PL dynamics that is presented in Supplementary Information. It should be emphasized that the model accounts for the existence of the intermediate decay component only when traps for both types of carriers exist. It gives a reasonable fit to the experimentally determined decay times (see Figs. <ref>d,e).

§ CONCLUSIONS

To conclude, we have studied the optical properties of CsPbBr_3 and CsPb(Cl,Br)_3 NCs in a glass matrix: stationary PL and PL excitation spectra, transmission spectra, and PL dynamics. We have shown that an increase in the NC size results in weaker inhomogeneous broadening of the PL line corresponding to excitons. The exciton linewidth has an activation-like dependence on temperature, with an activation energy close to the interlevel separation in the NCs. We have observed PL quenching with temperature that is not accompanied by an increase in the PL decay rate, which would be expected for the activation of nonradiative recombination. We show that this quenching is related to the existence of deep surface traps and becomes more pronounced with a decrease in the NC size and the introduction of Cl. For the largest CsPbBr_3 NCs, the PL intensity decreases by only 50% upon an increase in temperature from 100 to 270 K, which gives evidence of their high quality. The PL dynamics of the studied NCs at low temperatures is characterized by three time scales on the order of 1 ns, 10 ns, and 1 μs, respectively. This dynamics is described by a model considering the relaxation of both electrons and holes to shallow traps and their activation from the ground to the excited states. The fact that considerable thermal quenching of PL occurs only in smaller NCs indicates that the number of nonradiative recombination centers in the bulk is small and that they are mostly located at the surface.

§ CONFLICTS OF INTEREST

There are no conflicts to declare.

§ METHODS

Samples. Samples of fluorophosphate glass with a composition of 40P_2O_5–35BaO–5NaF–10AlF_3–10Ga_3O_3 (mol. %) doped with NaCl (BaCl_2), Cs_2O, PbF_2, and BaBr_2 were synthesized using the melt-quench technique. Glass synthesis was performed in a closed glassy carbon crucible at a temperature T = 1000 ^∘C. About 50 g of the batch was melted in a crucible for 30 min, then the glass melt was cast on a glassy carbon plate and pressed to form a plate with a thickness of 2 mm. CsPbBr_3 or CsPb(Cl,Br)_3 perovskite NCs were precipitated by glass self-crystallization during melt-quenching and additional heat treatment at 400^∘C.
The size of the NCs in the samples varied from 8 to 16 nm depending on the annealing time. The composition of the sample with chlorine was evaluated using the X-ray diffraction method to be CsPb(Cl_0.5Br_0.5)_3 <cit.>.

Steady-state PL. For PL measurements, the samples were placed in a helium vapor-flow cryostat to achieve temperatures from 5 to 270 K. The steady-state PL spectra were recorded with a resolution of 0.9 meV using a grating spectrometer equipped with a liquid-nitrogen-cooled charge-coupled-device (CCD) matrix detector. The sample was excited using a CW semiconductor laser with a wavelength of 405 nm and a power of 2–5 μW. The size of the laser spot on the sample was about 200 μm.

Transmission spectra. An incandescent tungsten ribbon lamp was used to measure the optical transmission spectra. The investigated sample was thinned to 43 μm. The spectra were recorded using the same grating spectrometer with a liquid-nitrogen-cooled CCD detector.

Photoluminescence excitation spectra. For the investigation of the PL excitation spectra, the samples were placed in a helium closed-cycle cryostat and cooled to a temperature of 12 K. The PL was recorded using a spectrometer equipped with a liquid-nitrogen-cooled CCD camera. Optical excitation was carried out using light from an incandescent lamp transmitted through a pre-monochromator and focused onto the sample in a spot with a diameter of 2 mm. The excitation power was about 0.4 μW. The experimental data were normalized to the power of the incident light at a given wavelength. The spectral resolution of the PL excitation spectra was determined by the FWHM of the light band transmitted through the pre-monochromator, which was about 7 meV (1.5 nm).

Time-resolved PL. The PL dynamics was measured with a Hamamatsu C5680 streak camera coupled to a grating spectrograph. The spectral resolution was typically about 4 meV, and the time resolution was about 0.01 of the measurement time range. In these measurements, the sample was excited at a wavelength of 400 nm by the second harmonic of 2.5-ps pulses from a Mira-900D mode-locked Ti:sapphire laser with a pulse repetition rate of 76 MHz, which could be lowered by a factor of 16–8192 using a pulse picker for time-resolved measurements in time ranges up to 100 μs. The size of the laser spot on the sample was about 0.5 × 0.3 mm and the pulse energy was about 0.5 pJ (corresponding to an average power of about 2 μW for a pulse repetition rate of 4.75 MHz). We have checked that we work in the linear regime, i.e., the shape of the PL dynamics is independent of the excitation power.

§ AUTHOR CONTRIBUTIONS

Investigation, Ev.V.K., M.L.S., M.S.K., M.N.B.; sample preparation, El.V.K.; conceptualization, M.L.S., V.V.B.; methodology, Ev.V.K., M.L.S.; data analysis, Ev.V.K., M.S.K., M.N.B., V.V.B.; validation, Ev.V.K., M.L.S., El.V.K., M.S.K., M.N.B., D.R.Y., V.V.B.; supervision, M.L.S., D.R.Y., V.V.B.; funding acquisition, M.S.K., D.R.Y., V.V.B.; writing – original draft, V.V.B.; writing – review & editing, Ev.V.K., M.L.S., El.V.K., M.S.K., M.N.B., D.R.Y.

§ ACKNOWLEDGEMENTS

We acknowledge financial support by the Ministry of Science and Higher Education of the Russian Federation, Contract No. 075-15-2021-598 at the P.N. Lebedev Physical Institute. The work of M.S.K. (sample characterization) was supported by the Saint Petersburg State University through Research Grant No. 94030557.

[Wells(1893)]Wells1893 H. L.
Wells, Zeitschrift für anorganische Chemie, 1893, 3, 195–210 [Moller(1958)]Moller1958 C. K. Moller, Nature, 1958, 182, 1436–1436 [Green et al.(2014)Green, Ho-Baillie, and Snaith]Green2014 M. A. Green, A. Ho-Baillie and H. J. Snaith, Nature Photonics, 2014, 8, 506–514 [Huang et al.(2017)Huang, Bodnarchuk, Kershaw, Kovalenko, and Rogach]Huang2017 H. Huang, M. I. Bodnarchuk, S. V. Kershaw, M. V. Kovalenko and A. L. Rogach, ACS Energy Letters, 2017, 2, 2071–2083 [Sutter-Fella et al.(2016)Sutter-Fella, Li, Amani, Ager, Toma, Yablonovitch, Sharp, and Javey]Sutter-Fella2016 C. M. Sutter-Fella, Y. Li, M. Amani, J. W. Ager, F. M. Toma, E. Yablonovitch, I. D. Sharp and A. Javey, Nano Letters, 2016, 16, 800–806 [Belykh et al.(2019)Belykh, Yakovlev, Glazov, Grigoryev, Hussain, Rautert, Dirin, Kovalenko, and Bayer]Belykh2019 V. V. Belykh, D. R. Yakovlev, M. M. Glazov, P. S. Grigoryev, M. Hussain, J. Rautert, D. N. Dirin, M. V. Kovalenko and M. Bayer, Nature Communications, 2019, 10, 673 [Odenthal et al.(2017)Odenthal, Talmadge, Gundlach, Wang, Zhang, Sun, Yu, Valy Vardeny, and Li]Odenthal2017 P. Odenthal, W. Talmadge, N. Gundlach, R. Wang, C. Zhang, D. Sun, Z.-G. Yu, Z. Valy Vardeny and Y. S. Li, Nature Physics, 2017, 13, 894–899 [Grigoryev et al.(2021)Grigoryev, Belykh, Yakovlev, Lhuillier, and Bayer]Grigoryev2021 P. S. Grigoryev, V. V. Belykh, D. R. Yakovlev, E. Lhuillier and M. Bayer, Nano Letters, 2021, 21, 8481–8487 [Kirstein et al.(2022)Kirstein, Yakovlev, Glazov, Evers, Zhukov, Belykh, Kopteva, Kudlacik, Nazarenko, Dirin, Kovalenko, and Bayer]KirsteinLead2022 E. Kirstein, D. R. Yakovlev, M. M. Glazov, E. Evers, E. A. Zhukov, V. V. Belykh, N. E. Kopteva, D. Kudlacik, O. Nazarenko, D. N. Dirin, M. V. Kovalenko and M. Bayer, Advanced Materials, 2022, 34, 2105263 [Kirstein et al.(2023)Kirstein, Kopteva, Yakovlev, Zhukov, Kolobkova, Kuznetsova, Belykh, Yugova, Glazov, Bayer, and Greilich]KirsteinML2023 E. Kirstein, N. E. Kopteva, D. R. Yakovlev, E. A. Zhukov, E. V. Kolobkova, M. S. Kuznetsova, V. V. Belykh, I. A. Yugova, M. M. Glazov, M. Bayer and A. Greilich, Nature Communications, 2023, 14, 699 [Vinattieri and Giorgi(2021)]Vinattieri2021 Halide Perovskites for Photonic, ed. A. Vinattieri and G. Giorgi, AIP Publishing, Melville, New York, 2021 [Vardeny and Beard(2022)]Vardeny2022 Hybrid Organic Inorganic Perovskites: Physical Properties and Applications, ed. Z. V. Vardeny and M. C. Beard, World Scientific, 2022 [Protesescu et al.(2015)Protesescu, Yakunin, Bodnarchuk, Krieg, Caputo, Hendon, Yang, Walsh, and Kovalenko]Protesescu2015 L. Protesescu, S. Yakunin, M. I. Bodnarchuk, F. Krieg, R. Caputo, C. H. Hendon, R. X. Yang, A. Walsh and M. V. Kovalenko, Nano Letters, 2015, 15, 3692–3696 [Liu et al.(2018)Liu, Luo, He, Liang, and Xiang]Liu2018 S. Liu, Y. Luo, M. He, X. Liang and W. Xiang, Journal of the European Ceramic Society, 2018, 38, 1998–2004 [Liu et al.(2018)Liu, He, Di, Li, Xiang, and Liang]Liu2018a S. Liu, M. He, X. Di, P. Li, W. Xiang and X. Liang, Ceramics International, 2018, 44, 4496–4499 [Li et al.(2017)Li, Hu, Zhou, Jiang, Cheng, He, Liang, and Xiang]Li2017 P. Li, C. Hu, L. Zhou, J. Jiang, Y. Cheng, M. He, X. Liang and W. Xiang, Materials Letters, 2017, 209, 483–485 [Ai et al.(2016)Ai, Liu, Wang, Xie, Han, and Zhao]Ai2016 B. Ai, C. Liu, J. Wang, J. Xie, J. Han and X. Zhao, Journal of the American Ceramic Society, 2016, 99, 2875–2877 [Ai et al.(2017)Ai, Liu, Deng, Wang, Han, and Zhao]Ai2017 B. Ai, C. Liu, Z. Deng, J. Wang, J. Han and X. 
Zhao, Physical Chemistry Chemical Physics, 2017, 19, 17349–17355 [Ye et al.(2019)Ye, Zhang, Zhao, Wang, Liu, Deng, Zhao, and Han]Ye2019 Y. Ye, W. Zhang, Z. Zhao, J. Wang, C. Liu, Z. Deng, X. Zhao and J. Han, Advanced Optical Materials, 2019, 7, 1801663 [Kolobkova et al.(2021)Kolobkova, Kuznetsova, and Nikonorov]Kolobkova2021 E. V. Kolobkova, M. S. Kuznetsova and N. V. Nikonorov, Journal of Non-Crystalline Solids, 2021, 563, 120811 [Yakovlev et al.(2023)Yakovlev, Crooker, Semina, Rautert, Mund, Dirin, Kovalenko, and Bayer]Yakovlev2023 D. R. Yakovlev, S. A. Crooker, M. A. Semina, J. Rautert, J. Mund, D. N. Dirin, M. V. Kovalenko and M. Bayer, physica status solidi (RRL) – Rapid Research Letters, 2023,2300407 [Kirstein et al.(2022)Kirstein, Yakovlev, Glazov, Zhukov, Kudlacik, Kalitukha, Sapega, Dimitriev, Semina, Nestoklon, Ivchenko, Kopteva, Dirin, Nazarenko, Kovalenko, Baumann, Höcker, Dyakonov, and Bayer]Kirstein2022 E. Kirstein, D. R. Yakovlev, M. M. Glazov, E. A. Zhukov, D. Kudlacik, I. V. Kalitukha, V. F. Sapega, G. S. Dimitriev, M. A. Semina, M. O. Nestoklon, E. L. Ivchenko, N. E. Kopteva, D. N. Dirin, O. Nazarenko, M. V. Kovalenko, A. Baumann, J. Höcker, V. Dyakonov and M. Bayer, Nature Communications, 2022, 13, 3062 [Rudin et al.(1990)Rudin, Reinecke, and Segall]Rudin1990 S. Rudin, T. L. Reinecke and B. Segall, Physical Review B, 1990, 42, 11218–11231 [Bayer and Forchel(2002)]Bayer2002 M. Bayer and A. Forchel, Physical Review B, 2002, 65, 041308 [Valerini et al.(2005)Valerini, Cretí, Lomascolo, Manna, Cingolani, and Anni]Valerini2005 D. Valerini, A. Cretí, M. Lomascolo, L. Manna, R. Cingolani and M. Anni, Physical Review B, 2005, 71, 235409 [Muljarov and Zimmermann(2007)]Muljarov2007 E. A. Muljarov and R. Zimmermann, Physical Review Letters, 2007, 98, 187401 [Pérez-Osorio et al.(2015)Pérez-Osorio, Milot, Filip, Patel, Herz, Johnston, and Giustino]Perez-Osorio2015 M. A. Pérez-Osorio, R. L. Milot, M. R. Filip, J. B. Patel, L. M. Herz, M. B. Johnston and F. Giustino, The Journal of Physical Chemistry C, 2015, 119, 25703–25718 [Wright et al.(2016)Wright, Verdi, Milot, Eperon, Pérez-Osorio, Snaith, Giustino, Johnston, and Herz]Wright2016 A. D. Wright, C. Verdi, R. L. Milot, G. E. Eperon, M. A. Pérez-Osorio, H. J. Snaith, F. Giustino, M. B. Johnston and L. M. Herz, Nature Communications, 2016, 7, 11755 [Iaru et al.(2021)Iaru, Brodu, van Hoof, ter Huurne, Buhot, Montanarella, Buhbut, Christianen, Vanmaekelbergh, de Mello Donega, Rivas, Koenraad, and Silov]Iaru2021 C. M. Iaru, A. Brodu, N. J. J. van Hoof, S. E. T. ter Huurne, J. Buhot, F. Montanarella, S. Buhbut, P. C. M. Christianen, D. Vanmaekelbergh, C. de Mello Donega, J. G. Rivas, P. M. Koenraad and A. Y. Silov, Nature Communications, 2021, 12, 5844 [Ramade et al.(2018)Ramade, Andriambariarijaona, Steinmetz, Goubet, Legrand, Barisien, Bernardot, Testelin, Lhuillier, Bramati, and Chamarro]Ramade2018 J. Ramade, L. M. Andriambariarijaona, V. Steinmetz, N. Goubet, L. Legrand, T. Barisien, F. Bernardot, C. Testelin, E. Lhuillier, A. Bramati and M. Chamarro, Nanoscale, 2018, 10, 6393–6401 [Shi et al.(2019)Shi, Zhang, Sun, Chen, and Zhang]Shi2019 H. Shi, X. Zhang, X. Sun, R. Chen and X. Zhang, The Journal of Physical Chemistry C, 2019, 123, 19844–19850 [Wu et al.(2019)Wu, Liu, Wang, Han, and Yang]Wu2019 W. Wu, W. Liu, Q. Wang, Q. Han and Q. Yang, Journal of Alloys and Compounds, 2019, 787, 165–172 [Lohar et al.(2018)Lohar, Shinde, Gahlaut, Sagdeo, and Mahamuni]Lohar2018 A. A. Lohar, A. Shinde, R. Gahlaut, A. Sagdeo and S. 
Mahamuni, The Journal of Physical Chemistry C, 2018, 122, 25014–25020 [Pham et al.(2022)Pham, Lee, Lee, and Chung]Pham2022 T. T. Pham, H. Lee, J. Lee and W. J. Chung, Journal of the Korean Ceramic Society, 2022, 59, 749–762 [Wei et al.(2016)Wei, Xu, Chen, Zheng, Cheng, and Jiang]Wei2016 K. Wei, Z. Xu, R. Chen, X. Zheng, X. Cheng and T. Jiang, Optics Letters, 2016, 41, 3821 [Wolf and Lee(2018)]Wolf2018 C. Wolf and T.-W. Lee, Materials Today Energy, 2018, 7, 199–207 [Cragg and Efros(2010)]Cragg2010 G. E. Cragg and A. L. Efros, Nano Letters, 2010, 10, 313–317 [Javaux et al.(2013)Javaux, Mahler, Dubertret, Shabaev, Rodina, Efros, Yakovlev, Liu, Bayer, Camps, Biadala, Buil, Quelin, and Hermier]Javaux2013 C. Javaux, B. Mahler, B. Dubertret, A. Shabaev, A. V. Rodina, A. L. Efros, D. R. Yakovlev, F. Liu, M. Bayer, G. Camps, L. Biadala, S. Buil, X. Quelin and J.-P. Hermier, Nature Nanotechnology, 2013, 8, 206–212 [Savchenko et al.(2022)Savchenko, Vokhmintsev, and Weinstein]Savchenko2022 S. Savchenko, A. Vokhmintsev and I. Weinstein, Journal of Luminescence, 2022, 242, 118550 [Rossi et al.(2020)Rossi, Qiao, Liu, Khurana, Akimov, Cheon, and Son]Rossi2020 D. Rossi, T. Qiao, X. Liu, M. Khurana, A. V. Akimov, J. Cheon and D. H. Son, The Journal of Chemical Physics, 2020, 153, 184703 [Shornikova et al.(2018)Shornikova, Biadala, Yakovlev, Sapega, Kusrayev, Mitioglu, Ballottin, Christianen, Belykh, Kochiev, Sibeldin, Golovatenko, Rodina, Gippius, Kuntzmann, Jiang, Nasilowski, Dubertret, and Bayer]Shornikova2018 E. V. Shornikova, L. Biadala, D. R. Yakovlev, V. F. Sapega, Y. G. Kusrayev, A. A. Mitioglu, M. V. Ballottin, P. C. M. Christianen, V. V. Belykh, M. V. Kochiev, N. N. Sibeldin, A. A. Golovatenko, A. V. Rodina, N. A. Gippius, A. Kuntzmann, Y. Jiang, M. Nasilowski, B. Dubertret and M. Bayer, Nanoscale, 2018, 10, 646–656 [Belykh et al.(2022)Belykh, Skorikov, Kulebyakina, Kolobkova, Kuznetsova, Glazov, and Yakovlev]Belykh2022 V. V. Belykh, M. L. Skorikov, E. V. Kulebyakina, E. V. Kolobkova, M. S. Kuznetsova, M. M. Glazov and D. R. Yakovlev, Nano Letters, 2022, 22, 4583–4588 [Feldmann et al.(1987)Feldmann, Peter, Göbel, Dawson, Moore, Foxon, and Elliott]Feldmann1987 J. Feldmann, G. Peter, E. O. Göbel, P. Dawson, K. Moore, C. Foxon and R. J. Elliott, Physical Review Letters, 1987, 59, 2337–2340 [Belykh and Kochiev(2015)]Belykh2015 V. V. Belykh and M. V. Kochiev, Physical Review B, 2015, 92, 045307

Supplementary Information

§ S1. ENERGY STRUCTURE OF AN IDEAL SPHERICAL NC WITH INFINITE POTENTIAL BARRIERS

Here we calculate the energies of the first few quantum confinement levels in a nanocrystal (NC). We consider the NC as a sphere with zero potential inside and infinitely high potential outside the sphere. The electron wave function (the same considerations hold for holes) can be decomposed into the product of the wave functions depending on the radial variable, R_l(r), and angular variables, Y_l,m(θ, ϕ), where l and m are the quantum numbers responsible for the square of the angular momentum and its z-axis component, respectively. The Schrödinger equation for the radial wave function reads <cit.>

d^2 R_l/dr^2 + 2/r dR_l/dr + [k^2 - l(l+1)/r^2] R_l = 0,

where k = √(2m_e E^e/ħ^2), E^e is the electron energy, and m_e is the effective mass. The solution of this equation, up to a normalization factor, is expressed through the spherical Bessel functions: R_l(r) = j_l(kr) <cit.>.
The first three functions are j_0(x) = sin(x)/x, j_1(x) = sin(x)/x^2 - cos(x)/x, and j_2(x) = (3/x^3 - 1/x) sin(x) - 3cos(x)/x^2. We find k and, thus, E^e by imposing the boundary condition j_l(kD/2) = 0, where D is the NC diameter. The first few states, in order of increasing energy, are 1s, 1p, 1d, 2s, ..., where the letters “s”, “p”, and “d” correspond to l = 0, 1, and 2 (the level degeneracy being 2l + 1) and the numbers just enumerate consecutive levels with a given l. For the first 1s level, kD = 2π, and we find

E^e_1 = 2π^2ħ^2/m_e D^2.

The next two levels correspond to l = 1 and 2 and have energies E^e_2 ≈ 2.0 E^e_1 and E^e_3 ≈ 3.4 E^e_1 <cit.>. The quantum confinement energy for an electron–hole pair is the sum of the quantum confinement energies of the electron and hole; in particular, for the ground state

E_1 = 2π^2ħ^2/μ D^2,

where μ, with 1/μ = 1/m_e + 1/m_h, is the reduced mass. For CsPbBr_3 NCs m_e ≈ m_h, and the lowest quantum confinement energies with the corresponding states and their degeneracies are given in Table <ref>. We can compare the calculated energies with the positions of excited-state peaks in the PL excitation spectrum (Fig. 1g in the main text); up to five such peaks are observed for 16-nm CsPbBr_3 NCs. We find that the peak positions are perfectly described by the energies calculated for a spherical NC and correspond to states with the principal quantum number n=1 and different values of l. Note that the electron and hole in the NC should have the same quantum numbers to give a nonzero matrix element and to contribute to the PL excitation spectrum. The states with n=2 have smaller l (compared to the states with n=1 in the same energy range) and lower degeneracy, so their contribution is too weak to be visible.

We can also consider the shape of the NC to be cubic, which, e.g., may be the case for colloidal perovskite NCs <cit.>. The electron wave function in a cube is characterized by the momentum quantization numbers along the cube edges, |n_1,n_2,n_3⟩. In the ground state, all quantum numbers are equal to 1 for both the electron and hole, and the energy is

E^cube_1 = 3π^2ħ^2/2μ L^2,

where L is the cube edge length. In the next energy level, all quantum numbers are still 1 except one of them, which is 2 either for the electron or for the hole. This gives a degeneracy of 6 for the level and the energy E^cube_2 = 1.5 E^cube_1. The next energy level is represented by two quantum numbers out of six (three for the electron and three for the hole) equal to 2. This gives a level degeneracy of 15 and the energy E^cube_3 = 2.0 E^cube_1. We see that the energy-level structure of the lowest states is not very sensitive to the NC shape. In particular, the separations between the first three levels, measured in units of E_1, are the same for spherical and cubic NCs. Note that, in the following model of electron–hole dynamics, we consider the first three energy levels only.

§ S2. MODEL FOR PL INTENSITY QUENCHING WITH TEMPERATURE

Here we assume that the decrease in the NC PL intensity with temperature is related to traps present at the NC surface with density σ. A carrier can be trapped once it overcomes a potential barrier with height E_tr, and then it recombines nonradiatively with the other carrier or mediates Auger recombination. Then, the rate of carrier trapping and nonradiative recombination is γ_nr exp(-E_tr/k_BT), and a NC is “dark” if γ_nr exp(-E_tr/k_BT) ≫ γ_r and is “bright” if γ_nr exp(-E_tr/k_BT) ≪ γ_r; here, γ_r and γ_nr are the radiative and the high-temperature nonradiative recombination rates, respectively.
The intermediate regime γ_nr exp(-E_tr/k_BT) ∼ γ_r takes place only in a narrow temperature range, provided that γ_nr ≫ γ_r. So we can assume that a given NC is “switched off” in PL when the temperature reaches the value T = E_tr/[k_B ln(γ_nr/γ_r)].

In the following, these considerations are cast in a more quantitative form. Assume that the NCs are spherical and the number of traps n in an arbitrary NC is described by a Poissonian distribution with a mean value of π D^2 σ:

p_n = (π D^2 σ)^n/n! exp(-π D^2 σ).

Let the rate of charge-carrier capture by a single trap be γ_tr exp(-E_tr/k_BT) and the trap barrier energies be distributed according to the Gaussian function with mean E̅_tr and variance Γ_tr^2:

f(E_tr) = 1/√(2π)Γ_tr exp(-[E_tr - E̅_tr]^2/2Γ_tr^2).

We assume Γ_tr ≪ E̅_tr, so that ∫_0^∞ f(E_tr) dE_tr ≈ ∫_-∞^∞ f(E_tr) dE_tr = 1. The probability that a charge carrier in a NC containing one trap will recombine radiatively at temperature T, i.e., that the trap energy is higher than k_BT ln(γ_tr/γ_r), is

F(T) = ∫_k_BT ln(γ_tr/γ_r)^∞ f(E_tr) dE_tr ≈ 1/2 - 1/2 erf[(k_BT ln(γ_tr/γ_r) - E̅_tr)/√(2)Γ_tr],

where erf is the error function:

erf(x) = 2/√(π)∫_0^x exp(-t^2) dt, erf(-∞) = -1, erf(∞) = 1.

The cutoff temperature T̃, above which the capture of charge carriers at nonradiative recombination centers becomes faster than radiative recombination in a NC containing n such centers, decreases logarithmically with n as T̃ = E̅_tr/[k_B ln(n γ_tr/γ_r)]. However, if n ≪ γ_tr/γ_r, we can disregard this dependence and take T̃ = E̅_tr/[k_B ln(γ_tr/γ_r)]. Then, the probability F_n(T) that a NC containing n traps is still bright at temperature T is

F_n(T) ≈ F(T)^n ≈ [1/2 - 1/2 erf((k_BT ln(γ_tr/γ_r) - E̅_tr)/√(2)Γ_tr)]^n.

The PL intensity is proportional to the probability P(T) that an arbitrary NC is still bright at temperature T:

I(T) ∝ P(T) = ∑_n=0^∞ F_n(T) p_n ≈ exp(-π D^2 σ/2 {1 + erf[(k_BT ln(γ_tr/γ_r) - E̅_tr)/√(2)Γ_tr]}).

It is important to compare the dependences I(T) for NCs with different sizes. We assume that E̅_tr and Γ_tr are independent of D, while for γ_r we can use the experimental values, neglecting its temperature dependence (note that γ_r is under the logarithm). To determine the dependence of γ_tr on D, we note that γ_tr is proportional to the probability of finding the electron near a trap, in the vicinity of the surface characterized by some small distance δr ≪ D:

γ_tr ∝ R^2_0(D/2 - δr),

where R_0(r) = sin(2π r/D)/(√(π D) r) is the normalized radial wave function for l=0. Therefore, γ_tr ∝ 4πδr^2/D^5. The same dependence on D is obtained if we assume a finite height of the NC potential barriers and calculate the squared wave function exactly at the NC surface. Thus,

γ_tr = α D^-5,

where α is a constant factor. The mean total nonradiative recombination rate in NCs is

γ_nr = π D^2 σ γ_tr ∝ D^-3.

We fit the PL intensity temperature dependences at T > 50 K for the three CsPbBr_3 samples (Fig. <ref>a) with Eqs. (<ref>) and (<ref>) and determine the following parameters, common for the samples: σ = 0.024 nm^-2, α = 3×10^4 ps^-1 nm^5, E̅_tr = 180 meV, and Γ_tr = 70 meV. The fit shows good agreement with the experimental data. Using these values, we find that the total number of traps π D^2 σ is 6, 11, and 19, while the (high-temperature) capture time γ_tr^-1 = 2, 8, and 35 ps for NCs with D = 9, 12, and 16 nm, respectively. The same parameters, except for σ = 0.077 nm^-2 (π D^2 σ = 15) and γ_tr^-1 = 1 ps, are used to fit the intensity temperature dependence for the CsPb(Cl,Br)_3 sample (Fig. <ref>b).
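For reference, the last equation for I(T) is straightforward to evaluate with the quoted fit parameters; a minimal sketch (the function name is ours, and the value of γ_r is taken from the fast-component fit of section S3):

import numpy as np
from scipy.special import erf

kB = 0.0862  # meV/K

def bright_fraction(T, D, sigma=0.024, Etr=180.0, Gtr=70.0,
                    alpha=3e4, gamma_r=4e-3):
    # P(T): probability that a NC of diameter D (in nm) is still bright;
    # gamma_tr = alpha * D**-5 and gamma_r are in ps^-1, energies in meV
    gamma_tr = alpha * D**-5
    x = (kB * T * np.log(gamma_tr / gamma_r) - Etr) / (np.sqrt(2.0) * Gtr)
    return np.exp(-np.pi * D**2 * sigma / 2.0 * (1.0 + erf(x)))

print(bright_fraction(T=200.0, D=9.0))   # quenching is strongest for small D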
These fits show that the introduction of Cl into the NCs leads to an increase in the density of surface traps as well as in the capture rate per trap and, therefore, to a faster temperature quenching of the PL intensity. We note that the condition n ≪ γ_tr/γ_r is not satisfied for the sample with the largest NCs. This points to the fact that, owing to a number of approximations made, the above analysis is not quantitatively exact. However, it demonstrates the self-consistency of the general picture, and the good agreement between the experimental data and the calculated curves underscores the adequacy of the model.

For comparison, we also show the fit with the Arrhenius-like equation (red dashed line in Fig. <ref>b)

I(T) = I(0)/[1 + κ exp(-E_a/k_BT)],

where κ is the activation rate and E_a is the activation energy, which equals 48 meV in this case. This equation is often used in the literature to fit the temperature dependence of the PL intensity <cit.>. This fit shows worse agreement with the experimental data than Eq. (<ref>), especially at high temperatures.

§ S3. MODEL FOR ELECTRON–HOLE RECOMBINATION DYNAMICS IN NCS

To describe the complex PL dynamics of NCs, we use a simple rate-equation model. We consider the first (ground, g) energy level and the second (excited, e) level for each carrier (Fig. 3f in the main text), and neglect the contribution from the higher levels. The separation E_ge between the g and e levels is a few tens of meV, so contributions from excited states are expected only at high temperatures. In fact, in the experiment we see a PL component with a very long decay time which strongly depends on temperature at low temperatures. This long PL decay component suggests the existence of a dark state below the g level. In ref. <cit.>, a dark exciton state was revealed in the spin dynamics of the same CsPb(Cl,Br)_3 sample and was attributed to the spatially indirect exciton which is formed when one of the carriers is captured by a potential trap at the NC surface. Based on this interpretation, we assume the existence of shallow surface trap levels s for electrons and for holes in the NCs that are separated by energy E_s from the ground state (Fig. 3f in the main text). Taking into account that the effective masses of electrons and holes in CsPbBr_3 perovskites are almost the same (and equal to 0.3 m_0) <cit.>, we assume that the separations between the levels in the energy spectra of electrons and holes are the same, too. So E_s stands for the separation between the s and g levels, and E_e stands for the separation between the g and e levels, for both electrons and holes. We can write the following rate equations for the exciton populations n_α,β, corresponding to the electron at a level α = s, g, e and the hole at a level β = s, g, e:

d n_s,α/dt = γ_sg (1+N_sg) n_g,α + γ_se (1+N_se) n_e,α - γ_sg N_sg n_s,α - G γ_se N_se n_s,α,

d n_g,α/dt = γ_sg N_sg n_s,α + γ_ge (1+N_ge) n_e,α - γ_sg (1+N_sg) n_g,α - G γ_ge N_ge n_g,α - γ_r n_g,α δ_g,α,

d n_e,α/dt = G γ_se N_se n_s,α + G γ_ge N_ge n_g,α - γ_se (1+N_se) n_e,α - γ_ge (1+N_ge) n_e,α,

and similar equations for the exchanged indices in n_α,β.
Here, δ_g,α = 1 for α = g and δ_g,α = 0 for α ≠ g; N_sg = [exp(E_s/k_BT)-1]^-1, N_ge = [exp(E_e/k_BT)-1]^-1, and N_se = [exp((E_s+E_e)/k_BT)-1]^-1 are the populations of phonons at energies corresponding to the difference between the respective energy levels; γ_sg, γ_ge, and γ_se are the phonon-related relaxation rates between the levels s–g, g–e, and s–e (we assume them equal for electrons and holes); and γ_r is the radiative recombination rate in the ground state. The factor G represents the effective number of excited states. In the simplest case, G is the degeneracy of the excited state (3 in our case). However, in the fit we take G = 8, which in part accounts for the large number of excited states that are not included explicitly in the model. For simplicity, we assume only one shallow trap per NC for electrons and one for holes. Introducing multiple traps would add more parameters to the model without leading to any qualitative changes in the results. We assume that radiative recombination only takes place when both the electron and hole are in the g state. Indeed, the overlap of the wavefunctions for the s–s, s–g, e–g, and s–e states is vanishingly small, while the occupancy of the e–e state (1p–1p) is low (being the product of the electron and hole excited-state occupancies) and, taking into account selection rules, recombination from this state is further reduced by the degeneracy factor 2l+1.

To fit the experimental data in Fig. 3d of the main text for CsPb(Cl,Br)_3, we use the following parameters: γ_r = 4 ns^-1, γ_sg = 0.04 ns^-1, γ_se = 0.5 ns^-1, and E_s = 2.2 meV. For the excited-state energy separation E_e and the transition rate γ_ge, we use the values determined from the fit of the temperature dependence of the FWHM with Eq. (3) in the main text: E_e = 28 meV and γ_ge = γ_ph/G = 30 ps^-1 for CsPb(Cl,Br)_3.

The model gives four groups of relaxation times. In the first group, the relaxation times are of the order of γ_ge^-1 = 30 fs and are not revealed in our PL dynamics experiment. The other three groups follow the branches of the experimentally measured decay times. According to the model, at low temperatures the shortest decay time (of these three) is close to the recombination time γ_r^-1 = 0.25 ns. The intermediate time is close to the relaxation time from the ground state to the surface trap, γ_sg^-1 = 25 ns. The longest time, which is strongly dependent on temperature, describes the activation of surface-trapped carriers to the ground state and can be estimated as (γ_sg N_sg)^-1.

To fit the temperature dependence of the PL decay times for the largest 16-nm CsPbBr_3 NCs (Fig. 3e in the main text), we also used E_e and γ_ge determined from the FWHM temperature dependence (E_e = 15 meV and γ_ge = γ_ph/G = 10 ps^-1), while the other parameters are γ_r = 2 ns^-1, γ_sg = 0.05 ns^-1, γ_se = 0 ns^-1, and E_s = 1.3 meV. We note that the depth E_s of the shallow surface traps is almost a factor of two smaller for the more homogeneous sample. The main fit parameters are summarized in Table <ref>.

[S1]Landau1991 L. D. Landau and E. M. Lifshits, Quantum Mechanics: Non-Relativistic Theory, Butterworth-Heinemann, Oxford, 1991. [S2]Flugge2012 S. Flügge, Practical Quantum Mechanics, Springer Science and Business Media, New York, 2012. [S3]Elsasser1933 W. Elsasser, Zeitschrift für Physik, 1933, 81, 332–345. [S4]Akkerman2018S Q. A. Akkerman, G. Rainò, M. V. Kovalenko and L. Manna, Nature Materials, 2018, 17, 394–405. [S5]Ai2017S B. Ai, C. Liu, Z. Deng, J. Wang, J. Han and X.
Zhao, Physical Chemistry Chemical Physics, 2017, 19, 17349–17355. [S6]Belykh2022S V. V. Belykh, M. L. Skorikov, E. V. Kulebyakina, E. V. Kolobkova, M. S. Kuznetsova, M. M. Glazov and D. R. Yakovlev, Nano Letters, 2022, 22, 4583–4588. [S7]Kirstein2022S E. Kirstein, D. R. Yakovlev, M. M. Glazov, E. A. Zhukov, D. Kudlacik, I. V. Kalitukha, V. F. Sapega, G. S. Dimitriev, M. A. Semina, M. O. Nestoklon, E. L. Ivchenko, N. E. Kopteva, D. N. Dirin, O. Nazarenko, M. V. Kovalenko, A. Baumann, J. Höcker, V. Dyakonov and M. Bayer, Nature Communications, 2022, 13, 3062. | http://arxiv.org/abs/2312.16685v1 | {
"authors": [
"Evgeniya V. Kulebyakina",
"Mikhail L. Skorikov",
"Elena V. Kolobkova",
"Maria S. Kuznetsova",
"Matvei N. Bataev",
"Dmitri R. Yakovlev",
"Vasilii V. Belykh"
],
"categories": [
"cond-mat.mtrl-sci",
"cond-mat.mes-hall"
],
"primary_category": "cond-mat.mtrl-sci",
"published": "20231227185723",
"title": "Temperature-dependent photoluminescence dynamics of CsPbBr$_3$ and CsPb(Cl,Br)$_3$ perovskite nanocrystals in a glass matrix"
} |
Department of Physics, University of Massachusetts, Amherst, Massachusetts 01003, USA
University of Regensburg, Universitatsstrasse 31, 93053 Regensburg, Germany
Ames Laboratory, U.S. Department of Energy, Ames, Iowa 50011, USA
Department of Physics, University of Massachusetts, Amherst, Massachusetts 01003, USA

We study the low-energy properties of the chiral Heisenberg chain, namely, a one-dimensional spin-1/2 isotropic Heisenberg chain with a time-reversal symmetry-breaking pseudo-scalar chiral interaction. We employ the thermodynamic Bethe ansatz to find “chiralization”, the response of the ground state to the strength of the chiral interaction of a chiral Heisenberg chain. Unlike the magnetization case, the chirality of the ground state remains zero until the transition point corresponding to the critical coupling α_c=2J/π, with J being the antiferromagnetic spin-exchange interaction. Central-charge c=1 conformal field theories (CFTs) describe the two phases with zero and finite chirality. We suggest that the difference lies in the symmetry of their ground state (lightest weight) primary fields, i.e., the two phases are symmetry-enriched CFTs. At finite but small temperatures, the non-chiral Heisenberg phase acquires a finite chirality that scales with the temperature quadratically. We show that the finite-size effect around the transition point probes the transition.

Unveiling chiral phases: Finite-size scaling as a probe of quantum phase transition in symmetry-enriched c=1 conformal field theories
Tigran A. Sedrakyan
January 14, 2024
===========================================================================================================================================

§ INTRODUCTION

The one-dimensional (1D) spin-1/2 Heisenberg spin chain is a paradigmatic model in low-dimensional quantum condensed matter physics. The model can be solved exactly by the algebraic Bethe ansatz (BA) <cit.> and serves as a fundamental platform for exploring quantum spin dynamics and entanglement, as well as for studying quantum phase transitions, critical phenomena, and exotic states of matter in one dimension. In the continuum (thermodynamic) limit, the model becomes critical, realizing a continuous conformal field theory (CFT) with central charge c=1. In the isotropic case, this criticality can easily be seen: the SU(2)-symmetric spin coupling is the critical point between the easy-plane phase and the easy-axis (antiferromagnetic) phase. The underlying conformal symmetry greatly simplifies the description of the model and plays a crucial role in understanding the scaling behavior and correlations of the system as it approaches the critical phase. Besides, from the exact solution, the leading-order scaling for the ground state energy, E ∼ (1/4-ln 2)N+O(1), and the gap, m ∼ π^2/2N, where N ≫ 1 is the number of spins, has been established in the literature; see, e.g., Refs. <cit.>. This scaling is also consistent with the CFT prediction <cit.>. Numerical exact diagonalization also confirms the same result, even with a relatively small system size <cit.>. Notably, the exploration of the isotropic Heisenberg chain extends beyond the spin-1/2 system, encompassing both analytical and numerical investigations <cit.>.
Among many of them, Haldane postulated a significant conjecture: half-integer spin chains exhibit gapless behavior, while integer spin chains manifest an excitation gap <cit.>.

One of the important aspects of the antiferromagnetic Heisenberg chain is its magnetization, which reflects the alignment of spins in response to an external magnetic field. The introduction of an external field perturbs the system, breaking the SU(2) symmetry. A finite external field triggers the spins to align parallel to the external field and develop a finite magnetization. The magnetization of the chain, m, can be determined through various methods. In specific limits, the Bethe ansatz provides a computational avenue <cit.>. Additionally, applying worm-algorithm Monte Carlo offers another viable approach <cit.>. In particular, around the transition point to full magnetization, the magnetization exhibits a square-root scaling,

m ∼ 1/2 - 1/π √(h_c-h) for h < h_c, and m = 1/2 for h ≥ h_c,

where h is the external field and h_c is the critical field.

Here we consider the one-dimensional spin-1/2 anisotropic Heisenberg chain (XXZ model) with the scalar-chiral interaction, described by the Hamiltonian:

H_Δ(J,α) = J ∑_n (s^x_n s^x_n+1 + s^y_n s^y_n+1 + Δ s^z_n s^z_n+1) + α∑_n 𝐬_n · (𝐬_n+1 × 𝐬_n+2),

where the spin-1/2 operators are given in terms of Pauli matrices as s^μ = ħ/2 σ_μ, μ = x,y,z. The Hamiltonians with parameters α and -α can be mapped onto each other by flipping all the spins. Hence, without loss of generality, we assume α > 0 throughout the paper. One might naturally anticipate a behavior similar to the magnetization case for this more general type of time-reversal symmetry breaking. However, in this paper, we present a different scenario, where the time reversal 𝒯 and parity 𝒫 are both broken by a scalar triple product of neighboring spins, known as the chiral interaction, whereas the product of the two symmetries, 𝒫𝒯, remains preserved. We show that the chiralization exhibits a critical behavior different from the magnetization on both sides of the transition point. Specifically, in the isotropic Heisenberg limit at the quantum phase transition to the chiral phase, the chirality χ = ⟨𝐬_n · (𝐬_n+1 × 𝐬_n+2)⟩ scales linearly with the strength of the chiral interaction α:

χ ∼ 0 for α < α_c, and χ ∼ α - α_c for α ≥ α_c.

The chirality remains zero while α does not exceed the critical threshold α_c=2J/π and increases linearly beyond the transition point. Furthermore, both phases with zero and non-zero chirality are gapless and can be described by conformal field theories with central charge c=1. Because the chiral central charge, (c_L - c_R), of a CFT in 1+1 dimensions corresponding to a lattice model vanishes <cit.>, the possibility of a strong chiral interaction simply suppressing one of the chiral copies of the CFT, leading to a chiral CFT, is ruled out. Hence, we conclude that the crux of the distinction between the two phases with and without chiral order lies in their respective symmetries. The combination of the conformal symmetry with some additional symmetries is known to produce a multitude of distinct CFTs. For instance, a U(1) symmetry leads to an extended CFT featuring an emergent Kac-Moody algebra <cit.>. In addition, one can create a novel CFT by modding out a part of the symmetry, a concept known as orbifold CFT <cit.>. The chiralization transition considered in this paper can be interpreted as a transition between symmetry-enriched CFTs <cit.>.
To outline the transition, note that for small values of α (α < α_c), the primary field associated with the emergent CFT is related to the x-component of the spin operator, s^x (spin flip). This operator acquires an additional negative sign under the combined 𝒫𝒯 operation,

(𝒫𝒯)^-1 s^x 𝒫𝒯 = -s^x.

Conversely, for larger values of α (α > α_c), the ground state primary field is related to a phase twist p,

p ≡ ∏_n=1^L P_n(2π n/L - πΠ[n(L + 1)]),

where P_n(φ) is the phase-shift gate on site n that maps the basis states |↓⟩→|↓⟩ and |↑⟩→ e^iφ|↑⟩. The function Π: ℤ→{0,1} is the parity of the argument. The phase twist p remains invariant under 𝒫𝒯:

(𝒫𝒯)^-1 p (𝒫𝒯) = p.

The different behavior of the low-energy excitations under the 𝒫𝒯 symmetry makes it impossible to smoothly connect these two, Heisenberg-like and chiral, phases, giving rise to a quantum phase transition between them. This should be contrasted with conventional criticality, which involves the transformation of a system from one phase to another and is characterized by the interplay between symmetry-breaking and symmetry-preserving mechanisms. In this paper, we show that the transition between the two phases can be detected via the universal finite-size scaling behavior of the excitation gap, the ground state energy, and the entanglement entropy. The theoretical underpinnings and numerical studies of finite-size scaling in time-reversal symmetric CFTs are well established <cit.>. The universal finite-size effects in time-reversal-symmetry-broken CFTs have recently been derived for c=1/2 theories. In particular, it was shown that the terms in the excitation gap and in the ground state energy that scale inversely with the system size are universal: they depend only on the central charge of the Virasoro algebra, c, and scale universally with the time-reversal symmetry-breaking term. For the ground state energy, E, of N spins with open or periodic boundary conditions, one obtains <cit.>

E = ϵ N + b + cv/N[g(N m, ζ h^2 N) - θ] + O(1/N ln N),

where ϵ is the bulk energy per spin, b is the boundary energy, c is the central charge of the corresponding CFT, v is the effective velocity, m is the spectral gap (the critical CFT behavior sets in in the m=0 limit), ζ is a model-dependent function of the localization length ξ, h is the external time-reversal symmetry-breaking field (which can be the magnetic field), and g is a model-independent universal function. The parameter θ depends on the boundary conditions: θ = π/24 for the system with open boundary conditions, and θ = π/6 for the system with periodic boundary conditions. Here we numerically demonstrate that in the interacting case of the Heisenberg model with pseudo-scalar-chirality interaction, <ref> with Δ=1, the critical (m=0) scaling in <ref> preserves its form with the following time-reversal-symmetry breaking field:

h → √(α-α_c) for α ≥ α_c, and h → 0 otherwise.

Thus, the scaling variable of the universal scaling function of <ref>, g(0,x), becomes x ≡ ζ h^2 N → ζ(α-α_c) N, with the other terms preserving their physical meaning. <Ref> is one of the main results of this paper, with the function g(0,x) playing the role of the Landau order parameter in detecting the quantum phase transition.
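To make the operator content of the phase twist concrete, it can be written down explicitly for a small chain. The following sketch (a construction of ours) builds p as a diagonal matrix in the s^z product basis:

import numpy as np

def twist_operator(L):
    # p = prod_n P_n(2 pi n / L - pi * Parity[n(L+1)]); each P_n(phi) acts as
    # |down> -> |down>, |up> -> exp(i phi)|up>, so p is diagonal in the s^z basis
    diag = np.ones(1, dtype=complex)
    for n in range(1, L + 1):
        phi = 2 * np.pi * n / L - np.pi * ((n * (L + 1)) % 2)
        diag = np.kron(diag, np.array([1.0, np.exp(1j * phi)]))
    return np.diag(diag)

p = twist_operator(8)  # a 2^8 x 2^8 unitary; it commutes with the total S^z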
The obtained schematic phase diagram is compared to the well-known magnetization phase diagram of the anisotropic Heisenberg chain in <ref>. Both the pseudo-scalar-chiral interaction and the external magnetic term break the time-reversal symmetry at the level of the Hamiltonian. We show that once a chiral term replaces the external magnetic field, the whole phase diagram of the model changes drastically. For example, a time-reversal symmetry-breaking chiral phase and corresponding tricritical points emerge at the ferromagnetic and antiferromagnetic Heisenberg transitions. The state remains similar to the Heisenberg critical phase for the SU(2)-symmetric spin-exchange interaction with a small α < α_c.

The remainder of the paper is organized as follows. In <ref>, we outline the basic properties of the Heisenberg model with the chirality term, including the ways of exactly calculating the scalar-chirality order parameter, the bosonization of the model, and a qualitative description of the chiralization phase transition. In <ref>, we present the details of the algebraic and thermodynamic Bethe ansatz solution of the model, derive the exact expression for the chirality order parameter, and find the quantum phase transition. At finite temperatures, we find that the chirality scales with the temperature quadratically for all values of α, irrespective of the ground state chirality. In <ref>, we present the DMRG calculations of the finite-size scaling of various characteristics of the model. In <ref> we present our concluding remarks. Details of the analytical calculations are presented in the appendices.

§ THE CHIRAL SPIN-1/2 CHAIN

We start with the SU(2)-symmetric case of the Hamiltonian <ref>, namely a one-dimensional spin-1/2 Heisenberg chain with the chiral interaction, described by the Hamiltonian

H_1(J, α) = J∑_n 𝐬_n · 𝐬_n+1 + α∑_n 𝐬_n · (𝐬_n+1 × 𝐬_n+2),

where periodic boundary conditions are imposed. Importantly, the chiral term in <ref> commutes with the first, Heisenberg exchange term, and hence the model can be solved and the free energy evaluated within the same thermodynamic Bethe ansatz (TBA) framework that provides the solution of the exchange part only <cit.> (for a pedagogical review of TBA, see, e.g., Refs. <cit.>). In the phase diagram <ref>, the integrable region contains the vertical line at Δ=1, the horizontal axis with α=0, and the line J → 0. It is important to note that, much like the chiral term in the Hamiltonian represents a conserved current of the isotropic Heisenberg model and adding it to the latter preserves the integrability, one can define a conserved anisotropic chirality of the XXZ model whose addition to the XXZ Hamiltonian still preserves the integrability <cit.>. The same applies to various quantum integrable chains termed staggered integrable ladder models <cit.>. It would be very interesting to see the stabilization of quantum phases with finite anisotropic chirality. This study would help in understanding the nature of the quantum chiral spin liquid in two-dimensional frustrated systems <cit.>.
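The statement that the chiral term is a conserved current of the Heisenberg chain can be verified by brute force for a short periodic chain. A minimal sketch (with ħ = 1 and s = σ/2; the helper names are ours) that checks [H_Heis, H_χ] = 0 numerically:

import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
S = [sx, sy, sz]

def embed(N, ops):
    # place single-site operators {site: matrix} into the N-spin Hilbert space
    return reduce(np.kron, [ops.get(i, np.eye(2)) for i in range(N)])

N = 6
H_heis = sum(embed(N, {i: S[a], (i + 1) % N: S[a]})
             for i in range(N) for a in range(3))
eps = {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
       (0, 2, 1): -1, (2, 1, 0): -1, (1, 0, 2): -1}
H_chi = sum(e * embed(N, {i: S[a], (i + 1) % N: S[b], (i + 2) % N: S[c]})
            for i in range(N) for (a, b, c), e in eps.items())

print(np.abs(H_heis @ H_chi - H_chi @ H_heis).max())  # ~1e-15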
The exact evaluation of the chiral order parameter in model <ref> can be achieved by noting that, once the free energy per spin, F(J,α), is evaluated, the chirality per spin can be found using the Hellmann–Feynman theorem <cit.>, by taking the derivative of the free energy,

χ = ⟨𝐬_n · (𝐬_n+1 × 𝐬_n+2)⟩ = ∂_α F(J, α).

Before we present the TBA solution of the model, in the remainder of the present section it would be instructive to discuss the bosonization of the Hamiltonian, illustrating the CFT nature of the model. For the details of this technique, we refer to <cit.>. The bosonization of the Heisenberg (exchange) part is standard. Applying the Jordan-Wigner transformation and taking the continuous limit to the leading order in bosonic fluctuations, one gets

H_1(J, 0) ∼ 1/2 v ∫ dx [g^-1 (∂_x ϕ)^2 + g (∂_x θ)^2],

where v is the effective velocity, g is the Luttinger parameter, and ϕ and θ are the boson and dual boson fields. Along the same lines, one can bosonize the chiral term, yielding

H_χ ≡ H_1(0, α) ∼ 1/2 α∫ dx ∂_x ϕ ∂_x θ,

where we have dropped the irrelevant terms. The subsequent perturbative renormalization group analysis shows that the chiral term is marginal with respect to the Heisenberg part, so that, depending on the magnitude of α, the effective CFT describing the model is either the same as that for H_1(J, 0) or a different one, which captures the non-zero chiral ordering.

§ THE BETHE ANSATZ SOLUTION

In this section, we closely follow the notations used in the existing literature on TBA <cit.> and defer derivation details to <ref>. The thermodynamic Bethe equations for the chiral Heisenberg chain form a set of non-linear integral equations for the so-called n-string quasienergies, ϵ_n(x), parameterized by real x:

ϵ_1(x) = -2π (J - α∂_x) s(x) + T s * ln(1+exp(ϵ_2(x)/T)),
ϵ_n(x) = T s * ln[(1+exp(ϵ_n-1(x)/T))(1+exp(ϵ_n+1(x)/T))],
lim_n→∞ ϵ_n(x)/n = 0,

where T is the temperature (the Boltzmann constant is set to 1), s(x) = 1/[4cosh(π x/2)], and “*" denotes the convolution, f*g(x) ≡ ∫_-∞^∞ f(x-y)g(y)dy. The free energy per spin is given by

F = -J(ln 2-1/4) - T∫_-∞^∞ s(x) ln(1+exp(ϵ_1(x)/T)) dx.

§.§ The ground state chirality

The ground state is found in the zero-temperature limit, T → 0. In this limit, the Bethe equations and the free energy simplify to

ϵ_1(x) = -2π (J - α∂_x) s(x) + s*ϵ_2^+(x),
ϵ_n(x) = s*(ϵ_n-1^+(x)+ϵ_n+1^+(x)),
lim_n→∞ ϵ_n(x)/n = 0,

and

F = -J(ln 2-1/4) - ∫_-∞^∞ s(x) ϵ_1^+(x) dx,

where

ϵ_n^+(x) ≡ ϵ_n(x) for ϵ_n(x) ≥ 0, and ϵ_n^+(x) ≡ 0 for ϵ_n(x) < 0.

Thus, in <ref>, only ϵ_1 can take negative values. Depending on the ratio α/J, the free energy takes different forms. For α < α_c = 2J/π, the system <ref> yields the solution with ϵ_1 < 0 and ϵ_n = 0, n ≥ 2. Therefore, the second term in <ref> disappears and F is independent of α:

F(J, α) = -J(ln 2 - 1/4).

Hence, according to <ref>, the chirality is zero,

χ(α) = 0, α < α_c = 2J/π.

For α > α_c = 2J/π, on the other hand, ϵ_1(x) is monotonically increasing and undergoes a sign change. In this case, <Ref> is solved by the Wiener-Hopf method <cit.>. Suppose ϵ_1(x) crosses zero at x=a, i.e., ϵ_1(a)=0. Then the free energy is found as

F(α) = -J(ln 2 - 1/4) + 1/2 ∑_n,k=0^∞ (-1)^k+n [1-πα(k+1/2)] e^-π a(k+n+1) G(n)G(k)/(k+n+1),

where

G(k) = √(2π) exp{-k-1/2+(k+1/2)ln(k+1/2)}/Γ(k+1)

is defined through the gamma function Γ, and a satisfies

∑_k=0^∞ (-1)^k [1-πα(k+1/2)] e^-π a(k+1/2) G(k) = 0,

or, equivalently,

α = ∑_k=0^∞ (-1)^k e^-π a(k+1/2) G(k) / ∑_k=0^∞ (-1)^k [π(k+1/2)] e^-π a(k+1/2) G(k).

For the chirality per spin we find the expression

χ = -sgn(α) π/4 (∑_n (-1)^n e^-π a(n+1/2) G(n))^2.
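The last two relations are easy to solve numerically. A minimal sketch (the truncation K and the root bracket are our choices; the bracket covers moderate α > α_c, since a → ∞ at the transition and a → 0 as α grows):

import numpy as np
from scipy.special import gammaln
from scipy.optimize import brentq

K = 500
k = np.arange(K, dtype=float)
G = np.sqrt(2 * np.pi) * np.exp(-k - 0.5 + (k + 0.5) * np.log(k + 0.5)
                                - gammaln(k + 1))
sgn = (-1.0) ** k

def root_eq(a, alpha):
    # left-hand side of the transcendental equation fixing a for a given alpha
    w = np.exp(-np.pi * a * (k + 0.5))
    return np.sum(sgn * (1 - np.pi * alpha * (k + 0.5)) * w * G)

def chirality(alpha, J=1.0):
    if alpha <= 2 * J / np.pi:
        return 0.0
    a = brentq(root_eq, 0.02, 40.0, args=(alpha,))
    s = np.sum(sgn * np.exp(-np.pi * a * (k + 0.5)) * G)
    return -np.pi / 4 * s ** 2

print(chirality(0.7))  # small, and linear in (alpha - 2/pi) near the transition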
Thus we conclude that the ground state chirality is zero for small α up to α_c = 2J/π, where a chiralization transition occurs, leading to a non-zero ground state chirality for α > α_c. In the vicinity of the quantum phase transition, the analytical expression for the chirality acquires a linear asymptote:

χ = -sgn(α) π^7/2/32 e (√(π)-√(e)γ_0) (α - 2/π),

where γ_0 = ∑_k (-1)^k G(k) ≈ 0.557, producing the asymptotic behavior in Eq. <ref>. In <ref>, an exact closed-form expression for χ is found, and the scaling of χ around the critical point, given by the above <ref>, is derived.

§.§ Finite temperature corrections

Before we proceed to further check the properties of the two ground states, we first look at the finite-temperature correction to the low-α phase. To study the low-temperature (Δ_T(ϵ_i) ≪ ϵ_i) behavior, the thermodynamic BE <ref> are rewritten as

ϵ_1(x) = -2π (J - α∂_x) s(x) + s*ϵ_2^+(x) + s*Δ_T(ϵ_2),
ϵ_n(x) = s*(ϵ_n-1^+(x)+ϵ_n+1^+(x)) + s*(Δ_T(ϵ_n-1) + Δ_T(ϵ_n+1)),
lim_n→∞ ϵ_n(x)/n = 0,

where

Δ_T(x) = T ln[1+exp(x/T)] for x < 0, and Δ_T(x) = T ln[1+exp(x/T)] - x for x ≥ 0,

and the free energy is

f = -J(ln 2-1/4) - ∫_-∞^∞ s(x) ϵ_1^+(x) dx - ∫_-∞^∞ s(x) Δ_T(ϵ_1) dx.

One can treat the terms with Δ_T(ϵ_i) perturbatively,

ϵ_1 = ϵ_1^(T=0) + s*Δ_T(ϵ_2^(T=0)) = ϵ_1^(T=0) + s*Δ_T(a_1*ϵ_1^+(T=0)),

and the free energy becomes

f = f^(T=0) - ∫_-∞^∞ s^2 * Δ_T(a_1*ϵ_1^+(T=0)) dx - ∫_-∞^∞ s(x) Δ_T(ϵ_1^(T=0)) dx.

For α < 2J/π, ϵ_1^(T=0) = -2π (J - α∂_x) s(x) is negative, so the second term vanishes:

f = f^(T=0) - ∫ s(x) Δ_T(ϵ_1^(T=0)) dx.

Since f^(T=0) is a constant independent of α,

χ/N = ∂f/∂α = -∫_-∞^∞ s(x) Δ_T'(ϵ_1^(T=0)) (dϵ_1^(T=0)/dα) dx = -2π∫_-∞^∞ s(x) Δ_T'(ϵ_1^(T=0)) s'(x) dx.

For low temperature T and small α, <ref> thus shows a chirality linear in α and quadratic in T. The T^2 behavior can be obtained by the method of Sommerfeld expansion <cit.>. The transfer-matrix expansion ln M(u) = u H_0 + 1/2 u^2 H_χ, where u is the spectral parameter, can also explain the quadratic temperature dependence: both ln M(u) and H_0, which are identical to the Heisenberg model in this respect, have a T^2 dependence, and this restricts the temperature dependence of the chirality H_χ.

§ FINITE SIZE SCALING

In this section, the density matrix renormalization group (DMRG) calculation is performed for the system with periodic boundary conditions to further understand the finite-size corrections. The ground state energy and chirality per spin agree perfectly with the TBA result, as shown in <ref>. The phase at α < 2J/π is in the same universality class as the Heisenberg CFT, whose gap at finite size (with an even total number of spins) decreases as the system size grows. When α > 2J/π, the gap closes because the combination of time reversal 𝒯 and parity reversal 𝒫 leaves the Hamiltonian invariant. Hence, 𝒫𝒯 relates the degenerate ground states. Both phases have central charge c=1, which can be extracted from the entanglement entropy scaling S = c/3 ln(n), where n is the subsystem size, as shown in <ref>.
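The central charge quoted here is extracted from a fit of the DMRG entanglement profile; a minimal sketch of such a fit, assuming the standard Calabrese–Cardy chord-length form for periodic chains (which reduces to S = c/3 ln n for n ≪ N; the function name is ours):

import numpy as np

def fit_central_charge(entropies):
    # entropies[j] = S(n) for subsystem sizes n = 1 .. N-1 of a periodic chain;
    # fit S(n) = (c/3) ln[(N/pi) sin(pi n / N)] + s0 and return c
    N = len(entropies) + 1
    n = np.arange(1, N)
    x = np.log((N / np.pi) * np.sin(np.pi * n / N)) / 3.0
    c, s0 = np.polyfit(x, entropies, 1)
    return c

For the Heisenberg point this fit returns c ≈ 1, consistent with the half-chain entropy approaching 1/3 ln(N/π).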
At n = N/2, the entanglement entropy as a function of α exhibits a transition: starting at (1/3)ln(N/π) for α = 0, consistent with the c = 1 Heisenberg CFT, it remains constant and then dips to a smaller value around α ≈ 2/π, before reverting to (1/3)ln(N/π) for significantly larger values of α ≫ 2/π.

When α < 2J/π, the chiral term in the Hamiltonian vanishes because of the vanishing chirality; hence the energy has the same scaling behavior as the Heisenberg CFT, that is,

E = E_0 N - π^2/(12N) + O(1/(N ln N)),

as shown in <ref>, where E_0 = 1/4 - ln 2. For α > 2J/π, around the transition point,

E = ϵ N + (cv/N)[g(0, ζ(α-α_c)N) - π/6] + O(1/(N ln N)),

where ϵ = 1/4 - ln 2 - κ(α - 2/π)^2; the constant κ ≈ 0.370154 can be derived with TBA, see <ref>. The DMRG results agree with this as well, as shown in <ref>. The shape of the scaling function g(0, ζ(α-α_c)N) agrees, within the numerical precision, with the universal scaling function of time-reversal-symmetry-broken criticalities of Eq. <ref>, see <cit.>. More precisely, it agrees with the universal behavior observed in non-interacting systems <cit.> when the gap m → 0, while the form of the symmetry-breaking scaling variable is different because of the strong interaction.

§ CONCLUSION

We studied the chiralization of the Heisenberg chain with an additional scalar-chirality term, a conserved current of the Heisenberg model. We showed that the ground state of this system maintains zero chirality until it reaches the transition point α_c = 2J/π. Both of these phases are gapless and can be described by CFTs with an identical central charge of one. Their ground states, or lowest-weight primary fields, transform differently under 𝒫𝒯 symmetry, making them symmetry-enriched CFTs. At low temperatures, a chirality develops as soon as the chiral term is turned on, and it exhibits a quadratic temperature dependence. This intriguing transition can be further explored by considering the finite-size effects around the transition point. The term linear in N in the ground state energy shows a quadratic dependence on α - α_c. The ∼1/N ground-state energy correction above the transition point follows a universal function exhibiting non-monotonic behavior that detects the transition, playing the role of an order parameter.

§ BOSONIZATION OF THE CHIRAL HEISENBERG CHAIN

One can perform the Jordan-Wigner transformation, take the continuum limit, and then bosonize the Heisenberg chain:

H_0 = (J/2)(s_n^+ s_n+1^- + h.c.) + J s_n^z s_n+1^z
    = (J/2)(c_n^† c_n+1 + h.c.) + J (n_n - 1/2)(n_n+1 - 1/2)
    ∼ iJ ∫ dx (ψ_R^† ∂_x ψ_R - ψ_L^† ∂_x ψ_L) + J ∫ dx (ρ_R^2 + ρ_L^2 + 4ρ_L ρ_R)
    ∼ (v/2) ∫ dx [ g^-1 (∂_x ϕ)^2 + g (∂_x θ)^2 ],

where

ψ_R = (1/√(2π)) e^{i√(4π) φ_R}, ψ_L = (1/√(2π)) e^{-i√(4π) φ_L},

and ρ_R/L = :ψ_R/L^† ψ_R/L:. Umklapp terms are ignored. We have defined

ϕ = φ_R + φ_L, θ = φ_R - φ_L

to write the Hamiltonian in the boson and dual-boson representation.

Following the same steps, one can bosonize the chiral term:

H_χ = (iα/2) ∑_n [ s_n^z (s_n+1^+ s_n+2^- - s_n+1^- s_n+2^+) + s_n+1^z (s_n+2^+ s_n^- - s_n+2^- s_n^+) + s_n+2^z (s_n^+ s_n+1^- - s_n^- s_n+1^+) ]
    = (iα/2) ∑_i [ (n_i - 1/2)(c_i+1^† c_i+2 - c_i+2^† c_i+1) - (1/2)(c_i+2^† c_i - c_i^† c_i+2) + (n_i+2 - 1/2)(c_i^† c_i+1 - c_i+1^† c_i) ]
    ∼ -(iα/4) ∫ dx [ ψ_L^†(x) ψ_L(x+2) + ψ_R^†(x) ψ_R(x+2) + h.c. ]
    = -(iα/2) ∫ dx [ ψ_L^†(x) ∂_x ψ_L(x) + ψ_R^†(x) ∂_x ψ_R(x) ]
    = (α/2) ∫ dx [ (∂_x φ_R)^2 - (∂_x φ_L)^2 ]
    = (α/2) ∫ dx ∂_x ϕ ∂_x θ.

After bosonization, H_χ becomes a sum of products of four ψ fields with up to second-order derivatives and products of two ψ fields with first-order derivatives. We drop the irrelevant terms, i.e., the products of four fields, whose scaling dimension is Δ^(4) = 2 + (number of derivatives).

§ TBA SOLUTION OF THE CHIRAL HEISENBERG CHAIN

§.§ Bethe equation

The BE of the Hamiltonian <ref> read

((x_j+i)/(x_j-i))^N = ∏_l≠j (x_j-x_l+2i)/(x_j-x_l-2i), j = 1, …, M,

where x_j is the rapidity. The corresponding energy in terms of the rapidities is

E(x_1, …, x_M) = -∑_j=1^M (J - α∂_x_j) 2/(x_j^2+1).

The solutions of the BE are hypothesized to live on a collection of strings <cit.>,

x_μ^n,j = x_μ^n + i(n+1-2j), j = 1, …, n,

where μ labels different strings, n labels the length of the string, and j labels the position on the string. <Ref> becomes

e^N(x_μ^n,j) = ∏_(m,ν)≠(n,μ) e((x_μ^n,j - x_ν^m)/(m-1)) e((x_μ^n,j - x_ν^m)/(m+1)) ∏_j'≠j e((x_μ^n,j - x_μ^n,j')/2),

where we defined the function e(x) as e(x) = (x+i)/(x-i). This can be further simplified to

e^N(x_μ^n/n) = ∏_j=1^n e^N(x_μ^n,j) = ∏_(m,ν)≠(n,μ) E_nm(x_μ^n - x_ν^m),

if we define

E_nm(x) ≡ e(x/|n-m|) e^2(x/(|n-m|+2)) e^2(x/(|n-m|+4)) ⋯ e^2(x/(n+m-2)) e(x/(n+m)) for n ≠ m,
E_nn(x) ≡ e^2(x/2) e^2(x/4) ⋯ e^2(x/(2n-2)) e(x/(2n)).

The logarithmic form of these equations gives

N θ(x_μ^n/n) = 2π I_μ^n + ∑_(m,ν)≠(n,μ) Θ_nm(x_μ^n - x_ν^m),

where

Θ_nm(x) ≡ θ(x/|n-m|) + 2θ(x/(|n-m|+2)) + ⋯ + 2θ(x/(n+m-2)) + θ(x/(n+m)) for n ≠ m,
Θ_nn(x) ≡ 2θ(x/2) + 2θ(x/4) + ⋯ + 2θ(x/(2n-2)) + θ(x/(2n)),

and θ(x) ≡ 2 arctan(x).

§.§ Thermodynamic Bethe equation

In the thermodynamic limit, <ref> becomes

2π ∫^x [ρ_n(t) + ρ_n^h(t)] dt = θ_n(x) - ∑_m=1^∞ ∫_-∞^∞ Θ_nm(x-y) ρ_m(y) dy,

where ρ_n(x) (ρ_n^h(x)) is the density of the strings (holes). Differentiating with respect to x,

a_n(x) = ρ_n(x) + ρ_n^h(x) + ∑_k T_nk * ρ_k(x),

where

T_nm(x) ≡ a_|n-m|(x) + 2a_|n-m|+2(x) + 2a_|n-m|+4(x) + ⋯ + 2a_n+m-2(x) + a_n+m(x) for n ≠ m,
T_nn(x) ≡ 2a_2(x) + 2a_4(x) + ⋯ + 2a_2n-2(x) + a_2n(x),

and a_n(x) ≡ (1/π) n/(x^2+n^2), a_0(x) ≡ δ(x).

The free energy per site, f = e - Ts, can be written with

e = J/4 + ∑_n=1^∞ ∫_-∞^∞ g_n(x) ρ_n(x) dx,

where g_n(x) ≡ -2π(J - α∂_x) a_n(x), and

s = ∑_n=1^∞ ∫_-∞^∞ [ρ_n(x) ln(1 + ρ_n^h(x)/ρ_n(x)) + ρ_n^h(x) ln(1 + ρ_n(x)/ρ_n^h(x))] dx.

Minimizing the free energy,

0 = δe - Tδs = ∑_n=1^∞ ∫ dx { [g_n(x) - T ln(1 + ρ_n^h(x)/ρ_n(x))] δρ_n(x) - T ln(1 + ρ_n(x)/ρ_n^h(x)) δρ_n^h(x) }.

Using <ref>,

δρ_n^h(x) = -δρ_n(x) - ∑_m ∫ T_nm(x-y) δρ_m(y) dy.

One arrives at

ln η_n = g_n/T + ∑_k=1^∞ T_nk * ln(1 + η_k^-1),

where we define η_n(x) = ρ_n^h(x)/ρ_n(x). For n = 1,

ln η_1 = g_1(x)/T + a_2 * ln(1+η_1^-1) + ∑_j=2^∞ (a_j-1 + a_j+1) * ln(1+η_j^-1),

or equivalently

ln(1+η_1) = g_1(x)/T + ∑_l=1^∞ (a_l-1 + a_l+1) * ln(1+η_l^-1).

Using <ref>, we get a recursive relation for a_0, a_1 and a_2,

a_1 * (T_n-1,m + T_n+1,m) - (a_0+a_2) * T_n,m = (δ_n-1,m + δ_n+1,m) a_1.

Combining <ref>, the whole set of recursive relations for the η_n reads

(a_0+a_2) * ln η_1(x) = g_1(x)/T + a_1 * ln(1+η_2(x)),
(a_0+a_2) * ln η_n(x) = a_1 * ln[(1+η_n-1(x))(1+η_n+1(x))], n = 2, 3, …

Substituting <ref> into <ref>,

0 = a_n * ln η_n+1 - a_n+1 * ln(1+η_n) - a_n+2 * ln(1+η_n+1^-1) - ∑_l=n+2^∞ (a_l-1 + a_l+1) * ln(1+η_l^-1).

Equivalently, we can rewrite ln η_n+1 in terms of the other ln η_l:

ln η_n+1 = a_1 * ln η_n + a_2 * ln(1+η_n+1^-1) + ∑_l=n+2^∞ (a_l-n-1 + a_l-n+1) * ln(1+η_l^-1).

For large n, ln(1+η_n^-1) ≃ o(n^-2), and therefore

lim_n→∞ [ln η_n+1 - a_1 * ln η_n] = 0,

or equivalently lim_n→∞ ln η_n/n = 0. We can thus summarize the thermodynamic BE as

ln η_1(x) = -[2π(J - α∂_x)/T] s(x) + s * ln(1+η_2(x)),
ln η_n(x) = s * ln[(1+η_n-1(x))(1+η_n+1(x))],
lim_n→∞ ln η_n/n = 0,

where s(x) = 1/(4 cosh(πx/2)).
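In practice, this infinite hierarchy can be solved numerically by truncating the string index and iterating to a fixed point on a rapidity grid; combined with the closed free-energy expression derived just below, f = -J(ln 2 - 1/4) - T ∫ s ln(1+η_1) dx, this also gives χ = ∂f/∂α by finite differences. The sketch below is ours, not from the paper: the hierarchy is closed at level nmax with the constant solution η_n = (n+1)^2 - 1, which solves the n ≥ 2 relations exactly; grid size, truncation level, sweep count, and the finite-difference step are illustrative choices, and no convergence check is performed.

```python
import numpy as np
from scipy.signal import fftconvolve

def tba_free_energy(J=1.0, alpha=0.3, T=0.2, L=40.0, M=2001, nmax=20, sweeps=600):
    x = np.linspace(-L, L, M)
    dx = x[1] - x[0]
    s = 1.0 / (4.0 * np.cosh(np.pi * x / 2.0))            # TBA kernel s(x)
    drive = (-2.0 * np.pi / T) * (J * s - alpha * np.gradient(s, dx))
    conv = lambda f: fftconvolve(s, f, mode="same") * dx  # discretized (s * f)(x)

    # start from, and close the truncated hierarchy with, the constant
    # solution eta_n = (n+1)^2 - 1 of the n >= 2 relations
    ln_eta = [np.full(M, np.log((n + 1) ** 2 - 1.0)) for n in range(1, nmax + 1)]
    top = np.full(M, 2.0 * np.log(nmax + 2.0))            # ln(1 + eta_{nmax+1})
    for _ in range(sweeps):                               # naive Jacobi iteration
        lt = [np.log1p(np.exp(le)) for le in ln_eta]      # ln(1 + eta_n)
        ln_eta[0] = drive + conv(lt[1])
        for n in range(1, nmax - 1):
            ln_eta[n] = conv(lt[n - 1] + lt[n + 1])
        ln_eta[nmax - 1] = conv(lt[nmax - 2] + top)
    return -J * (np.log(2.0) - 0.25) - T * np.sum(s * np.log1p(np.exp(ln_eta[0]))) * dx

# chirality per spin, chi = df/dalpha, by central differences
da = 1e-3
chi = (tba_free_energy(alpha=0.3 + da) - tba_free_energy(alpha=0.3 - da)) / (2 * da)
print(chi)
```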
The free energy becomes

f = e - Ts = J/4 + ∑_n=1^∞ ∫ {g_n ρ_n - T[ρ_n ln(1+η_n) + ρ_n^h ln(1+η_n^-1)]} dx.

We eliminate ρ_n^h using <ref>:

f = J/4 - T ∑_n=1^∞ ∫ {ln(1+η_n^-1) a_n(x) + ρ_n [ln η_n - g_n/T - T_nm * ln(1+η_m^-1)]} dx
  = J/4 - T ∑_n=1^∞ ∫ a_n(x) ln(1+η_n^-1(x)) dx.

Acting with ∫ dx s(x) on <ref> gives

∫ dx s(x) ln(1+η_1) = (1/T) ∫ s(x) g_1(x) dx + ∑_l=1^∞ ∫ a_l(x) ln(1+η_l^-1(x)) dx,

so the free energy is

f = -J(ln 2 - 1/4) - T ∫ s(x) ln(1+η_1(x)) dx.

§.§ Low temperature limit

To take the T → 0 limit, a new set of variables is introduced,

η_n(x) = exp{ϵ_n(x)/T}, n = 1 … N.

<ref> becomes

ϵ_1(x) = -2π (J - α∂_x) s(x) + s * T ln(1+exp(ϵ_2(x)/T)),
ϵ_n(x) = s * T ln[(1+exp(ϵ_n-1(x)/T))(1+exp(ϵ_n+1(x)/T))],
lim_n→∞ ϵ_n(x)/n = 0,

and the free energy becomes

f = -J(ln2 - 1/4) - T ∫ s(x) ln(1+exp(ϵ_1(x)/T)) dx.

§ CHIRALITY FOR α > α_c

The chirality can be computed by taking the derivative of the free energy <ref> with respect to α:

(1/N)χ = df/dα = -sgn(α) (1/2) ∑_nk (-1)^k+n π(k+1/2) e^-πa(k+n+1) G(n)G(k)/(k+n+1)
        + (1/2) ∑_nk (-1)^k+n [1 - πα(k+1/2)] e^-πa(k+n+1) G(n)G(k) (-π) da/dα
        ≡ χ_1 + χ_2.

Writing f_k ≡ e^-πa(k+1/2) G(k), <ref> gives

χ_2 = -(π/2) (da/dα) ∑_n (-1)^n f_n ∑_k [1 - πα(k+1/2)] (-1)^k f_k = 0.

So

χ/N = χ_1 = -sgn(α) (1/2) ∑_nk (-1)^k+n π(k+1/2) e^-πa(k+n+1) G(n)G(k)/(k+n+1)
    = -sgn(α) (1/4) ∑_nk (-1)^k+n π e^-πa(k+n+1) G(n)G(k)
    = -sgn(α) (π/4) (∑_n (-1)^n e^-πa(n+1/2) G(n))^2.

§ ENERGY'S QUADRATIC DEPENDENCE ON α NEAR THE TRANSITION POINT

We define

f(a) = ∑_k (-1)^k e^-πa(k+1/2) G(k),

which can be fitted with a simple function, as shown in <ref>:

f̃(a) = [(2γ_0 - √(π/e)) e^-πa + √(π/e)] e^πa/2 / (e^πa + 1),

where γ_0 = ∑_k (-1)^k G(k) ≈ 0.557. Then <ref> can be written as

α = -f/f' = -f̃/f̃' = 2/π + 8 e^πa (√π - √e γ_0) / [2π√e (3e^πa + 1) γ_0 + π^3/2 (-4e^πa + e^2πa - 1)].

We are interested in the vicinity of α = 2/π and a → ∞. In this limit,

e^πa = 8 (√π - √e γ_0) / [π^3/2 (α - 2/π)].

The corresponding chirality is

χ = -sgn(α) (π^2/4) e^-(πa+1) = -sgn(α) [π^7/2 / (32 e (√π - √e γ_0))] (α - 2/π),

and hence, integrating χ over α, the chiral part of the energy per particle becomes

F_χ = -(π^2/8) e^-(πa+1) (α - 2/π) = -[π^7/2 / (64 e (√π - √e γ_0))] (α - 2/π)^2 ≈ -0.370154 (α - 2/π)^2.

This derivation reproduces Eq. <ref> of the main text.

§ REFERENCES

[takhtadzhan1979quantum] L. A. Takhtadzhan and L. D. Faddeev, "The quantum method of the inverse problem and the Heisenberg XYZ model," Russian Mathematical Surveys 34, 11 (1979). doi:10.1070/RM1979v034n05ABEH003909
[faddeev1984spectrum] L. D. Faddeev and L. A. Takhtadzhyan, "Spectrum and scattering of excitations in the one-dimensional isotropic Heisenberg model," Journal of Soviet Mathematics 24, 241 (1984). doi:10.1007/BF01087245
[yang1966oneI] C. N. Yang and C. P. Yang, "One-dimensional chain of anisotropic spin-spin interactions. I. Proof of Bethe's hypothesis for ground state in a finite system," Phys. Rev. 150, 321 (1966). doi:10.1103/PhysRev.150.321
[yang1966oneII] C. N. Yang and C. P. Yang, "One-dimensional chain of anisotropic spin-spin interactions. II. Properties of the ground-state energy per lattice site for an infinite system," Phys. Rev. 150, 327 (1966). doi:10.1103/PhysRev.150.327
[yang1966oneIII] C. N. Yang and C. P. Yang, "One-dimensional chain of anisotropic spin-spin interactions. III. Applications," Phys. Rev. 151, 258 (1966). doi:10.1103/PhysRev.151.258
[cardy1986logarithmic] J. L. Cardy, "Logarithmic corrections to finite-size scaling in strips," Journal of Physics A: Mathematical and General 19, L1093 (1986). doi:10.1088/0305-4470/19/17/008
[calabrese2009entanglement] P. Calabrese and J. Cardy, "Entanglement entropy and conformal field theory," Journal of Physics A: Mathematical and Theoretical 42, 504005 (2009). doi:10.1088/1751-8113/42/50/504005
[medeiros1991lanczos] D. Medeiros and G. G. Cabrera, "Lanczos calculation for the S=1/2 antiferromagnetic Heisenberg chain up to N=28 spins," Phys. Rev. B 43, 3703 (1991). doi:10.1103/PhysRevB.43.3703
[affleck1989critical] I. Affleck, D. Gepner, H. J. Schulz, and T. Ziman, "Critical behaviour of spin-s Heisenberg antiferromagnetic chains: analytic and numerical results," Journal of Physics A: Mathematical and General 22, 511 (1989). doi:10.1088/0305-4470/22/5/015
[fuhringer2008DMRG] M. Führinger, S. Rachel, R. Thomale, M. Greiter, and P. Schmitteckert, "DMRG studies of critical SU(N) spin chains," Annalen der Physik 520, 922 (2008). doi:10.1002/andp.20085201204
[haldane1983nonlinear] F. D. M. Haldane, "Nonlinear field theory of large-spin Heisenberg antiferromagnets: Semiclassically quantized solitons of the one-dimensional easy-axis Néel state," Phys. Rev. Lett. 50, 1153 (1983). doi:10.1103/PhysRevLett.50.1153
[takahashi1991correlation] M. Takahashi, "Correlation length and free energy of the S=1/2 XXZ chain in a magnetic field," Phys. Rev. B 44, 12382 (1991). doi:10.1103/PhysRevB.44.12382
[granet2019analytical] E. Granet, J. L. Jacobsen, and H. Saleur, "Analytical results on the Heisenberg spin chain in a magnetic field," Journal of Physics A: Mathematical and Theoretical 52, 255302 (2019). doi:10.1088/1751-8121/ab1f97
[kashurnikov1999quantum] V. A. Kashurnikov, N. V. Prokof'ev, B. V. Svistunov, and M. Troyer, "Quantum spin chains in a magnetic field," Phys. Rev. B 59, 1162 (1999). doi:10.1103/PhysRevB.59.1162
[kapustin2019absence] A. Kapustin and L. Spodyneiko, "Absence of energy currents in an equilibrium state and chiral anomalies," Phys. Rev. Lett. 123, 060601 (2019). doi:10.1103/PhysRevLett.123.060601
[zou2022modular] Y. Zou, B. Shi, J. Sorce, I. T. Lim, and I. H. Kim, "Modular commutators in conformal field theory," Phys. Rev. Lett. 129, 260402 (2022). doi:10.1103/PhysRevLett.129.260402
[fan2022from] R. Fan, "From entanglement generated dynamics to the gravitational anomaly and chiral central charge," Phys. Rev. Lett. 129, 260403 (2022). doi:10.1103/PhysRevLett.129.260403
[hellerman2021quantum] S. Hellerman, D. Orlando, and M. Watanabe, "Quantum information theory of the gravitational anomaly," arXiv:2101.03320 [hep-th] (2021).
[ji2022unified] W. Ji and X.-G. Wen, "A unified view on symmetry, anomalous symmetry and non-invertible gravitational anomaly," arXiv:2106.02069 [cond-mat.str-el] (2022).
[wang2022emergence] R. Wang, Y. Zou, and G. Vidal, "Emergence of Kac-Moody symmetry in critical quantum spin chains," Phys. Rev. B 106, 115111 (2022). doi:10.1103/PhysRevB.106.115111
[ginsparg1988curiosities] P. Ginsparg, "Curiosities at c = 1," Nuclear Physics B 295, 153 (1988). doi:10.1016/0550-3213(88)90249-0
[dijkgraaf1989operator] R. Dijkgraaf, C. Vafa, E. Verlinde, and H. Verlinde, "The operator algebra of orbifold models," Communications in Mathematical Physics 123, 485 (1989).
[chen2017from] X. Chen, A. Roy, J. C. Y. Teo, and S. Ryu, "From orbifolding conformal field theories to gauging topological phases," Phys. Rev. B 96, 115447 (2017). doi:10.1103/PhysRevB.96.115447
[verresen2021gapless] R. Verresen, R. Thorngren, N. G. Jones, and F. Pollmann, "Gapless topological phases and symmetry-enriched quantum criticality," Phys. Rev. X 11, 041059 (2021). doi:10.1103/PhysRevX.11.041059
[ke2022universal] K. Wang and T. A. Sedrakyan, "Universal finite-size amplitude and anomalous entanglement entropy of z=2 quantum Lifshitz criticalities in topological chains," SciPost Phys. 12, 134 (2022). doi:10.21468/SciPostPhys.12.4.134
[affleck1986universal] I. Affleck, "Universal term in the free energy at a critical point and the conformal anomaly," Phys. Rev. Lett. 56, 746 (1986). doi:10.1103/PhysRevLett.56.746
[gulden2016universal] T. Gulden, M. Janas, Y. Wang, and A. Kamenev, "Universal finite-size scaling around topological quantum phase transitions," Phys. Rev. Lett. 116, 026402 (2016). doi:10.1103/PhysRevLett.116.026402
[wang2020universal] K. Wang and T. A. Sedrakyan, "Universal finite-size scaling around tricriticality between topologically ordered, symmetry-protected topological, and trivial phases," Phys. Rev. B 101, 035410 (2020). doi:10.1103/PhysRevB.101.035410
[tsvelik1990incommensurate] A. M. Tsvelik, "Incommensurate phases of quantum one-dimensional magnetics," Phys. Rev. B 42, 779 (1990). doi:10.1103/PhysRevB.42.779
[mkhitaryan2006mean] V. V. Mkhitaryan and T. A. Sedrakyan, "Mean-field theory for Heisenberg zigzag ladder: Ground state energy and spontaneous symmetry breaking," Annales Henri Poincaré 7, 1579 (2006). doi:10.1007/s00023-006-0294-4
[mkhitaryan2008next] V. V. Mkhitaryan and A. G. Sedrakyan, "Next-nearest-neighbor spin-spin and chiral-spin correlation functions in a generalized XXX chain," Phys. Rev. B 77, 035111 (2008). doi:10.1103/PhysRevB.77.035111
[takahashi1997thermodynamical] M. Takahashi, "Thermodynamical Bethe ansatz and condensed matter," in Conformal Field Theories and Integrable Models, edited by Z. Horváth and L. Palla (Springer, Berlin, Heidelberg, 1997), pp. 204-250.
[arnaudon2000integrable] D. Arnaudon, R. Poghossian, A. Sedrakyan, and P. Sorba, "Integrable chain model with additional staggered model parameter," Nuclear Physics B 588, 638 (2000). doi:10.1016/S0550-3213(00)00409-0
[arnaudon2004multi] D. Arnaudon, A. Sedrakyan, and T. Sedrakyan, "Multi-leg integrable ladder models," Nuclear Physics B 676, 615 (2004). doi:10.1016/j.nuclphysb.2003.11.004
[sedrakyan2001staggered] T. Sedrakyan, "Staggered anisotropy parameter modification of the anisotropic t-J model," Nuclear Physics B 608, 557 (2001). doi:10.1016/S0550-3213(01)00272-3
[ambjorn2001integrable] J. Ambjorn, D. Arnaudon, A. Sedrakyan, T. Sedrakyan, and P. Sorba, "Integrable ladder t-J model with staggered shift of the spectral parameter," Journal of Physics A: Mathematical and General 34, 5887 (2001). doi:10.1088/0305-4470/34/30/301
[arnaudon2001generalization] D. Arnaudon, A. Sedrakyan, T. Sedrakyan, and P. Sorba, "Generalization of the U_q(gl(n)) algebra and staggered models," Letters in Mathematical Physics 58, 209 (2001). doi:10.1023/A:1014504526934
[sedrakyan2009fermionic] T. A. Sedrakyan and A. V. Chubukov, "Fermionic propagators for two-dimensional systems with singular interactions," Phys. Rev. B 79, 115129 (2009). doi:10.1103/PhysRevB.79.115129
[sedrakyan2012composite] T. A. Sedrakyan, A. Kamenev, and L. I. Glazman, "Composite fermion state of spin-orbit-coupled bosons," Phys. Rev. A 86, 063639 (2012). doi:10.1103/PhysRevA.86.063639
[sedrakyan2014absence] T. A. Sedrakyan, L. I. Glazman, and A. Kamenev, "Absence of Bose condensation on lattices with moat bands," Phys. Rev. B 89, 201112 (2014). doi:10.1103/PhysRevB.89.201112
[sedrakyan2015statistical] T. A. Sedrakyan, V. M. Galitski, and A. Kamenev, "Statistical transmutation in Floquet driven optical lattices," Phys. Rev. Lett. 115, 195301 (2015). doi:10.1103/PhysRevLett.115.195301
[sedrakyan2015spontaneous] T. A. Sedrakyan, L. I. Glazman, and A. Kamenev, "Spontaneous formation of a nonuniform chiral spin liquid in a moat-band lattice," Phys. Rev. Lett. 114, 037203 (2015). doi:10.1103/PhysRevLett.114.037203
[wang2018chern] R. Wang, B. Wang, and T. A. Sedrakyan, "Chern-Simons fermionization approach to two-dimensional quantum magnets: Implications for antiferromagnetic magnons and unconventional quantum phase transitions," Phys. Rev. B 98, 064402 (2018). doi:10.1103/PhysRevB.98.064402
[maiti2019fermionization] S. Maiti and T. Sedrakyan, "Fermionization of bosons in a flat band," Phys. Rev. B 99, 174418 (2019). doi:10.1103/PhysRevB.99.174418
[wang2022emergent] R. Wang, Z. Y. Xie, B. Wang, and T. Sedrakyan, "Emergent topological orders and phase transitions in lattice Chern-Simons theory of quantum magnets," Phys. Rev. B 106, L121117 (2022). doi:10.1103/PhysRevB.106.L121117
[wei2023chiral] C. Wei and T. A. Sedrakyan, "Chiral spin liquid state of strongly interacting bosons with a moat dispersion: A Monte Carlo simulation," Annals of Physics 456, 169354 (2023). doi:10.1016/j.aop.2023.169354
[wang2023excitonic] R. Wang, T. A. Sedrakyan, B. Wang, L. Du, and R.-R. Du, "Excitonic topological order in imbalanced electron-hole bilayers," Nature 619, 57 (2023). doi:10.1038/s41586-023-06065-w
[feynman1939forces] R. P. Feynman, "Forces in molecules," Phys. Rev. 56, 340 (1939). doi:10.1103/PhysRev.56.340
[giamarchi2003quantum] T. Giamarchi, Quantum Physics in One Dimension (Oxford University Press, 2003). doi:10.1093/acprof:oso/9780198525004.001.0001
[ashcroft2022solid] N. Ashcroft and N. Mermin, Solid State Physics (Cengage Learning, 2022).
| http://arxiv.org/abs/2312.16660v1 | {
"authors": [
"Chenan Wei",
"Vagharsh V. Mkhitaryan",
"Tigran A. Sedrakyan"
],
"categories": [
"cond-mat.str-el",
"cond-mat.stat-mech",
"nlin.SI"
],
"primary_category": "cond-mat.str-el",
"published": "20231227181101",
"title": "Unveiling chiral phases: Finite-size scaling as a probe of quantum phase transition in symmetry-enriched $c=1$ conformal field theories"
} |
Ensemble Learning to Assess Dynamics of Affective Experience Ratings and Physiological Change

All authors contributed equally to this work.

Felix Dollack1, Kiyoshi Kiyokawa1, Huakun Liu1, Monica Perusquia-Hernandez1, Chirag Raman2, Hideaki Uchiyama1, Xin Wei1
1Nara Institute of Science and Technology
2Delft University of Technology
{felix.d, kiyo, liu.huakun.li0, m.perusquia, hideaki.uchiyama, wei.xin.wy0}@is.naist.jp, [email protected]

January 14, 2024
=========================================================================================================================================================================================================================================================================================================================

The congruence between affective experiences and physiological changes has been a debated topic for centuries. Recent technological advances in measurement and data analysis provide hope to solve this epic challenge. Open science and open data practices, together with data analysis challenges open to the academic community, are also promising tools for solving this problem. In this entry to the Emotion Physiology and Experience Collaboration (EPiC) challenge, we propose a data analysis solution that combines theoretical assumptions with data-driven methodologies. We used feature engineering and ensemble selection. Each predictor was trained on subsets of the training data that would maximize the information available for training. Late fusion was used with an averaging step. We chose to average considering a "wisdom of crowds" strategy. This strategy yielded an overall RMSE of 1.19 on the test set. Future work should carefully explore whether our assumptions are correct, as well as the potential of weighted fusion.

Keywords: affective computing, continuous ratings, biosignal processing, machine learning, data analysis challenge

§ INTRODUCTION

Understanding human emotion is instrumental for applications in mental healthcare, education, and communication <cit.>. These applications aim to automatically assess and generate affective cues by relying on an assumed relationship between affective experience and physiological changes. However, the debate on the precedence of body changes or subjective experience started in the previous century and remains current <cit.>. Recent research has discussed whether demand characteristics bias our understanding of the relationship between facial expression and affective experience <cit.>, and has explored the relationship between dynamic Autonomic Nervous System responses and affective experiences <cit.>. Physiological sensing technologies have been popular in studying the physiological changes correlating with affective experiences <cit.>. Each physiological measurement type gives a different piece of information regarding the functioning of the sympathetic and parasympathetic nervous systems <cit.>, which has made multidimensional dataset collection popular. Traditional data analysis techniques require extensive knowledge of physiological characteristics, signal processing, and domain knowledge in the affective sciences. This domain knowledge gave birth to hand-crafted feature engineering that improves data interpretability and reduces the number of comparisons to be made when analyzing the data.

Recent advances in Machine Learning (ML) and data-driven analyses have brought a new perspective. Purely data-driven analyses with end-to-end automated processing have become popular <cit.>.
In end-to-end approaches, a machine learning network learns an intermediate representation of the input, thereby reducing manual work and potentially enhancing the results <cit.>. However, the evidence does not always support this claim. A previous study showed that convolutional and recurrent neural networks yielded better results than other state-of-the-art methods <cit.>. Another study used a deep-learning approach to estimate momentary emotional states from multi-modal physiological data and reported a higher correlation than traditional methods. Still, their mean absolute error (MAE) was higher (a lower MAE is better) <cit.>. Finally, another study found that end-to-end processes are suitable for predicting stress states with abrupt changes, but not as good when assessing subtle affective states like enjoyment <cit.>. Hence, end-to-end learning only provides a marginal improvement over feature engineering for physiological signal-based affect recognition. This is different from camera-based recognition, where performance is radically improved. One possible explanation is the limited amount of physiological data publicly available. Therefore, public datasets and multi-laboratory collaborations are necessary to assess the effectiveness of different training methods and cross-validation strategies.

The EPiC challenge aims to overcome the limitations in data availability and motivates researchers to work on the affect-embodiment coherence problem. Our team used theory-driven analysis and data engineering techniques to address the EPiC challenge. We opted for feature engineering and ensemble learning for our final submission. Also, we report an exploratory analysis validating our assumptions for the challenge submission.

§ RELATED WORK

ML has been used to model emotion recognition mechanisms from data following the public release of benchmark databases. The basic procedure of classical ML-based methods consists of four steps: physiological signal collection while eliciting participants' emotions, feature extraction from the signals, training a classification model with the features, and emotion recognition based on the trained model <cit.>. Research issues include the design of discriminative features and the selection of the optimal classification technique. For instance, hypothesis testing can be performed over some features, followed by a predictive model with feature selection to check whether the tested features remain relevant when all features are considered together <cit.>. It has been suggested that group synchrony improved arousal and valence classification <cit.>. Electrocardiograms (ECG) and Electrodermal Activity (EDA) have been used as tabular data with AutoGluon-Tabular to classify arousal and valence across individuals and datasets <cit.>, with accuracy similar to previous works (around 56-62%, respectively). Nevertheless, subject-independent classification remains only slightly above chance level.

A crucial challenge surrounding continuous-time annotation of emotions is the lag between observed features and the reported emotion measures <cit.>. This lag arises from the time the rater requires to provide feedback about the experienced emotion. Such a temporal misalignment between features and labels has consequences for ML methods. Consequently, several compensation techniques have been investigated <cit.>.
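As a forward illustration of the correlation-based variant of such compensation, discussed next, the sketch below (ours; the signals are synthetic and the candidate-lag grid is an illustrative choice) estimates a rater's reaction lag by maximizing the Pearson correlation between time-shifted features and the rating trace:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 20                                    # annotation rate (Hz)
t = np.arange(0.0, 60.0, 1.0 / fs)
feature = np.sin(2 * np.pi * 0.1 * t) + 0.1 * rng.normal(size=t.size)
rating = np.interp(t - 2.0, t, np.sin(2 * np.pi * 0.1 * t))  # rater reacts 2 s late

lags = np.arange(0.0, 5.0, 0.05)           # candidate lags (s)
corr = [np.corrcoef(np.interp(t - lag, t, feature), rating)[0, 1] for lag in lags]
print("estimated reaction lag:", lags[int(np.argmax(corr))], "s")  # ~2.0
```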
These compensation methods involve estimating the reaction lag from the data by maximizing the correlation coefficient <cit.> or the mutual information between the multimodal features and the emotional ratings <cit.>. Others have used a recurrent neural network to handle the asynchronous dependencies <cit.>. Deep learning (DL) has also been used in affective computing. Handcrafted feature design is not always necessary in DL <cit.>, and fewer modalities seem to be required to achieve equal performance. For example, <cit.> presented a DL approach to estimate momentary emotional states from multi-modal physiological data. The modalities used included respiration, ECG, electromyography (EMG), EDA, and acceleration. The best emotion classification was achieved by a traditional method with a 79% F1-score when all four physiological modalities were used. In contrast, using only two modalities, DL achieved a 78% F1-score. Furthermore, there are several fusion strategies for multi-modal data: feature-level fusion, decision-level fusion, model-level fusion, and hybrid-level fusion. Late fusion by averaging class probabilities has performed well in the past <cit.>. In the case of continuous detection of valence and arousal, a previous work obtained 0.43 and 0.59 RMSE for valence and arousal, respectively <cit.>, on the WESAD dataset. When dividing the continuous annotation into binary valence-arousal categories (high-low), another group of researchers reported subject-independent accuracies of 76.37% and 74.03% for valence and arousal, respectively <cit.>, on the Continuously Annotated Signals of Emotion (CASE) dataset <cit.>.

§ CHALLENGE CORPORA

The challenge corpus is an open dataset in which six physiological signals were collected while 30 participants (15 female, age range: 22-37 years) watched eight videos <cit.>. The videos aimed to elicit a range of emotions and were rated with continuous self-reported valence and arousal in the range of 0.5 to 9.5 using a joystick. Visual feedback was provided using the Self-Assessment Manikin <cit.>. Two videos were chosen per quadrant in the valence-arousal space, often called the affect grid <cit.>, and were presented in pseudo-random order to the participants. <ref> shows the video distribution. The physiological signals were logged at 1000 Hz, and the continuous annotation was done at 20 Hz. The physiological sensors included are:

* Cardiac activity as measured from Electrocardiography (ECG) and Photoplethysmography (BVP).
* Muscle activity (EMG) recorded from three muscles: the corrugator supercilii (emg_coru), the zygomaticus major (emg_zygo), and the trapezius (emg_trap).
* Electrodermal activity (EDA) measured from the non-dominant hand.
* Respiration (RSP) recorded from the chest.
* Skin Temperature (SKT) recorded from the little finger of the non-dominant hand.

For the EPiC Challenge, the dataset was arranged in four scenarios to test four assumptions about the relationship between affective experiences and embodied cues. Each scenario is divided into training and test sets, as prepared by the organizers [See: https://github.com/Emognition/EPiC-2023-competition].

§.§ Across-time scenario

This scenario evaluates subject-dependent and affective-context-dependent model performance. The model is trained and tested over different durations of one data file (sub_vid). In this scenario, each of the 240 data files, that is, 30 participants watching eight videos, is divided into training and test parts based on the time series.
For each data file, the earlier part is the training data, and the latter is the test data. The training and test data are not consecutive but are spaced by an unknown length of time. The length of the training data ranged from 48 s to 127 s depending on the size of each video, with an average of 88 s. All test data files were 50 s in length.

§.§ Across-subject scenario

This scenario evaluates subject-independent model performance. The model is trained on data from some participants and tested on data from a different set of unseen participants to verify the model's ability to generalize to new people. In this scenario, the data of 30 participants were divided into five groups. Each group contains six participants watching eight videos, for a total of 48 data files. This scenario consists of five folds, following a cross-validation strategy. In each fold, the data of four groups of participants were set as the training data (192 data files in total). The 48 data files from the remaining six participants were set as the test data. The length of the training data files ranged from 50 s to 128 s, with an average of 90 s. All test data files lasted 50 s.

§.§ Across-elicitor scenario

This scenario evaluates affective-context-independent model performance. The model is trained on data from several affective contexts and then tested on data from a different affective context. Each affective context represents one quadrant in the valence-arousal affect grid. There were two elicitors (i.e., videos) per affective context. This verifies whether the model can generalize from the physiological signals triggered by one affective context to those triggered by another. In this scenario, the data from eight videos were divided into four groups, according to the videos' affective context. This resulted in four categories: low valence, high arousal; high valence, high arousal; high valence, low arousal; and low valence, low arousal. Each group contains 60 data files, which corresponds to 30 participants with two videos each. The two videos in one group are considered to trigger the same type of affectivity. Adopting a cross-validation strategy, this scenario contains four folds. In each fold, three groups, totaling 180 data files, were chosen as the training data for the challenge. The remaining two videos, for a total of 60 data files, were chosen as the test data.

§.§ Across-version scenario

This scenario evaluates affective-context-dependent model performance. The model is trained on data from a specific affective state instantiation and then tested on the other version of the same affective contexts to verify the model's generalizability across similar affective contexts. The data from eight videos were divided into two groups. Each group contains 30 users watching four video types covering all quadrants in the affect grid, for a total of 120 data files. This scenario consists of two folds. In each fold, one group is the training data, while the other is the test data.

§ TACKLING THE CHALLENGE

§.§ Preprocessing

We used NeuroKit2 to preprocess the physiological signals and to extract features <cit.>. In the case of EMG, we opted to write a custom preprocessing pipeline based on <cit.> in combination with NeuroKit. The EMG pipeline to clean the signal consisted of a series of notch filters at frequencies of 60, 120, 180, and 240 Hz with a notch width of 3 Hz, followed by a bandpass filter with cutoff frequencies of 5 Hz and 250 Hz, and a detrending step.
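A sketch of this filtering stage is given below. It is ours, not the authors' code; it assumes the 1000 Hz sampling rate of the dataset, and the Butterworth filter order is an illustrative choice:

```python
import numpy as np
from scipy import signal

FS = 1000  # sampling rate of the raw signals (Hz)

def clean_emg(emg):
    """Notch cascade (60/120/180/240 Hz, 3 Hz wide), 5-250 Hz bandpass, detrend."""
    out = np.asarray(emg, dtype=float)
    for f0 in (60, 120, 180, 240):
        b, a = signal.iirnotch(f0, Q=f0 / 3.0, fs=FS)  # Q = f0 / bandwidth
        out = signal.filtfilt(b, a, out)
    sos = signal.butter(4, [5, 250], btype="bandpass", fs=FS, output="sos")
    return signal.detrend(signal.sosfiltfilt(sos, out))
```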
A z-transform is applied to the clean signal before calculating the root mean square (RMS) over windows of 100 ms. To further smoothen the envelope, a Savitzky-Golay filter of third order with a length of 1 s was applied. This cleaned signal was then input to NeuroKit's amplitude function to keep the returned data structure consistent for NeuroKit's analyze function. NeuroKit's analyze function was used to extract the features listed in <ref>. Feature extraction was performed over the whole signal using windows of different sizes, as shown in <ref>. These window sizes are specified in samples at 1 kHz. The window size for ECG and PPG is longer in order to capture heart-rate variability. A similar reason applies to respiration. Regarding EDA, a medium-sized window is recommended to decompose the signal into tonic and phasic components. In contrast, the EMG window is shorter, because changes in facial-expression EMG can happen in the order of milliseconds. As a curious fact, we found a heart-rate artifact in the trapezius EMG. This artifact might be misinterpreted as an EMG feature, but we decided to leave it in because we did not formally distinguish between data types when modeling the data. The feature vectors were sampled at 20 Hz to match the sampling frequency of the annotations. Additionally, the preprocessed (clean) data temporally surrounding each annotation datapoint were flattened and input as additional features to the modeling block.

§.§ Model training

We used a consistent architecture across all four scenarios, but adopted unique strategies for designing the input of model training and generating predictions to accommodate the distinct requirements of the various scenarios. For the architecture, we employed AutoGluon, an open-source AutoML framework developed by AWS, to train our models <cit.>. This framework expedites the development of machine learning models by automating model training, hyperparameter optimization, and model selection and ensembling. AutoGluon-Tabular fits a total of 11 models, including gradient boosting methods (CatBoost, LightGBM, LightGBMLarge, LightGBMXT, XGBoost), extra trees (ExtraTreesMSE), the K-nearest neighbors algorithm (KNeighborsDist, KNeighborsUnif), neural networks (NeuralNetFastAI, NeuralNetTorch), and random forests (RandomForestMSE). Furthermore, a weighted ensemble model (WeightedEnsemble_L2) is fitted and employed to combine the previously trained models for generating predictions. To achieve optimal performance with AutoGluon, we set the parameter presets to `best_quality', allowing AutoGluon to automatically construct robust model ensembles while allocating sufficient training time.

§.§.§ Across-time scenario

We aimed to capture the unique characteristics and nuances of each subject's emotional responses and the specific stimuli embedded within the emotional context. Therefore, we trained the model on discrete datasets originating per participant and per video. In this scenario, both training and test sets comprise 240 subsets. The training-test pairs were all collected from the same pool of participants and emotion elicitors and exhibit a one-to-one correspondence.
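A minimal sketch of this per-(participant, video) training loop with the AutoGluon-Tabular API is given below. It is ours, not the authors' code: the file layout, the SUBJECTS and VIDEOS id lists, and the column names are hypothetical placeholders; only the predictor call and the `best_quality' preset follow what the text describes.

```python
import pandas as pd
from autogluon.tabular import TabularPredictor

TARGETS = ("arousal", "valence")
for sub in SUBJECTS:      # SUBJECTS, VIDEOS: placeholder id lists
    for vid in VIDEOS:
        df = pd.read_csv(f"features/train/sub_{sub}_vid_{vid}.csv")  # hypothetical layout
        for target in TARGETS:
            other = [c for c in TARGETS if c != target]
            TabularPredictor(
                label=target,
                eval_metric="root_mean_squared_error",
                path=f"models/s{sub}_v{vid}_{target}",
            ).fit(df.drop(columns=other), presets="best_quality")
```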
We trained 240 models, one on each training subset, and subsequently selected the corresponding model to yield predictions on the test sets.

§.§.§ Across-subject scenario

The substantial inter-subject variability in physiological responses to stimuli and the inherent limitations of self-report labels, such as subjectivity, bias, and emotional granularity, present a considerable challenge in developing effective one-size-fits-all models for affective computing. To generate plausible predictions, we assume that a single video should elicit similar emotions in most participants. Guided by this assumption, we trained a dedicated model for each video. In this scenario, every fold consists of 24 subjects in the training dataset and six subjects in the test dataset, with all participants having watched the same eight videos. We combined data from different subjects for each video as input when training the model. In total, we trained eight models and employed the respective model for affective state estimation when generating predictions for the corresponding test set files.

§.§.§ Across-elicitor scenario

In this scenario, using only the data from the three quadrants available for estimating arousal and valence could compromise our ability to accurately predict affective states associated with the missing quadrant. To mitigate this concern, we performed a meta-analysis of the training data. We first calculated the mean value of user ratings per video file in the training set to systematically categorize videos into the four affect grid quadrants. By doing so, we could make well-founded assumptions regarding each video's quadrant affiliation. Within this scenario, a total of eight videos were provided. By analyzing the composition of the test dataset across the four folds, we categorized the eight videos into four groups: (0, 3), (4, 21), (10, 22), and (16, 20), see <ref>. Among these ratings, it is evident that the video groups (0, 3) and (16, 20) belong to the high valence, high arousal (HV, HA) and low valence, high arousal (LV, HA) quadrants, respectively. The categorization of the other two groups is less apparent; therefore, our hypothesis relies on the video with the more prominent rating within each of the two video groups. Consequently, we assumed that group (0, 3) belongs to the (HV, HA) quadrant, (16, 20) to the (LV, HA) quadrant, (10, 22) to the (LV, LA) quadrant, and (4, 21) to the (HV, LA) quadrant. Based on this assumption, we employed only the two relevant quadrants to achieve sample balance and maximize the variance along the valence and arousal axes. For instance, when the videos in the test set belong to the (HV, HA) quadrant, we train the valence predictor on the dataset comprising videos from the (LV, LA) and (HV, LA) quadrants, and the arousal predictor on the dataset containing videos from the (LV, HA) and (LV, LA) quadrants. This approach effectively minimizes input bias and ensures more accurate emotional state estimations for the missing quadrant.

§.§.§ Across-version scenario

Given that instances of all the affective quadrants are available, albeit in only one version, we aimed to develop a general model that yields robust results by harnessing collective intelligence through late fusion. In other words, we assumed there is a "wisdom of crowds" effect when combining multiple weak classifiers into one. To this aim, we developed four separate models, each trained on a distinct set of four videos from the training dataset.
During the testing process, we input the preprocessed and feature-extracted data from the test set into each of these four models to generate predictions. Then we applied a late fusion strategy to obtain the final estimation. The four predictions, one from each model, were fused by calculating their mean predicted rating values.

§ VALIDATION RESULTS

The models were assessed using the root mean square error (RMSE) metric. A lower RMSE is better. It has the same units as the valence and arousal annotations. The final score for the EPiC challenge for our team was 1.19. Additionally, we detail the performance for each scenario on the test set, as provided by the workshop organizers. Our training was done on the full train set to maximize data availability. For each test data file, i.e., the data of one subject and one video, the RMSE is calculated for arousal and valence, respectively. Then, in each fold, the performance is assessed by averaging the RMSE values over all test data files. Similarly, the model performance in each scenario is evaluated by averaging all RMSE values within the scenario. The final RMSE result was obtained by calculating the mean score over all scenarios and the two prediction targets, i.e., arousal and valence. The scenario-level and fold-level RMSE values are shown in <ref>, <ref>, and <ref>.

§.§ Across-time scenario

The RMSE values of predicted arousal and valence are 0.91 and 0.95, with standard deviations of 0.83 and 0.93, respectively. Among the 240 test data files, the lowest RMSE values for arousal and valence are 0 and 0, and the highest are 3.87 and 4.33. For arousal and valence, respectively, 69% and 66% of the test results had an RMSE below 1. Overall, the prediction error for arousal is slightly lower than that for valence.

§.§ Across-subject scenario

The RMSE values for the predicted arousal and valence are 1 and 1.1, respectively, with standard deviations of 0.16 and 0.04 across the five folds. In each fold, the predicted arousal RMSE is consistently lower than the predicted valence RMSE. This difference is especially noticeable in fold 4, where the RMSE and standard deviation for predicted arousal are 0.74 and 0.48, respectively, in contrast to the valence RMSE and standard deviation, which are 1.13 and 0.86, respectively. Despite the seemingly positive results in this scenario, there is a limitation in how we predicted the final ratings. We assumed that training and testing across elicitors would help to better predict the ratings of people not seen in the dataset. However, in the real world, we do not have information about the type of stimuli used. Therefore, the performance will probably be reduced, as exemplified in the following section.

§.§ Across-elicitor scenario

Our model yielded the highest RMSE values, with the RMSE for arousal and valence being 1.44 and 1.42, respectively, and standard deviations of 0.51 and 0.55. We note that the high RMSE was due to poor prediction results in fold 0, where the RMSE values were twice those observed in other folds, reaching 2.29 and 2.36 for arousal and valence, respectively. By analyzing the data for this scenario, we noticed that a potential cause for the high RMSE lies in the significant deviation between the test data and the training data in fold 0. The training data do not include patterns similar to those in the test data. As shown in <ref>, the test data in fold 0 contains videos 16 and 20 in the upper left quadrant of the affective grid.
The rating patterns of arousal and valence differ from the data in the other three quadrants, which were used as training data. This suggests that the larger variations in ratings characteristic of negative, high-arousing emotions are not present in the other types of emotion. Furthermore, these results also demonstrate the reliance of our model on data similarity, indicating a weaker generalization capability for novel data patterns.

§.§ Across-version scenario

In this scenario, the prediction RMSE using our model is 1.30 for both arousal and valence, with standard deviations of 0.30 and 0.24, respectively. Although each of the two groups' data in this scenario covered all four emotional states, the cross-validation results revealed that our model's RMSE in fold 0 was 0.5 lower than in fold 1. This suggests that an appropriately balanced training set, encompassing all four emotional states, can significantly enhance the model's generalization capabilities.

§ REVISITING ASSUMPTIONS

§.§ Lag between physiological signals and ratings

When preparing the data to train our models, we assumed that the physiological changes happen at different speeds depending on the measurement metrics used. Here we investigate the effect of the time delay in reporting emotions. In particular, we followed the general procedure described by <cit.>. We shifted the features forward in time in steps of 0.005 s, up to a maximum of 0.05 s, and trained a Gated Recurrent Unit (GRU) model to predict arousal and valence. Note that the annotations were performed at 20 Hz while the physiological signals were sampled at 1000 Hz. Then, we experimented with using each individual signal as input to the model in isolation before combining all signals as input. The analysis results are in <ref>. The results suggest that predictive performance generally improves when accounting for annotation delays. However, the delay yielding the most empirical gains varies for each biosignal, and we often found several minima. A further investigation of the timing relationships between physiological change, experience, and annotation is needed to understand when the differences significantly disrupt the predictions.

§.§ The gradual nature of changes in emotion

Qualitatively, we noticed that the predictions from our models were characterized by more high-frequency changes than the annotations provided in the training data, which change more gradually over time. Consequently, we applied a moving average window comprising 10 samples, equivalent to a 2 Hz low-pass filter. The choice of the window length follows from the observation that the annotations were provided at 20 Hz, so the Nyquist frequency is 10 Hz: only changes up to 10 Hz can be measured with the joystick described in the dataset. Furthermore, we assumed people would not make more than two abrupt changes per second. Further investigation is required to establish whether emotion change is a gradual process, or whether the relative smoothness of the ground-truth ratings is an annotation artifact.

§.§ Single- and multi-label predictors

Considering that participants simultaneously rated their emotions for valence (X-axis) and arousal (Y-axis) using a two-dimensional joystick, we speculated on a potential connection between these two values, despite the orthogonal nature of the valence-arousal model's axes. To evaluate this hypothesis, we extracted 24 datasets from scenario 1, each containing data from the combination of six participants and four videos.
We compared the following approaches: (1) independent prediction of valence and arousal, (2) predicting valence first and then using both the physiological signals and the predicted valence values for arousal prediction, and (3) predicting arousal first and subsequently using the predicted arousal values for valence prediction. This comparison investigated any potential connections between valence and arousal predictions. As demonstrated in <ref>, the performance differences among these three strategies are minimal. As a result, owing to the slower training process associated with multi-label predictors compared to single-label ones, we ultimately chose to employ the independent prediction strategy for this competition.

§ DISCUSSION AND FUTURE DIRECTIONS

We introduced an attempt to address the EPiC Challenge. We predicted continuous valence and arousal ratings from several biosignals, across four scenarios. We used readily available algorithms, with our novelty being (a) the window choices for feature calculation; (b) the use of data around each annotation point; and (c) our use of theoretical assumptions to maximize data variance in each scenario's training. Our overall RMSE for the four validation scenarios was 1.19, as provided by the competition organizers. This result still has considerable room for improvement compared to previous work on predicting continuous valence and arousal annotations from physiological data, and can be used as a baseline for future regression studies on the CASE dataset.

As expected from the literature, fitting a personalized model to predict a single person's reaction at a future point (across-time) is an easier problem than the other scenarios. To extend our model to other, unseen people, we used information about the stimuli and capitalized on our knowledge of the affective context in which the data was collected. By training several models per stimulus, we reduced the RMSE in the across-subject scenario. However, this strategy is unlikely to work in the real world, as we typically would not have information about the stimulus. A similar approach was used in the across-elicitor scenario. By examining the results (<ref>), we hypothesize that high-arousal, low-valence emotions display abrupt physiological changes not present in other affective quadrants. Future work should explore whether this is consistently true and devise a method to tackle the lack of information during training. Furthermore, the results of the across-version validation signal that the affective messages a stimulus is expected to convey might not produce the same effect in different individuals. Future work should validate whether this is the case, and assess whether weighted late fusion provides improvements with respect to an averaging function. This would also validate or refute whether our bet on a "wisdom of crowds" classifier is suitable. Finally, future work should formally assess whether end-to-end methods outperform ensemble methods and feature engineering similar to those used in this work.

§ ETHICAL IMPACT STATEMENT

This research was a data analysis of the CASE dataset <cit.>. All data provided is anonymous and was obtained following the Declaration of Helsinki. Our results have several limitations. The sample size is only 30 people, and no mention of the cultural background of the participants is made in the dataset. The interpretation of the stimuli, and therefore the ratings and physiological changes, might differ from person to person.
Therefore, our results need to be replicated in other corpora. Moreover, we built our model based on certain assumptions that are described, but not formally validated. These assumptions might not yield the same performance in other datasets. Finally, our results are biased to better predict the situations in the dataset. Therefore, our model should be used with caution. | http://arxiv.org/abs/2312.16036v1 | {
"authors": [
"Felix Dollack",
"Kiyoshi Kiyokawa",
"Huakun Liu",
"Monica Perusquia-Hernandez",
"Chirag Raman",
"Hideaki Uchiyama",
"Xin Wei"
],
"categories": [
"cs.LG"
],
"primary_category": "cs.LG",
"published": "20231226125357",
"title": "Ensemble Learning to Assess Dynamics of Affective Experience Ratings and Physiological Change"
} |
It Is Time To Steer: A Scalable Framework for Analysis-driven Attack Graph Generation

Alessandro Palma ([email protected]), Sapienza University of Rome, Rome, Italy
Marco Angelini ([email protected]), Link Campus University, Rome, Italy; Sapienza University of Rome, Rome, Italy

January 14, 2024
=========================================================================================

In modern computer networks where sophisticated cyber attacks occur daily, a timely cyber risk assessment becomes paramount. Attack Graphs (AGs) represent the best-suited solution to model and analyze multi-step attacks on computer networks, although they suffer from poor scalability due to their combinatorial complexity. This paper introduces an analysis-driven framework for AG generation. It enables real-time attack path analysis before the completion of the AG generation, with a quantifiable statistical significance. We further accelerate the AG generation by steering it with the analysis query, supporting a novel workflow in which the analyst can query the system at any time. To show the capabilities of the proposed framework, we perform an extensive quantitative validation and present a realistic case study on networks of unprecedented size. It demonstrates the advantages of our approach in terms of scalability and fit to common attack path analyses.

Attack Graph, Attack Path Analysis, Progressive Computation, Statistical Significance, Computational Steering.

§ INTRODUCTION

In today's digital world, where attackers are becoming smarter and the number of different vulnerabilities keeps growing, cyber attacks are becoming more complex to mitigate and have a higher impact on organizations' infrastructures <cit.>. In this scenario, it is crucial for any organization to promptly assess the potential cyber risks its networks are exposed to. This involves identifying the vulnerabilities in the network's systems and quantifying the risk of their potential exploitation, which generally refers to the impact of one or more exploits and their likelihood of happening <cit.>. Given the growing complexity of attacks, such as multi-step attacks <cit.> (i.e., sequential exploits of different vulnerabilities among network hosts), it is essential to model the threats in a network. Among the possible threat models <cit.>, the Attack Graph (AG) <cit.> is a graph-based representation of the potential attack paths in a network, used to analyze its cyber risk. For example, an analyst may be interested in determining the most critical hosts based on the number of paths traversing them. These kinds of analyses are named attack path analyses, as they involve features of the attack paths <cit.>. Although AGs are very expressive attack models for attack path analysis, the first problem they show is the poorly scalable computation of the attack paths, even for networks of moderate size, since AGs may grow exponentially with the number of network hosts and vulnerabilities <cit.>. Currently, providing scalable approaches for attack path analysis is still an open problem, and this hinders the possibility of promptly providing security strategies to harden the network.
Different works addressed the problem of scalability of AG generation <cit.> and attack path analysis <cit.>. A second emerging problem is that they postulate a classic analysis workflow in which the generation step comes first and the analysis comes next, aggravating the poor scalability of the generation process as it slows down or completely stops the analysis phase in realistic scenarios. A final problem concerns the alignment between the current network situation and the related AG model: while the classic workflow permits attack path analysis on the complete set of attack steps, it also implies that any change in the network requires a re-computation of the whole set of attack paths to reflect such changes, exacerbating the scalability problem. These problems make it challenging to use AGs for today's dynamic and large-scale networks.

To overcome them and provide affordable analysis, we contribute a novel workflow for AG generation and attack path analysis. It leverages the concept of statistical significance <cit.> to express the degree of trustfulness of a partial AG, in which not all the attack steps are computed. According to the foundational aspects of progressive data analysis <cit.>, it progressively supports attack path analysis during the AG generation, allowing preventive analyses with quantitative, quality-controlled, statistically significant results. To improve the accuracy of analyses performed on partial AGs, we design a steering approach that automatically accelerates the AG generation based on the requested attack path analysis. Given a path analysis query from an analyst, it models the queried attack path features, extracts from them the features of each single attack step, and uses them to prioritize the generation of attack paths that answer the analysis query. We evaluate the speed-up of this process through a comprehensive scalability analysis performed on hundreds of experiments, varying both the network size and the vulnerability inventories. Finally, we present a realistic case study that shows the capabilities of the approach on networks whose size cannot be managed by state-of-the-art solutions.

Summarizing, this paper contributes: i) a novel framework for AG generation and attack path analysis, based on progressive data analysis; ii) a way to express statistical significance over partial AGs; iii) a novel approach to steer the AG generation based on attack path analysis queries; iv) a comprehensive validation through experimental evaluations for both statistically significant AG generation and the steering mechanism; v) a realistic case study demonstrating the possibility of managing large networks and conducting common attack path analyses.

§ PRELIMINARIES

In this section, we present the fundamental notions of attack graphs and attack path analysis. In particular, we define their components and introduce the risk model used for the attack path analysis.

§.§ Attack Graph Model Definition

An Attack Graph (AG) is a graph-based representation of the possible paths an attacker can exploit to intrude into and compromise an ICT network or, more generally, an ICT system. To generate it, two inputs are necessary <cit.>: (i) the network reachability graph (or network reachability matrix), and (ii) the network vulnerability inventory. A network reachability graph is a directed graph R=(H,E) where nodes are associated with network hosts h_1, ⋯, h_n ∈ H and edges represent reachability conditions between hosts (taking into account firewall and routing rules).
Thus, an edge e(h_1,h_2) ∈ E indicates that host h_1 can communicate with host h_2 through a direct link. A vulnerability inventory reports the list of network hosts with an associated set of vulnerabilities, typically obtained by combining vulnerability knowledge bases, such as the National Vulnerability Database (NVD) <cit.> by NIST <cit.>, with vulnerability scanners (e.g., Nessus <cit.>). With these inputs, AG modeling consists of the representation of all the dependencies between hosts and vulnerabilities that can be exploited by an attacker. Thus, the first step to model AGs is defining the semantics of attack nodes and edges to represent the structure of the attack steps. In this paper, we leverage the Topological Vulnerability Analysis (TVA) attack graph model <cit.>, where the nodes are security conditions and represent the attacker access privileges on a specific host, while the edges are exploit dependencies and represent the possible movements of the attacker in case of a successful vulnerability exploit. More formally:

An Attack Graph G=(V,E) is a directed multi-graph in which V is the set of security condition nodes and E is the set of labeled edges, where an edge e=(v_1,v_2,u) ∈ E indicates that the attacker can move from condition v_1 to condition v_2 by successfully exploiting vulnerability u.

An example of an AG is reported in Appendix <ref>. Once the AG model is defined, the next step is the Attack Graph Generation, which is the computation of the attack paths to analyze the cyber risks of the potential attacks. Attack paths represent sequences of compromised devices and vulnerabilities exploited during the attack, and are used to compute scores or metrics like the likelihood of success of an attack, its impact, or the difficulty of execution <cit.>.

An Attack Path AP=⟨ v_1,u_1,2,v_2,u_2,3, ⋯ v_n ⟩ is the ordered sequence of attacker states v_1, ⋯, v_n, interleaved by the sequence of vulnerabilities u_1,2, ⋯, u_n-1,n whose exploitation allows an attacker to move between consecutive states.

When generation ends, the resulting attack paths are used to estimate the risks of the different multi-step attacks, each one corresponding to a different path in the AG. The attack path risks may be used either to evaluate the overall exposure of the network (e.g., by aggregating the risks of the different paths to have a comprehensive risk evaluation) or to perform analyses according to specific requirements (e.g., analyzing which attacks may cause more damage to the network).

§.§ Risk Model Definition

To evaluate the cyber risks of an attack path, we leverage the risk methodology designed by Gonzalez-Granadillo et al. <cit.>. They consider CVSS standard metrics <cit.> to estimate the likelihood and impact of an attack path AP, and calculate the risk according to its standard definition, that is, the product of likelihood and impact <cit.>.

risk(AP) = likelihood(AP) · impact(AP)

In particular, the likelihood is calculated using the CVSS-3.1 exploitability metrics, which are Attack Vector (AV), Attack Complexity (AC), Privileges Required (PR), and User Interaction (UI). The impact is determined by the last node in the path, as it represents the attacker's final goal, and it considers the CVSS-3.1 impact metrics, which are Confidentiality (C), Integrity (I), and Availability (A). Likelihood, impact, and risk are all in the range [0,1]. More details about the risk model are available in Appendix <ref>.
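To make the risk model concrete, the sketch below computes the risk of a path from the CVSS-3.1 base metric weights. The per-step aggregation (product of normalized exploitability scores) and the normalization to [0,1] are our illustrative assumptions; the exact formulas of the adopted risk model are those detailed in the appendix, which may differ.

```python
# CVSS-3.1 base metric weights (unchanged scope), per the CVSS specification.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}
AC = {"L": 0.77, "H": 0.44}
PR = {"N": 0.85, "L": 0.62, "H": 0.27}
UI = {"N": 0.85, "R": 0.62}
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}

MAX_EXPL = 8.22 * 0.85 * 0.77 * 0.85 * 0.85   # ~3.89, most exploitable vuln
MAX_IMP = 1 - (1 - 0.56) ** 3                 # ~0.915, C/I/A all High

def likelihood(vuln):
    """Normalized CVSS exploitability of one vulnerability, in [0, 1]."""
    expl = 8.22 * AV[vuln["AV"]] * AC[vuln["AC"]] * PR[vuln["PR"]] * UI[vuln["UI"]]
    return expl / MAX_EXPL

def path_risk(path_vulns):
    """Risk = likelihood * impact (equation above).

    Assumption (ours): the path likelihood is the product of per-step
    likelihoods; the impact comes from the last vulnerability only,
    as stated by the risk model.
    """
    lik = 1.0
    for v in path_vulns:
        lik *= likelihood(v)
    last = path_vulns[-1]
    imp = 1 - (1 - CIA[last["C"]]) * (1 - CIA[last["I"]]) * (1 - CIA[last["A"]])
    return lik * (imp / MAX_IMP)

# e.g., a remote exploit followed by a local privilege escalation:
path = [{"AV": "N", "AC": "L", "PR": "N", "UI": "N", "C": "L", "I": "N", "A": "N"},
        {"AV": "L", "AC": "H", "PR": "L", "UI": "N", "C": "H", "I": "H", "A": "H"}]
print(round(path_risk(path), 3))
```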
According to the defined risk model, the risks of the different attack paths inform the analysis process and the consequent decisions on the mitigation strategies. For this reason, we evaluate AG expressiveness through the distribution of the likelihood and impact of its paths.

Let AP be an attack path and let u_1,2, ⋯∈ U_AP be the sequence of its vulnerabilities. We refer to vulnerability features as the CVSS metrics of a single vulnerability u_i,i+1∈ U_AP. In contrast, we refer to attack path features as the risk metrics of the attack path AP (for the defined risk model, they are likelihood and impact).

With this definition, we refer to attack path analysis as the answer to queries on attack path features. More formally:

An attack path Query Q is a query that specifies a range of values for one or more attack path features, potentially in conjunction or disjunction, to express the information need of a human analyst or an automatic process.

For example, the query Q={impact ≥ 0.9 ∧ likelihood<0.5} represents the need to look at improbable (<0.5) but very dangerous (≥0.9) potential attacks. Since generating an AG (i.e., enumerating all its paths) and analyzing attack paths is computationally expensive, we introduce a scalable framework in which the analysis can be performed before the completion of the AG generation, and the generation can be steered by the analysis queries.

§ OVERVIEW OF OUR APPROACH

The classic process of attack path analysis, first defined by Phillips and Swiler <cit.> and currently in use <cit.>, is composed of two principal milestones. Fig. <ref>-(a) depicts this classic workflow. The first milestone is the AG generation, and the second one is the analysis of all its attack paths. The foundation of this framework is the representation of all the possible attack steps to provide a comprehensive view of the potential attacker's movements on the network. Once the AG is generated, the subsequent step is the analysis of its paths, to study the attacks that can be put in place by an adversary. The problem of enumerating the attack paths is combinatorial in the number of edges, which is one of the main sources of their poor scalability. As a consequence, in the case of medium-large networks, the attack path analysis may never be performed because of this complexity.

In this paper, we propose a new framework for AG generation and analysis that leverages the fundamental concepts of progressive data analysis <cit.>, which we report in Fig. <ref>-(b). The main idea behind progressive data analysis is to produce partial results during execution, with increasing accuracy, to generate intermediary outputs while the data is still being processed, potentially targeting interactive times. In the scenario of AG generation and analysis, the framework we propose considers the computation of early partial results that give a coarse-grained approximation of the complete AG. At a certain instant t_significant (driven by the amount of processed data), the partial AG becomes statistically significant (SPAG), in the sense that it is a good statistical approximation of the complete AG. Statistical significance must take into account the AG components that may vary in different networks or even change over time. For this reason, it must be evaluated on different attacker conditions (privileges on hosts) and vulnerability distributions.
From this moment on, attack path analysis generates mature partial results, which reflect the final outcome within an acceptable margin of error, quantifiable through statistical indicators. This is the first advancement of the proposed framework, which can produce meaningful results to start exploring the attack paths at interactive time and continuously, allowing first observations and explorations without incurring the cost and waiting time of the full generation. As the progressive AG generation continues to run, there will be a certain instant t_stable when the statistical significance reaches a threshold that guarantees definitive partial results. In these cases, the attack path analysis produces results that no longer change substantially from the exact ones and, as such, support tasks like confirmatory analysis. This final part of the workflow runs until the complete AG is generated and/or the attack paths are analyzed. At the same time, the approach hints at the possibility of leaving out this part, saving computational resources due to the diminishing return of the generation.

This way of generating AGs allows the analysis of attack paths with preliminary data informed by quantifiable statistical indicators. However, given the exponential size of AGs with respect to the network and vulnerability inventory <cit.>, it may still require a long time to reach the stability of definitive partial results. For this reason, the proposed framework contributes a steering mechanism to accelerate the convergence of the AG generation to the final outcome of any analysis executed during the generation <cit.>. This mechanism consists of informing the next step of the generation with the current attack path analysis. For example, if the analysis query asks for the attack paths with the highest risk, then the steering mechanism prioritizes the generation of those paths that have high risk values. It corresponds to the red blocks at the bottom of Fig. <ref>-(b).

To automatically steer the generation with the analysis query, we cannot directly query the AG generator, as it is not driven by attack path features. We leverage Machine Learning (ML) models to learn the vulnerability features from the structure of the attack paths and consequently prioritize the generation by selecting vulnerabilities according to the learned features. For this reason, the first phase of AG generation must be agnostic to the attack path analysis (white blocks in Fig. <ref>-b), because it is used for collecting the initial balanced small set of attack paths that will be used to train the ML model and to label attack paths as answers to the query or not. Consequently, the ML model is trained with the collected labeled attack paths to predict the vulnerability features that are representative only of the attack paths that answer the query positively. Having obtained the intervals of vulnerability features that correspond to attack paths answering the query, they are used as input to the AG generator to prioritize the next generation of attack paths. This steering mechanism allows a faster convergence of the partial AG to the portion of the complete AG that answers the attack path analysis query.
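The overall workflow can be summarized with the following skeleton. It is an illustrative sketch, not the actual implementation: the three callables stand for the components detailed in the next sections, and their names are ours.

```python
def progressive_generation(sample_fn, stability_fn, steer_fn, max_iters=2000):
    """Illustrative skeleton of the progressive workflow.

    sample_fn(rules):       new attack paths from one random-walk sampling
                            step, restricted to the steering rules when given;
    stability_fn(old, new): attack path feature stability in [0, 1];
    steer_fn(paths):        steering rules learned from labeled paths,
                            or None while too few paths are labeled.
    """
    paths, rules = [], None
    for _ in range(max_iters):
        new_paths = sample_fn(rules)
        stab = stability_fn(paths, paths + new_paths) if paths else 0.0
        paths = paths + new_paths
        rules = steer_fn(paths) or rules   # (re)train the model when possible
        # Analyses may run on this partial AG at any time; `stab` tells
        # whether results should be treated as early, mature, or definitive.
        yield paths, stab
```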
In the next two sections, we describe the details of the statistically significant AG generation and the steering mechanism, along with the evaluation activities executed to demonstrate their validity.

§ STATAG: STATISTICALLY SIGNIFICANT ATTACK GRAPH GENERATION

In this section, we describe how to achieve a progressively convergent, statistically significant AG generation. It enables a new way of managing AGs, with the main advantage of allowing the exploration and analysis of attack paths in partial AGs without waiting for the complete generation, which is potentially very long due to its combinatorial complexity <cit.>. The core idea of this approach, named Statistically Significant Attack Graph Generation (StatAG), is to consider partial AGs whose statistical significance becomes gradually higher until it reaches the maximum value corresponding to the complete AG (i.e., the exact result). Fig. <ref> reports the workflow of StatAG.

The main activity during AG generation is the enumeration of its attack paths. While existing approaches <cit.> leverage common algorithms for graph traversal, i.e., BFS or DFS <cit.>, we generate attack paths using random walks <cit.> over the reachability graph. A random walk of length k is a stochastic process with random variables X_1,⋯,X_k such that, given a graph G=(V,E), X_i+1 is a vertex v ∈ V chosen uniformly at random from the neighbors of X_i. The rationale for choosing random walks to visit the graph is their capability of generating unbiased samples <cit.>, contrary to the classic traversal algorithms, which depend on the chosen starting node. As a consequence, random walks facilitate a quicker convergence of the path feature distributions of the partial AG to those of the complete AG <cit.>, as we show experimentally in Appendix <ref>. Following the sampling process, the next step involves constructing the corresponding attack path. Let us remember that the modeled AG is a multi-graph, therefore multiple edges (i.e., vulnerabilities) may exist between two nodes, although only one of the multi-edges can be part of an attack path, according to its definition (see Definition <ref>). When this happens, the construction of the attack path involves selecting one vulnerability uniformly at random, so that the total number of attack paths aligns with the number of sampled walks. The collection of attack paths constructed from the sampled walks defines what we name the partial AG.

Let G=(V,E) be a complete Attack Graph; then a Partial Attack Graph PAG_i=(V_i,E_i) is an attack graph composed of a subset of nodes V_i ⊂ V and edges E_i ⊂ E retrieved after i iterations of random walk sampling.

At this point, the approach iterates toward the generation of other partial AGs that are progressively merged, eventually reconstructing the complete AG. At the end of each iteration, we evaluate the statistical significance of the partial AG, which quantifies the degree of its approximation to the complete AG. If it is acceptable, then the partial AG can be used for the initial exploration of the AG, along with indicators communicating its degree of uncertainty.
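A sketch of a single sampling step may clarify the construction. For simplicity, it keeps reachability and vulnerability information in one networkx multigraph whose parallel edges carry a hypothetical `vuln` attribute; the naming and encoding are illustrative.

```python
import random
import networkx as nx

def sample_attack_path(ag: nx.MultiDiGraph, max_len: int = 10) -> list:
    """One random walk turned into an attack path (Definition 2).

    Nodes are attacker conditions; parallel edges carry vulnerabilities
    in a 'vuln' attribute (illustrative encoding). Each step picks the
    next node uniformly among the neighbors, then resolves the multi-edge
    by picking one of its vulnerabilities uniformly at random.
    """
    node = random.choice(list(ag.nodes))
    path = [node]
    for _ in range(max_len):
        neighbors = list(ag.successors(node))
        if not neighbors:
            break
        nxt = random.choice(neighbors)                        # uniform walk step
        vulns = [d["vuln"] for d in ag[node][nxt].values()]   # parallel edges
        path += [random.choice(vulns), nxt]                   # resolve multi-edge
        node = nxt
    return path
```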
To measure the statistical significance, we need to define the null hypothesis H_0, which is the claim that no relationship exists between the two sets of data being analyzed <cit.>. In the context of the proposed approach, the two sets of data are the attack path features of the partial AG and of the complete AG, and the statistical significance refers to the probability of rejecting the null hypothesis. To evaluate this probability, we need to measure the probability p of obtaining test results at least as extreme as the observed results (namely, the p-value) and the probability α of rejecting the null hypothesis when it is true (namely, the significance level <cit.>). According to these definitions, the partial AG is statistically significant when p ≤ α <cit.>, with the value of α commonly set to 0.05 <cit.>.

We evaluate the null hypothesis through the Kolmogorov-Smirnov (KS) test <cit.>, which is a statistical test to compare two data distributions and quantify the distance between them. In particular, we are interested in evaluating the distance between the attack path feature distributions of the complete and partial AGs. Let 𝒟_AG(x) and 𝒟_PAG(x) be the distributions of the attack path feature x according to the complete and partial AG, respectively. Then, the KS distance between the two distributions is:

KS(x) = sup_x |𝒟_AG(x) - 𝒟_PAG(x)|,

where sup_x is the supremum function <cit.>, which corresponds to the least upper bound of the distances between the two distributions of x. Using the KS distance as defined in Equation <ref>, we can finally define the statistical significance of partial AGs.

Let AG be the complete attack graph and let PAG_i be the partial attack graph generated at the i^th iteration. Let H_0: KS(x) > T be the null hypothesis, where KS(x) is the KS distance between the distributions of the attack path feature x for the complete and partial attack graph, and T a predefined distance threshold. Then, PAG_i is statistically significant for the attack path feature x if p ≤ α, with p the p-value of KS(x) and α the significance level.

Let us note that this definition is appropriate only for an a posteriori evaluation of the statistical significance, because it requires the complete attack graph AG. To enable the estimation of the statistical significance in a real-time application, we need a way to provide its indication during the progressive execution of the approach. For this purpose, we introduce the concept of Attack Path Feature stability (or simply stability) to quantify the variability of the partial results of an attack path feature gathered during the different iterations. To measure the stability of an attack path feature x, we consider the KS distance between the cumulative distribution of x up to iteration i-1 and the one up to iteration i. It indicates how much the new samples of iteration i vary the already sampled distribution, with the rationale that a higher KS distance corresponds to more unstable results, given the higher variability of the distribution.

Let PAG_i be the partial attack graph of the i^th iteration of the approach and let 𝒟_PAG_i(x) be the cumulative distribution of the attack path feature x in PAG_i. Then, the stability Δ of x over PAG_i is:

Δ_PAG_i(x) = 1 - sup_x |𝒟_PAG_i-1(x) - 𝒟_PAG_i(x)|

Given that KS distances are defined in the range [0,1], we subtract the distance from 1 to express the rationale that a higher stability value corresponds to a more significant partial AG. In the rest of this section, we validate the theoretical formulation of statistical significance and its correlation with the attack path feature stability.
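Both the KS distance of Equation <ref> and the stability can be computed directly with scipy's two-sample KS test, which returns exactly the supremum distance between two empirical cumulative distributions. A minimal sketch, assuming feature values are kept as flat arrays:

```python
from scipy.stats import ks_2samp

def stability(values_prev, values_curr):
    """Stability of an attack path feature between consecutive iterations.

    values_prev: feature samples accumulated up to iteration i-1
    values_curr: feature samples accumulated up to iteration i
    Returns (stability, p_value), with stability = 1 - KS distance.
    """
    ks_stat, p_value = ks_2samp(values_prev, values_curr)
    return 1.0 - ks_stat, p_value

# e.g., likelihoods observed so far vs. after the newest batch of paths:
# delta, p = stability(likelihoods[:-batch_size], likelihoods)
```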
§.§ StatAG Validation

To validate StatAG, we first present the experimental setting, and then we analyze the convergence of the vulnerability features, attack path features, and stability distributions.

§.§.§ Experimental Setting

We validate StatAG with an experimental setup designed to cover the different factors that affect AG scalability <cit.>. It consists of synthetic networks and vulnerability inventories, in which we vary the number of hosts, the number of vulnerabilities, the network topology, and the composition of vulnerabilities per host. Regarding the former, we considered configurations with up to 20 hosts and 50 vulnerabilities per host, which is the largest configuration for which all the paths (the complete AG, used as ground truth) can be computed in a reasonable amount of time (in the order of hundreds of hours) <cit.>. We then tested 3 network topologies, including mesh <cit.>, random <cit.>, and power law <cit.>. We considered 5 levels of heterogeneity of the vulnerabilities, varying from 0% (i.e., all vulnerabilities among the hosts are the same) to 100% (i.e., all vulnerabilities among the hosts are different), to increase the variability of the vulnerability features. Thus, for each network, we evaluated 15 different configurations, each varied in 100 statistical variations, for a total of 1500 experiments. We use Python and the scikit-learn machine learning library <cit.> for the implementation, and we run the experiments on a Linux server with an Intel(R) Xeon(R) Gold 6248 CPU at 2.50GHz and 256 GB of memory. All the materials, as well as the source code of all components, including the code used for the experimental evaluation, are publicly available[https://github.com/Ale96Pa/attack_graph_progressivehttps://github.com/Ale96Pa/attack_graph_progressive].

§.§.§ Results

We validate StatAG by studying the convergence of the partial AGs to the complete AG. For the sake of simplicity, in the rest of this section, we refer to the complete AG as ground truth (GT) to indicate that it corresponds to all the paths generated at the end of the classic generation process. The first validation step concerns the convergence of the vulnerability feature distributions of the partial AGs to the GT. Fig. <ref> reports the median KS distances during the entire execution of the approach. The convergence speed of the vulnerability feature distributions depends on the distance threshold T (see Definition <ref>). For example, considering T=0.1 a good approximation (corresponding to just a 10% approximation of the GT), 9 of the 10 features are statistically significant after only 100 iterations; the exception is the availability feature, which has a KS distance of 0.2 after 100 iterations. In particular, we can notice 3 different trends in all the plots of Fig. <ref>. The first one coincides with early partial results, where the KS distance from the GT is too high to perform analysis with a reasonable approximation but good for initial exploration. This trend lasts about the first 100 iterations. After 100 iterations, corresponding to 9% of the complete execution, the trend is flat until 1700 iterations on average, corresponding to 77% of the complete execution. This is the interval where mature partial results appear, which provide a 10% approximation of the GT for most vulnerability features, with the worst performance reached by the availability feature with a 20% approximation. Finally, in the last 23% of the execution, the KS distance becomes very close to 0, corresponding to the definitive partial results.
The long duration of the mature partial results, in contrast to the early and definitive ones, can be attributed to the overall limited variability of the vulnerability features according to the CVSS metrics, as documented by NIST <cit.>. As a consequence, the vulnerability variability is covered very quickly, in just 9% of the total number of iterations. The convergence trend of the vulnerability features to the GT shows the capability of the progressive approach to represent 90% of the vulnerability inventory after a few iterations. It is a good result for the analysis of the vulnerability inventory, but we need a further investigation concerning the attack path analysis. To this aim, the next validation step concerns the convergence trend of the attack path features of the partial AGs to the GT ones. Fig. <ref> reports the distribution trend of the KS distances between the partial AGs and the GT for the attack path features, i.e., likelihood and impact.

The trends in Fig. <ref> show a gradual convergence for both likelihood and impact, with the largest distance equal to 0.2 in the first 100 iterations, where the variability of the data is also higher (i.e., the box size of the boxplots). The KS distance distribution becomes less variable and with lower median values after about 100 iterations (9% of the execution) and until 1500 iterations (68% of the execution). This is the case of mature partial results, which approximate the attack path feature distributions to within 15% to 5% (i.e., KS distances from 0.15 to 0.05). After 1500 iterations and until the end, we can recognize definitive partial results, with a KS distance very close to 0. In summary, attack path features can be queried after a few iterations with a 20% approximation, gradually decreasing to a 5% approximation after 1500 iterations, with a trend similar to the vulnerability features (see Fig. <ref>).

In the final validation step, we furnish evidence demonstrating that the attack path feature stability (Equation <ref>) accurately reflects the trend of the statistical significance, so that it can be used as a real-time indicator of the convergence to the GT. To this aim, Fig. <ref> reports the trend of the stability for the performed experiments, where the trend of the median values is represented with a full-color hue, while the alpha-blended areas identify the variations between the upper and lower quartile values to report the statistical variability. The stability trend is clearly coherent with the KS distance trend of Fig. <ref>: during the first 100 iterations (9% of the execution), the stability indicates early partial results, as does the KS distance with the GT, with stability values lower than 0.85. Between iterations 100 and 1000 (46% of the execution), the stability reports an intermittent trend between 0.85 and 0.95, indicating the mature partial results, where the AG is gaining statistical significance. The same trend corresponds to the KS distance from the GT. Finally, we can identify the definitive partial results in the last 54% of the iterations, where the stability is very close to 1 and, correspondingly, the KS distance with the GT is very close to 0.
In conclusion, the analysis of the stability indicates that it is a good predictor of the statistical significance of the partial AGs, with the great advantage of being a metric that can be computed in real time at each iteration of the process. The StatAG validation showed the ability of the approach to generate the attack paths progressively, with three thresholds of significance (early, mature, and definitive) that inform about the convergence of the partial AG to the GT. This progressive approach represents an innovation in the Attack Graph community, where traditional solutions typically focus on the complete generation of AGs. However, when it comes to analyzing the complete AG for attack path analysis, this approach still requires enumerating all paths. To expedite the convergence towards the subset of the complete AG that addresses a particular attack path query, the next section introduces a steering mechanism designed to guide both the generation and the analysis processes.

§ STEERAG: STEERED ATTACK GRAPH GENERATION AND ANALYSIS

StatAG allows us to consider an approximation of the complete AG at every iteration of the process, before the actual completion of its generation. This only partially addresses the original problem of the poor scalability of AGs: while the convergence rate is the same as the random walk sampling, and different qualities of results can be used for different analyses (e.g., exploration, confirmation), it may still require a long time for an analysis query to get the exact result. While for certain analyses this would not be needed, it being sufficient to work with a suitable approximation, for others this characteristic would not be ideal. For this reason, we contribute a further improvement of the framework, central in its capability to merge analysis and generation and to let the second be guided by the first. It consists of accelerating the attack path analysis for dynamic and real-time environments through the design of a steering mechanism, reported in Fig. <ref>. We name this approach Steered Attack Graph Generation and Analysis (SteerAG).

According to the general framework (see Fig. <ref>), the steering process is activated once attack paths generated by StatAG are available and an attack path query for the analysis has been issued by a human user or an automatic process. The first step of the steering approach is the path features training, which consists of two activities. First, we label the attack paths coming from StatAG as “relevant” when their attack path features correspond to the query requirements and “not relevant” otherwise. As this operation is, in the worst case, linear in the number of attack paths generated up to the issuing of an analysis query, it does not represent a costly operation. When a sufficient number of attack paths is labeled, with this number depending on the ML model used (e.g., ten to fifty for decision trees, around 100 for neural networks), the second activity is the actual training of a binary ML-based classification model, where the attack paths are classified according to their assigned label, but the classification is based on the vulnerability features of the paths and not on characteristics of the whole path. In this way, we aim to discover the latent relations existing between all the relevant attack paths and their vulnerability characteristics, based on the paths available.
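A minimal sketch of this training step follows. The encoding (one training row per vulnerability occurrence, labeled with the relevance of the path it belongs to) is one plausible choice, and the attribute names, query format, and tree depth are ours for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

FEATURES = ["AV", "AC", "PR", "UI", "C", "I", "A"]   # numeric CVSS weights

def is_relevant(path, query):
    """1 if the path's features satisfy the query ranges, else 0."""
    return int(query["likelihood"][0] <= path.likelihood <= query["likelihood"][1]
               and query["impact"][0] <= path.impact <= query["impact"][1])

def train_steering_tree(paths, query):
    # One row per vulnerability occurrence; the label is the relevance of
    # the attack path it belongs to (one plausible encoding).
    X = np.array([[v[f] for f in FEATURES] for p in paths for v in p.vulns])
    y = np.array([is_relevant(p, query) for p in paths for _ in p.vulns])
    return DecisionTreeClassifier(max_depth=5).fit(X, y)   # depth is arbitrary
```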
Being capable of extracting this relation allows for steering the AG generator through the discovered intervals of vulnerability features. In our proposal, we use the decision tree classifier <cit.> as the ML model, for its properties that fit the steering approach well. Indeed, decision trees (i) can be trained quickly and therefore are suitable for progressive data analysis <cit.>, (ii) produce sufficiently powerful classification models from relatively small inputs, allowing their use early on during the progression <cit.>, and (iii) have a structure that can be easily translated into steering rules representing the features of the relevant data <cit.>.

Once the decision tree has been trained on the vulnerability features, the goal of the vulnerability features extraction phase is to retrieve, from the structure of the trained decision tree, the rules to steer the next StatAG generation. We name them steering rules, and they must consider only the values of the vulnerability features that generated the “relevant” paths class. To do so, we consider that the decision tree is built by using the vulnerability features as split points, and a leaf of the tree identifies whether the vulnerability features from the root to the leaf generate “relevant” paths or not. This makes it possible to compute the inverse mapping from the “relevant” leaves to the root, keeping track of the vulnerability feature values that they, and only they, have in common. We can build the set of steering rules considering that each path from the root to a “relevant” leaf node represents a conjunction of decision rules that must be satisfied for the decision tree to classify a path as relevant. Consequently, the logical disjunction of all such decision rules generates the set of steering rules. Depending on the depth of the decision tree, we could have many steering rules, whose union captures the latent relations between relevant attack paths and vulnerability features.

In the next step of the approach, we use the steering rules to consider only the vulnerabilities that match them before sampling the graph again with random walks, as the StatAG generator mandates. In this way, the newly generated attack paths have a high probability of matching the “relevant” class for the attack path query given in input, thus accelerating the progression of the AG generation towards the attack paths answering the query. Let us note that, while the system is running, it continues to use the latest retrieved steering rules until there is a slowdown in the precision of the generation process, indicating that the relevant attack paths have been exhausted. To check that this effectively means having reached the exact answer to the query, another training is launched to adjust the steering with new rules that take into account the cumulative collection of attack paths, effectively repeating the described process until it converges to the exact result (i.e., a new training does not return any additional attack paths). At each iteration of SteerAG, a preliminary attack path analysis can be performed to answer the query, until all paths are generated. The process converges even faster, guaranteeing analysis results identical in quality to one conducted on the fully generated AG.
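The inverse mapping from “relevant” leaves to steering rules can be read directly off scikit-learn's fitted tree structure. A sketch, assuming the classifier from the previous snippet (class 1 = “relevant”):

```python
import numpy as np

def extract_steering_rules(clf, feature_names):
    """Conjunctions of split conditions on every root-to-leaf path whose
    leaf predicts class 1 ('relevant'); their disjunction is the steering
    rule set used to filter candidate vulnerabilities before sampling."""
    t = clf.tree_
    rules = []

    def walk(node, conds):
        if t.children_left[node] == t.children_right[node]:   # leaf node
            if np.argmax(t.value[node][0]) == 1:              # 'relevant' leaf
                rules.append(list(conds))
            return
        name, thr = feature_names[t.feature[node]], t.threshold[node]
        walk(t.children_left[node], conds + [(name, "<=", thr)])
        walk(t.children_right[node], conds + [(name, ">", thr)])

    walk(0, [])
    return rules

# A vulnerability is then kept for sampling if its feature values satisfy
# at least one of the returned conjunctions.
```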
In the rest of this section, we validate SteerAG, highlighting its performance by comparing it to the case in which it is not used.

§.§ SteerAG Validation

For the validation of SteerAG, we use the setting configurations used for StatAG and described in Section <ref>, with the addition of another experimental parameter, that is, a set of 100 attack path queries obtained with different combinations of attack path features in the query, varying their ranges from 0 to 1 with steps of 0.1. This resulted in a total number of 15,000 experiments. The main evaluation metrics for the steering approach are the recall and the precision. The former, measured as the ratio of retrieved relevant paths to all the relevant paths in the GT, informs about the convergence rate of the partial AG to the GT, while the latter, evaluated as the ratio of relevant paths to all the paths retrieved, informs about the ability of the decision tree model to correctly retrieve the attack paths that are relevant to the query at every iteration.

Fig. <ref> reports the recall and precision of the performed experiments. The median value trend of each curve is reported with a full-color hue, while the alpha-blended areas identify the variations between the upper and lower quartile values. In this way, the statistical variability is reported for all curves. From the recall trend (Fig. <ref>) we can observe the very good performance of SteerAG, which reaches values very close to 1 after just 250 iterations, with the activation of the steering mechanism starting at 100 iterations on average. In particular, we can recognize two thresholds during SteerAG: between iterations 100 and 150 we observe early partial results (recall lower than 0.5), useful for exploration; from iteration 200 until iteration 250 (11% of the execution) the results are mature, supporting early decision making; finally, from iteration 250 on we observe definitive partial results, with a median recall of more than 0.95, indicating that 95% of the relevant paths are retrieved. It is interesting to note that without the steering approach, the convergence to the GT (more precisely, to the portion of the GT relevant to the query) is very slow: at the definitive partial results threshold (iteration 250), random sampling has achieved a recall value of 0.2. Comparing the two trends, we can conclude that the steering approach accelerates the convergence to the GT, saving around 90% of the iterations and providing a great advantage for the timely analysis of attack paths. These performances are achieved consistently in all the experiments tested.

While the recall describes the ability of SteerAG to rapidly converge to the GT, the precision is used as a real-time indicator of the quality of each partial result. Fig. <ref> shows the median precision trend during the execution of the framework. We can observe that the precision values are consistently high after the activation of the steering mechanism at iteration 100, presenting median values that reach peaks of 0.85. Similarly to the recall trend, the precision presents a rapid growth in the early iterations, in particular during the generation of early partial results; this shows that the analyst can expect a fast transition from early to mature partial results, due to the high precision values. At iteration 500 (23% of the execution), it slows down. After 700 iterations (at 55% of the execution), there is another rise in precision, of up to 0.2, which can also be observed in the recall trend in Fig. <ref>. It is due to the retraining of the decision tree to adjust the steering rules after the detection of the precision breakdown.
The fact that the precision has lower median values in the second steering activation than in the first one can be explained by the lower residual probability of finding the few relevant attack paths left (less than 2% at that stage). In contrast, StatAG without the steering mechanism has a constant trend of low precision values because, as expected, the probability of uniformly picking at random attack paths relevant to the issued query is much lower. Overall, the validation of recall and precision demonstrates how SteerAG accelerates the AG generation and analysis.

While this gives the statistical variability of the trends, it does not allow us to investigate deeply how the characteristics of the attack path queries affect the approach's performance. In particular, we are interested in evaluating the effect that the stringency of a query has on the performance, to avoid cases in which a stringent query, set to retrieve a low number of very specific attack paths, runs indefinitely due to slow convergence. To validate this, we define three ranges of queries: queries with a low range when the values of the attack path features are in a range of at most 0.2, a medium range when it is between 0.3 and 0.5, and a high range when it is over 0.5. The rationale is that low-range queries require very specific vulnerability characteristics within narrow intervals, and so are more stringent. High-range queries retrieve a large number of attack paths, defining non-stringent vulnerability values over large intervals. Medium-range queries are in the middle. We report the different analyses in Fig. <ref>.

Fig. <ref> shows the recall trend of the different experiments for each query range. It highlights three clearly different trends: the low, medium, and high ranges exhibit, respectively, the fastest, intermediate, and slowest convergence to the GT. To quantify the speed of the convergence to the GT, in Fig. <ref> we plot the recall values at the point of maximum convergence of the recall curves. In practice, it corresponds to the peak of the recall curve before it stabilizes. If the maximum convergence rate is low (< 0.2) for high recall values (> 0.8), then it means that the approach converges slowly to the GT. This is the case for 100% of the experiments with high query ranges, 54% of the experiments with medium ranges, and 25% of the ones with low query ranges. In contrast, for 46% of the experiments with medium ranges and 75% of the ones with low ranges, the convergence rate is high (>0.3) when the recall is higher than 0.8, indicating a very quick convergence to the GT. In particular, 25% of the experiments with low ranges reach the maximum convergence rate (between 0.2 and 0.3) when the recall is around 0.8, indicating a slower convergence with respect to the other low-range experiments.

Finally, in Fig. <ref> we study the number of iterations remaining from the point of maximum convergence until all relevant paths are retrieved (i.e., when the recall is 1). We observe two main portions: the left one indicates the experiments that are closer to the recall value 1, and it is composed of all the experiments with a low range and 70% of the ones with a medium range. The right one indicates the cases that require more iterations to reach recall 1, and it includes all the experiments with a high range and 30% of the ones with a medium range. The results for low and medium query ranges confirm the capability of SteerAG to quickly and correctly retrieve attack paths even for stringent queries.
The results for the high-range queries may seem counter-intuitively worse than the others. In fact, this is not the case, as an investigation of the trend shows that it depends only on the very high quantity of attack paths to retrieve relative to the capacity to process attack paths at each iteration (i.e., the sampling size of each iteration), rather than on any difficulty of the approach in retrieving correct attack paths. Looking at the precision values, they are consistently high throughout the whole process, and it is worth noting that the maximum convergence rate is always reached with high recall values (>0.8), even for high-range queries. Overall, we do not observe an effect of the stringency of the query on the performance, with the sampling size being the only parameter to set in order to speed up the high-range queries.

§ CASE STUDY EVALUATION

In this section, we illustrate a case study to show that the approach can be applied in a real setting. We first show how the approach can be used in real time with a network bigger than the ones typically analyzed by state-of-the-art solutions, allowing higher scalability. Next, we provide a systematization of attack path analyses retrieved from the literature and show how the approach is capable of supporting them.

§.§ Application to large networks

The first application we prove deals with the scalability of the networks. In Table <ref> we report the works of the literature that address the generation and analysis of attack paths, highlighting their scalability in terms of the network parameters. In particular, we consider the maximum number of hosts, the vulnerabilities per host, and whether they generate all paths or not. In this last case, the approaches define heuristics to prioritize the paths under analysis and avoid the full generation (more details in Section <ref>). From Table <ref> we can observe that the largest experimental settings include either many hosts (50) and few vulnerabilities (5), or a medium number of hosts (15) and vulnerabilities (12 vulnerabilities per host). For the validation of our approach, we already considered networks bigger than those (see Section <ref>), but here we report an even bigger case study, with 100 hosts and 100 vulnerabilities per host, all different from each other, to prove that our approach provides a substantial improvement over the state-of-the-art. Concerning the attack path query, and for the sake of example, we consider the most common risk analysis, which requires the attack paths with the highest risk, therefore with both likelihood and impact between 0.9 and 1, as the most risky situations. We show later that a wide range of analysis queries is supported. Let us note that the generation of the complete AG is intractable for such a size with the classic full generation, but we show that the progressive generation provides partial results even in this case, through the real-time indicators, namely the stability (for StatAG) and the precision (for SteerAG), controllable in real time by an analyst or an automatic process.

From the stability trend of Fig. <ref>, it is evident that until iteration 100 the results are early partial, with very low and fluctuating stability indicating a still coarse approximation of the complete AG. The analyst can nonetheless start exploring in this phase, even from iteration 1, and eventually query the system for her information needs.
On the contrary, from iteration 100 s/he starts observing high stability values both for likelihood and impact, and thus can hypothesize that it is the time of significant partial results, which support tasks like hypothesis formulation. In a real scenario, it is reasonable to observe a constant trend of the stability values before considering the results statistically significant. From the observation of Fig. <ref>, a reasonable time to consider mature partial results is at iteration 300, where both likelihood and impact show a regular median stability of 0.85, progressively increasing. This phase supports hypothesis testing, comparative or confirmatory analyses, and early decision-making. From this time, the attack path analysis can start, with the consideration that it approximates at least 85% of the complete AG.

In the meanwhile, the analyst observes the precision trend of Fig. <ref>. S/he notices a rapid growth at iteration 100, indicating that SteerAG has been activated. This observation is in line with the stability trend (Fig. <ref>), thus indicating the starting point of mature partial results. The precision has an average value of 0.8 until around iteration 8300. During this long period, the analyst can perform attack path analysis, considering that new paths keep arriving to answer the query and, therefore, s/he should progressively monitor the new results. When the precision is stable for a longer period (i.e., around iteration 2000), s/he can consider that most of the attack paths are included in the query answer and, from this moment on, assume that the analysis is composed of definitive partial results (i.e., other incoming paths do not significantly change the attack path feature distributions). Finally, when the precision breaks down around iteration 8300, the results are very close to the exact ones. Let us note that this observation agrees with the stability trend (Fig. <ref>), which indicates, from iteration 1000 until the end of the execution, a stability very close to 1. From this iteration on, the analyst can produce the final outcomes of the analysis and proceed with her work.

To give an idea of the advantages provided by the proposed framework, we identify two potential targets: a human analyst and an automatic analysis process. The former takes advantage of the progressive generation to perform early decision-making at interactive times. In fact, each iteration has an average duration between 0.5 and 5 seconds, and after a few minutes (≈ 5) from the beginning of the execution, s/he can start performing analysis with mature partial results. The latter has the advantage of progressive result feeding, without the need for interactive response times but instead based on result accuracy thresholds: the attack path analysis reaches definitive results after a few hours (≈ 12), making it computable, in contrast to classic approaches that do not support it.

§.§ Coverage of Attack Path Analyses

To conclude the presentation of the case study, we analyze how the proposed approach handles a selected set of queries with the configuration settings used for the validation (Section <ref>).
The attack path queries are extracted from state-of-the-art works addressing the systematization of attack path analysis <cit.>, and they are the following:

Q1: Evaluate the risk on the shortest attack paths <cit.>;
Q2: Retrieve the attack paths with maximum impact (i.e., impact = 1) <cit.>;
Q3: Retrieve the attack paths with maximum likelihood (i.e., likelihood = 1) <cit.>;
Q4: Retrieve the attack paths with maximum risk (i.e., likelihood · impact = 1) <cit.>;
Q5: Retrieve the attack paths corresponding to black swan attacks, i.e., those with very high impact (impact > 0.9) but very unlikely to happen (likelihood < 0.3) <cit.>;
Q6: Retrieve the attack paths corresponding to gray swan attacks, i.e., those with very low impact (impact < 0.3) but highly probable (likelihood > 0.9) <cit.>;
Q7: Prioritize the attack paths by risk <cit.>. Let us note that this query corresponds to an ensemble of queries, retrieving first the paths with risk 1, then between 0.9 and 1, and so on. To avoid the complete enumeration of the paths, we stop the approach once we retrieve the paths with risk values higher than 0.5.

Fig. <ref> reports the median recall trend for the analysis of these queries, where each experiment is repeated 100 times. The recall trends in Fig. <ref> show that most of the queries (specifically, Q1-Q5 and Q7) converge very rapidly to the GT, indicating the concrete support that the approach can provide during attack path analysis. Among them, Q2, Q3, and Q4 ask for precise values of attack path features, while Q1, Q5, and Q7 are more complex queries, requiring more than one traversal of the same AG. In particular, query Q6 for gray swan attacks has the worst performance, requiring a median of 3500 iterations to retrieve all the paths. The reason for its worse performance may be the need to perform more traversals to answer the query, which includes many paths. Let us remark that it still provides approximate answers, but with a slower convergence rate. This case study showed the capabilities of the approach to perform attack path analysis on networks whose size existing solutions cannot address, and according to state-of-the-art attack path analyses.

§ RELATED WORK

Several works leverage Attack Graphs (AGs) for analyzing the cyber risks of computer networks <cit.>, which underlines their expressiveness and high computational complexity. Our framework intersects two main areas of related work: AG generation and attack path analysis.

§.§ Attack Graph Generation

To address the scalability issue of AGs, research has focused on the design of approaches to generate AGs. One possible solution consists of leveraging distributed and parallel computing. The distributed approaches, such as Kaynar et al. <cit.> and Sabur et al. <cit.>, partition the network based on its services and vulnerabilities and assign the different partitions to distributed agents. In this way, each agent computes the attack paths on a smaller portion of the graph, which are finally combined with the other paths to complete the generation. The approaches based on parallel computing <cit.> instead adapt the graph search algorithms (i.e., BFS and DFS), which are designed for serial computation, to parallel algorithms that allow running the AG generation in multi-core environments. Our approach is agnostic to potential distribution or parallelization, as it acts on the workflow of analysis and generation. Another possibility is using Artificial Intelligence (AI). For example, Li et al.
<cit.> leverage Deep Learning and node2vec <cit.> to train a neural network with data from system logs and use it to predict dependencies between network hosts and label sequences of events as potential attack paths. Besides, Ychao et al. <cit.> model the path discovery problem as a planning problem on graphs. They translate vulnerability information into formal actions and use state-of-the-art planners to discover attack paths. While distributed and AI-based solutions provide a good improvement in the scalability of AG generation, they still require waiting until the generation is completed to perform the first analyses. In dynamic scenarios where network components change rapidly (and so do attack graphs), it is not reasonable to wait for the re-computation of the attack paths each time the environment changes.

§.§ Attack Path Analysis

To make AGs actionable for cyber risk analysis, different works addressed the scalability problem from the attack path analysis perspective. The central idea behind these works is the design of heuristics that prioritize the analysis of certain attack paths according to specific security requirements. Some of them prioritize the attack path analysis considering the topological structure of the graph: it is the case of Sun et al. <cit.>, who use the nodes' in-degree and out-degree, and Gonda et al. <cit.>, who map AGs to planning graphs and leverage the nodes' centrality. Other works define priorities according to security metrics, such as Liu et al. <cit.>, who map potential exploits to the most easily reachable attack target and then prioritize the analysis based on their probability. Similarly, Wang et al. <cit.> prioritize attack paths based on the degree of matching between attack nodes and a set of predefined security conditions, while Feng et al. <cit.> associate a cost value to each attack node and prioritize the attack paths based on the costs. These works make specific assumptions to assign priorities, and therefore they fit only scenarios that conform to such assumptions. Moreover, most of these assumptions are static and do not necessarily follow the network exposure.

Few works addressed the problem of considering attack path analysis to prioritize the generation. Among them, Yuan et al. <cit.> and Salayma et al. <cit.> model the AG through a graph database and perform attack path analysis by suitably writing queries to the database. While these solutions allow the expression of complex and highly customizable queries, they have the drawback of depending on the graph database, which may slow down the performance during the analysis <cit.>. Nadeem et al. <cit.> use alerts coming from Intrusion Detection Systems (IDS) to drive the attack path analysis. They translate alert events into episode sequences that are then used to build the AG and execute the analysis driven by the alerts. Similarly, Hassan et al. <cit.> perform rule matching on system logs to identify the events that match attack behaviors described in security knowledge bases. While these alert-based approaches are context-aware, they may not represent the entire environment, such as network components with no detected alerts.

§ LIMITATIONS AND OPPORTUNITIES

This paper proposed a new framework to generate and analyze AGs based on progressive data analysis.
To the best of the authors' knowledge, it is the first contribution to advancing the classic sequential AG-based process. In this section, we elicit some limitations as well as opportunities for the proposed work.

First, we point out that our approach and its validation are applied to topological attack graphs <cit.>, where nodes represent the attacker states on the network and edges the lateral movements of vulnerability exploits. However, there also exist logical AGs <cit.>, which model the attacker states according to dependency relations expressed as logical axioms and formal rules. We believe that our approach can be easily adjusted to perform attack path analysis over logical AGs, for example, by adapting the attack path construction with model checking <cit.>. We leave this extension as future work.

Additionally, we considered the likelihood and impact as attack path features since they are the main ones used in the existing literature on cyber risk management. Nonetheless, there exist different risk models that use different metrics, especially in domains such as cloud computing <cit.> and automotive <cit.>. These metrics may present a non-linear relation to the vulnerability features, thus making the decision tree training more challenging. To this aim, we suggest exploring other ML models; Graph Neural Networks <cit.>, in particular, are challenging to adapt to our approach due to the large amount of resources needed for training, the long training times that are not compatible with interactive analysis, and their black-box nature, which makes it difficult to retrieve the steering rules.

Finally, another point worth discussing is the possible bias introduced by SteerAG. In fact, the steering rules tend to bias the distribution of vulnerability and attack path features towards the query requirements; this is expected, since it is precisely what accelerates the convergence to the complete AG. However, it would be interesting to study a way to keep the two distributions distinct, allowing the analysis query to be answered correctly while, at the same time, extracting from its answer the part to add to statAG, so as to preserve its statistical significance and minimize the introduced bias.

Other considerations related to the proposed approach concern the opportunities it opens for further research. One involves the improvement of the decision tree predictions after the precision breakdown. Currently, we simply re-train the decision tree with the newly incoming paths so as to update the steering rules (a minimal sketch of this step is given at the end of this section). However, more advanced approaches can be studied to detect the precision breakdown and efficiently re-train the ML model. A solution that we want to explore for this aim is Network Representation Learning <cit.>.

An additional opportunity is to study the smart combination of analysis queries to create strategies for the generation of the AG, prioritizing the analyst's goal and gaining higher versatility than classic sampling methods like stratified sampling. These combinations can initially be executed even without the security operator's intervention, yielding a highly dynamic way of prioritizing the AG generation.

Finally, while the proposed framework can be included in an automatic pipeline, where a process issues analysis queries, it opens the research for the design of human-in-the-loop systems to support security experts during the real-time risk analysis of attack paths, possibly with some form of active learning <cit.>, where human interactions help accelerate the AG generation and analysis.
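To make the re-training step concrete, the following is a minimal sketch of it, assuming scikit-learn; the per-path feature set (aggregated CVSS-3.1 metrics) and the matches_query helper are hypothetical placeholders rather than the exact implementation.

```python
# Sketch of the steering-rule re-training step, assuming scikit-learn.
# The feature names and the `matches_query` helper are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

FEATURES = ["AV", "AC", "PR", "UI", "C", "I", "A"]  # CVSS-3.1 metrics

def retrain_steering_rules(sampled_paths, matches_query):
    """Re-fit the decision tree on newly sampled attack paths and return
    readable rules that steer the generation towards the analysis query."""
    X = np.array([[path[f] for f in FEATURES] for path in sampled_paths])
    y = np.array([int(matches_query(path)) for path in sampled_paths])
    tree = DecisionTreeClassifier(max_depth=4).fit(X, y)
    return export_text(tree, feature_names=FEATURES)
```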
§ CONCLUSION

This paper presented a new framework for AG-based systems addressing the problem of their scalability by leveraging progressive data analysis. We designed two approaches to progressively generate statistically significant partial AGs (StatAG) and to accelerate the generation based on the attack path query (SteerAG). Additionally, the framework can better adapt to network changes (e.g., new connections or vulnerabilities in the network) without the need for recomputing the entire AG, a benefit that classic AG generation approaches lack. We provided extensive experimental validation and showed a case study demonstrating the capability of the framework to (i) analyze networks whose size cannot be managed with existing solutions, and (ii) efficiently support common attack path analyses. The approach is orthogonal to the existing ones and can be used in combination with them without loss of generality. In future work, we intend to i) investigate the application of StatAG and SteerAG to the class of logical AGs, ii) extend it to more heterogeneous risk models, iii) apply advanced learning solutions, such as network representation learning, and iv) develop an interactive system to validate the approach in real scenarios with security experts.

§ ATTACK PATH SAMPLING ANALYSIS

In this appendix, we report the analysis showing that random walk sampling performs better than BFS and DFS sampling. We use the experimental setting defined in Section <ref> and report in Fig. <ref> the comparison of the convergence trend of the KS distance of partial AGs from the complete AG (i.e., the ground truth GT) for the three sampling algorithms. Considering the different trends, we can observe that BFS sampling (Fig. <ref>) has the worst performance for two main reasons. First, the results of the experiments have higher variability compared with DFS (Fig. <ref>) and random walk (Fig. <ref>) sampling, indicating less controlled KS distances from the GT. Second, the median values stay at a roughly constant level of 0.2, which increases further for the impact. In contrast, random walk sampling has better performance, with low data variability right from the first iterations and with monotonically decreasing convergence. Finally, DFS sampling has lower variability than BFS, but the median values remain at 0.2, while random walks represent the complete AG well, with a median KS distance closer to 0.

This analysis is in line with our expectation, because both BFS and DFS sampling are biased towards the starting node and, as a consequence, tend to visit the same edges and, therefore, the same vulnerabilities. In contrast, random walks enable visiting the graph in an unbiased manner, thus increasing the number of unique vulnerabilities visited, which results in a faster convergence to the attack path features of the complete AG. For this reason, we included random walk sampling in our approach.

§ AN EXAMPLE OF ATTACK GRAPH

In this appendix, we provide an example of an attack graph, modeled according to the AG model described in Section <ref>. For the sake of explanation, let us consider the simple network reported in Fig. <ref> <cit.>, in which the Demilitarized Zone (DMZ) is safeguarded by two firewalls, namely Firewall-1 and Firewall-2. It has a Web Server (H1) and a Login Server (H2). Firewall-1 shields the DMZ from external threats originating from the attacker located on host H0, and it allows only http and ssh traffic. Firewall-2 safeguards the DMZ from internal threats and permits access to the Database server solely from the Web server.
The vulnerability inventory is reported in Table <ref>, while the resulting attack graph is shown in Fig. <ref>. Host H1 is running a vulnerable version of the Apache web server, which has a vulnerability (CVE-2006-3747) that allows a remote attacker to exploit it and gain user privilege on the Web Server. The ssh service on H2 has a vulnerability (CVE-2002-0640), which allows remote attackers to gain user privilege. The Database server H3 is a Linux box running a MySQL database which has a remotely exploitable vulnerability (CVE-2009-2446), enabling the attacker to gain user privilege. The Linux kernel on host H3 also has a vulnerability (CVE-2004-0495) that allows local users to gain root privilege. Let us note that vulnerabilities V4 and V5 affect the same service, resulting in multi-edges in the attack graph.

§ RISK MODEL DETAILS

This appendix provides additional details on the risk model introduced in Section <ref>. It considers the standard CVSS metrics to estimate the likelihood and impact of an attack path. In particular, the likelihood is calculated according to the following formulas:

exploitability(u) = RoundUp(AV · AC · PR · UI)

likelihood(AP) = norm( ∑_u ∈ U_AP 1/exploitability(u) )

Equation <ref> determines the exploitability of a vulnerability u by multiplying the CVSS-3.1 exploitability metrics, which are:

* Attack Vector (AV): measures the context by which vulnerability exploitation is possible; a larger value corresponds to a more remote attacker;
* Attack Complexity (AC): measures the conditions beyond the attacker's control that must exist to exploit a vulnerability;
* Privileges Required (PR): measures the level of privileges an attacker must possess before successfully exploiting a vulnerability;
* User Interaction (UI): measures the requirement for a human user to participate in the successful compromise of a vulnerable component.

Given that an attack path AP is composed of multiple vulnerabilities, its likelihood is calculated as the normalized sum of the inverses of their exploitability values, as reported in Equation <ref>. This choice is due to the CVSS exploitability metrics, defined in the range [0,1], whose rationale is that the easier the exploitation of the vulnerability, the closer they are to one. Thus, the likelihood is measured via the inverse of the exploitability of each vulnerability, summed over all the vulnerabilities in the path and normalized to be in the range [0,1].

The impact of an attack is determined by the last node in the path, as it represents the attacker's final goal. In particular, let u_n be the last vulnerability of an attack path AP, which allows the attacker to finally reach the target. Then,

impact(AP) = 1 - [(1-C_n) · (1-I_n) · (1-A_n)]

risk(AP) = likelihood(AP) · impact(AP)

where the CVSS-3.1 impact metrics are used, which are:

* Confidentiality (C): measures the impact on information access and disclosure;
* Integrity (I): measures the impact on the trustworthiness and veracity of information;
* Availability (A): measures the impact on the accessibility of information resources.

Contrary to the exploitability metrics, the CVSS impact metrics have the rationale that the more damage caused by a successful exploitation, the closer they are to 1. For this reason, the impact is calculated from the complements (1 − ·) of the three metrics. Finally, Equation <ref> expresses the risk of an attack path AP as the product of its likelihood and impact <cit.>.
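For concreteness, the following is a minimal sketch of this risk model; it assumes numeric CVSS-3.1 metric values are available for every vulnerability, and the normalization callable `norm` is an illustrative assumption, as the appendix leaves its exact form open.

```python
# Minimal sketch of the risk model above; the `norm` callable is an
# assumption (the appendix does not fix a specific normalization).
import math

def round_up(x, decimals=1):
    # CVSS-style rounding up to one decimal place
    factor = 10 ** decimals
    return math.ceil(x * factor) / factor

def exploitability(v):
    # v maps CVSS-3.1 exploitability metrics to numeric values in [0, 1]
    return round_up(v["AV"] * v["AC"] * v["PR"] * v["UI"])

def likelihood(path, norm):
    return norm(sum(1.0 / exploitability(v) for v in path))

def impact(path):
    last = path[-1]  # the impact is determined by the last node of the path
    return 1.0 - (1 - last["C"]) * (1 - last["I"]) * (1 - last["A"])

def risk(path, norm):
    return likelihood(path, norm) * impact(path)
```
| http://arxiv.org/abs/2312.16513v1 | {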
"authors": [
"Alessandro Palma",
"Marco Angelini"
],
"categories": [
"cs.CR",
"H.4; I.m"
],
"primary_category": "cs.CR",
"published": "20231227104458",
"title": "It Is Time To Steer: A Scalable Framework for Analysis-driven Attack Graph Generation"
} |
Improved decoding of expander codes: fundamental trade-off between expansion ratio and minimum distance of inner code

Kuan Cheng (Center on Frontiers of Computing Studies, Peking University, Beijing 100871, China. Email: [email protected]), Minghui Ouyang (School of Mathematical Sciences, Peking University, Beijing 100871, China. Email: [email protected]), Chong Shangguan (Research Center for Mathematics and Interdisciplinary Sciences, Shandong University, Qingdao 266237, China, and Frontiers Science Center for Nonlinear Expectations, Ministry of Education, Qingdao 266237, China. Email: [email protected]), Yuanting Shen (Research Center for Mathematics and Interdisciplinary Sciences, Shandong University, Qingdao 266237, China. Email: [email protected])

[Figure: Visualization of learned 3D language features of the previous SOTA method LERF and our LangSplat. While LERF generates imprecise and vague 3D features, our LangSplat accurately captures object boundaries and provides precise 3D language fields without any post-processing. While being effective, our LangSplat is also 199× faster than LERF at the resolution of 1440 × 1080.]

^∗ Equal contribution. ^†Corresponding authors.

Humans live in a 3D world and commonly use natural language to interact with a 3D scene. Modeling a 3D language field to support open-ended language queries in 3D has gained increasing attention recently. This paper introduces LangSplat, which constructs a 3D language field that enables precise and efficient open-vocabulary querying within 3D spaces. Unlike existing methods that ground CLIP language embeddings in a NeRF model, LangSplat advances the field by utilizing a collection of 3D Gaussians, each encoding language features distilled from CLIP, to represent the language field. By employing a tile-based splatting technique for rendering language features, we circumvent the costly rendering process inherent in NeRF. Instead of directly learning CLIP embeddings, LangSplat first trains a scene-wise language autoencoder and then learns language features in the scene-specific latent space, thereby alleviating the substantial memory demands imposed by explicit modeling. Existing methods struggle with imprecise and vague 3D language fields, which fail to discern clear boundaries between objects. We delve into this issue and propose to learn hierarchical semantics using SAM, thereby eliminating the need for extensively querying the language field across various scales and the regularization of DINO features. Extensive experiments on open-vocabulary 3D object localization and semantic segmentation demonstrate that LangSplat significantly outperforms the previous state-of-the-art method LERF by a large margin. Notably, LangSplat is extremely efficient, achieving a 199× speedup compared to LERF at the resolution of 1440 × 1080.
We strongly recommend that readers check out our video results at <https://langsplat.github.io/>.

§ INTRODUCTION

Language is the primary means of communication for human beings <cit.>. Modeling a 3D language field allows users to interact with and query 3D worlds using open-ended language, which presents a promising avenue for human-computer interaction and understanding <cit.>. The field of open-ended language queries in 3D has attracted increasing attention due to its various applications such as robotic navigation <cit.> and manipulation <cit.>, 3D semantic understanding <cit.> and editing <cit.>, autonomous driving <cit.>, and augmented/virtual reality <cit.>. Due to the absence of large-scale and diverse 3D scene data with language annotations, the current prevailing approach, exemplified by LERF <cit.>, involves feature distillation from off-the-shelf vision-language models such as CLIP into a 3D scene. However, these methods <cit.> suffer from significant limitations in both speed and accuracy, severely constraining their practical applicability. To address these two issues, we revisit two key aspects of 3D language field modeling: the 3D modeling approach that bridges the gap between 2D and 3D, and the rendering target, which determines what to learn for a 3D point.

For the 3D modeling technology, most methods utilize neural radiance fields (NeRFs) to represent a 3D scene, where volume rendering techniques are employed to accumulate 3D points along a ray into a single pixel. While NeRF has been widely demonstrated for its powerful 3D representation capabilities <cit.>, the nature of volume rendering leads to its computationally expensive rendering speed <cit.>, which imposes notable constraints on the potential applications of the NeRF-based language field. Regarding the rendering target, learning a CLIP embedding for a 3D point could be ambiguous, as CLIP embeddings are aligned with images rather than pixels. Employing CLIP embeddings from a cropped patch also raises the point ambiguity issue, as the same 3D position can be associated with semantic concepts of varying scales. For instance, a point located on a bear's nose should yield high response values for three distinct textual queries: "bear's nose", "bear's head", and "bear", given that this point contributes to all three hierarchical regions. To deal with this issue, current methods <cit.> introduce an additional absolute scale input to NeRF, train with patch-wise CLIP features at different scales, and densely render 2D maps at multiple scales during querying to select the optimal one. However, this scale-based solution compromises both efficiency and effectiveness. It could increase query time by up to 30 times, as it needs to render at multiple different scales. Moreover, patches at varying scales often fail to accurately encompass objects, frequently including other objects from the background or omitting portions of the target object. These inaccurate CLIP features lead to a trained 3D language field lacking clear boundaries and containing a significant amount of noise. Therefore, these methods often simultaneously learn pixel-aligned DINO features to mitigate this issue. However, the performance remains unsatisfactory. As shown in Figure <ref>, LERF still generates imprecise 3D language features. In this paper, we propose 3D Language Gaussian Splatting (LangSplat) to address the above issues.
Instead of using NeRF to build 3D representations, we resort to 3D Gaussian Splatting, which represents a 3D scene as a collection of 3D Gaussians and uses tile-based splatting to achieve efficient rendering at high resolutions. Our LangSplat defines a set of 3D language Gaussians, with each Gaussian being enhanced by a language embedding. These language-enhanced Gaussians are supervised using CLIP embeddings extracted from image patches captured from multiple training views, ensuring multi-view consistency. As an explicit modeling method, directly storing the high-dimensional language embeddings for each 3D language Gaussian is memory-inefficient. To reduce the memory cost and further improve the rendering efficiency, we propose to first learn a scene-wise language autoencoder, which maps CLIP embeddings in a scene to a low-dimensional latent space. In this way, each language Gaussian only contains the low-dimensional latent language features, and the final language embeddings are obtained by decoding the rendered features. To address the point ambiguity issue, we propose to employ the semantic hierarchy defined by the Segment Anything Model (SAM) <cit.>. Specifically, for each 2D image, we obtain three well-segmented maps at different semantic levels with SAM. Then we extract the CLIP feature for each mask with precise object boundaries and assign this feature to every point on the corresponding mask. Learning with SAM-based masks not only endows each point with precise CLIP embeddings, resulting in higher model accuracy, but also enables direct querying at three predefined semantic scales. This circumvents the need for intensive searches across multiple absolute scales and for the auxiliary DINO features, thereby effectively improving efficiency.

We summarize the contributions of this paper as follows:

* We propose LangSplat, which is the first 3D Gaussian Splatting-based method for 3D language fields. A scene-specific autoencoder is further introduced to alleviate the memory cost issue imposed by explicit modeling.
* We propose to learn the hierarchical semantics defined by SAM to address the point ambiguity issue for 3D language field modeling.
* Experimental results show that our method outperforms the state-of-the-art methods on open-vocabulary 3D object localization and semantic segmentation tasks while being 199× faster than LERF at 1440 × 1080 resolution.

§ RELATED WORK

3D Gaussian Splatting. Real-time rendering has always been a pursued objective for neural rendering. Recently, Kerbl <cit.> proposed to represent the 3D scene with a set of 3D Gaussians, which attained real-time rendering at 1080p resolution while maintaining state-of-the-art visual quality. Encouraged by the success of 3D Gaussian Splatting on novel view synthesis, many works extend it to other tasks to fully exploit the efficient rendering process. To achieve real-time dynamic scene rendering, some studies <cit.> have extended the 3D Gaussian Splatting technique to dynamic scenes. Luiten <cit.> proposed Dynamic 3D Gaussians, which extended 3D Gaussians to dynamic scenes by explicitly modeling the 3D Gaussians across different time steps. Yang <cit.> presented a deformable 3D Gaussian Splatting method, which learned 3D Gaussians in canonical space and modeled the dynamic scenes with a deformation field. Meanwhile, some researchers have combined 3D Gaussian Splatting with diffusion models to achieve efficient text-to-3D generation <cit.>.
For example, Tang <cit.> introduced DreamGaussian for efficient 3D content generation with a generative 3D Gaussian Splatting model. Unlike these methods, our paper extends each 3D Gaussian with language embeddings for open-vocabulary 3D queries.

SAM. The Segment Anything Model <cit.>, released by Meta in 2023, has attracted considerable attention <cit.>. SAM is trained on over 1 billion masks from 11 million images and has achieved impressive zero-shot performance. It has become the foundational model for image segmentation. It supports flexible prompts including points, boxes, masks, and text. SAM has been used for many computer vision tasks such as image inpainting <cit.>, super-resolution <cit.>, image matting <cit.>, object tracking <cit.>, medical image segmentation <cit.>, image editing <cit.>, and so on. Many efforts have also been made to utilize SAM in the 3D domain. Liu <cit.> proposed Seal to explore the potential of VFMs including SAM for point cloud segmentation. SA3D <cit.> generalized SAM to 3D objects by leveraging NeRF to connect 2D images and 3D space. Anything-3D <cit.> proposed to elevate objects to 3D, where SAM is used to segment the object of interest and then a single-view 3D reconstruction pipeline is performed. Different from these works, we use SAM to obtain accurate object masks at three well-defined hierarchical semantic levels to train a 3D language field.

3D Language Fields. Some early attempts to construct 3D feature fields include Distilled Feature Fields <cit.> and Neural Feature Fusion Fields <cit.>. They learned 3D-consistent features by distilling LSeg <cit.> or DINO <cit.> features across multiple views into a NeRF. Shen <cit.> further adopted distilled feature fields for few-shot language-guided robotic manipulation by distilling CLIP features into a NeRF. There are also some efforts <cit.> that embed semantic information into NeRFs. For example, Semantic NeRF <cit.> jointly encoded semantics with appearance and geometry within a NeRF for novel semantic view synthesis. LERF <cit.> was the first to embed CLIP features into NeRF, enabling open-vocabulary 3D queries by leveraging the powerful CLIP representation. DINO features were also used for supervising LERF to improve its performance. Liu <cit.> also utilized CLIP and DINO features to train a NeRF model for 3D open-vocabulary segmentation. Different from these methods, which use NeRF for 3D modeling and suffer from its costly rendering process, we propose 3D language Gaussian Splatting to obtain efficient 3D language fields.

§ PROPOSED APPROACH

In this section, we first revisit the challenges of modeling 3D language fields and identify the key factors causing inaccuracy and inefficiency. We then elaborate on how our proposed LangSplat addresses these issues. Figure <ref> depicts the framework of our proposed LangSplat.

§.§ Revisiting the Challenges of Language Fields

We denote an input image as I ∈ ℝ^3 × H × W, where H and W represent the height and width of the image. We take a set of calibrated images {I_t | t = 1, 2, ..., T} as input and train a 3D language field Φ with these images. Most existing methods <cit.> employ the CLIP image encoder V to extract image features and utilize the extracted CLIP embeddings to supervise the 3D language field Φ, leveraging the well-aligned text-image latent space provided by CLIP, thus facilitating open-vocabulary queries. However, CLIP embeddings are image-aligned rather than pixel-aligned.
In other words, simply computing V(I_t) ∈ ℝ^D only obtains an image-level feature, whereas what we need is a pixel-aligned language embedding L_t ∈ ℝ^D × H × W, where D represents the CLIP feature dimension. Meanwhile, modeling pixel-aligned language features faces the issue of point ambiguity, as a single point on an object contributes to multiple semantic levels of regions. For instance, a point on a cat's ear simultaneously contributes to the cat's ear, the cat's head, and the entire cat, and should be activated by all three types of textual queries. To address these issues, most existing methods <cit.> extract a hierarchy of CLIP features from cropped image patches. Specifically, for a pixel with coordinates v ∈ {1, ..., H} × {1, ..., W}, the corresponding CLIP features are obtained from image patches centered around v at different absolute physical scales s, with the expectation that at a certain scale s, the patch can fully encompass the object. However, this multi-scale approach has two limitations. Firstly, patch features are imprecise because they often include additional contextual object information, leading to overly smoothed language fields with indistinct object boundaries. To alleviate the patchy issue, most methods <cit.> leverage additional pixel-aligned DINO features to supervise the network. However, the learned 3D language features remain imprecise, as illustrated in Figure <ref>. Secondly, it requires simultaneous rendering at multiple scales during inference to find the optimal scale. With the number of scales s potentially reaching as high as 30 <cit.>, this significantly diminishes the inference speed.

Besides the rendering target, another key design space is the 3D modeling approach. Most existing methods <cit.> employ NeRFs for 3D representation, where they learn a language feature at each 3D point and subsequently render the language feature onto an image, similar to color rendering. However, NeRF-based methods are constrained by their time-consuming rendering process, as even the most advanced NeRF techniques currently available cannot achieve real-time rendering in high-resolution, unrestricted scenes <cit.>. Meanwhile, there is a high demand for efficient open-vocabulary querying in practical applications, especially in fields such as intelligent robotics.

§.§ Learning Hierarchical Semantics with SAM

As a foundation model for image segmentation, SAM <cit.> can accurately group a pixel with its surrounding pixels belonging to the same object, thereby segmenting the image into many object masks with clear boundaries. Furthermore, SAM addresses point ambiguity by generating three different masks for a point prompt, namely whole, part, and subpart, representing three hierarchical levels of semantics. In this paper, we propose leveraging SAM to obtain precise object masks, which are then used to acquire pixel-aligned features. We also explicitly model the semantic hierarchy defined by SAM to address the point ambiguity issue. With SAM, we can capture the semantic hierarchy of objects in 3D scenes, providing accurate and multi-scale segmentation maps for each input image. Specifically, we feed a regular grid of 32 × 32 point prompts into SAM to obtain the masks at three different semantic levels: M_0^s, M_0^p, M_0^w, where M_0^s, M_0^p, and M_0^w represent the masks at the subpart, part, and whole levels, respectively. Then we remove redundant masks from each of the three mask sets based on the predicted IoU score, stability score, and overlap rate between masks.
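A minimal sketch of this mask extraction step is shown below; it assumes the official segment_anything package, simplifies the redundancy filtering to a single predicted-IoU threshold, and treats SAM's three multimask outputs as the subpart/part/whole levels (this level correspondence is a simplifying assumption).

```python
# Sketch of three-level mask extraction with SAM, assuming the official
# `segment_anything` package. Redundancy filtering is simplified to a
# predicted-IoU threshold, and mapping SAM's three multimask outputs to
# the subpart/part/whole levels is a simplifying assumption.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

def three_level_masks(image, checkpoint, grid=32, iou_thresh=0.9):
    sam = sam_model_registry["vit_h"](checkpoint=checkpoint)
    predictor = SamPredictor(sam)
    predictor.set_image(image)  # RGB uint8 array of shape (H, W, 3)
    h, w = image.shape[:2]
    levels = {"s": [], "p": [], "w": []}
    for y in np.linspace(0, h - 1, grid):
        for x in np.linspace(0, w - 1, grid):
            masks, ious, _ = predictor.predict(
                point_coords=np.array([[x, y]]),
                point_labels=np.array([1]),
                multimask_output=True)  # three masks per point prompt
            for lvl, m, iou in zip(("s", "p", "w"), masks, ious):
                if iou >= iou_thresh:  # simplified redundancy filtering
                    levels[lvl].append(m)
    return levels
```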
Each filtered mask set independently yields a comprehensive full-image segmentation at its respective semantic level, resulting in three segmentation maps: M^s, M^p, M^w. These segmentation maps precisely delineate the boundaries of objects at their hierarchical levels, effectively partitioning the scene into semantically meaningful regions. With the obtained segmentation maps, we proceed to extract CLIP features for each segmented region. These features capture the semantic context of the objects at various levels within the scene. Mathematically, the obtained pixel-aligned language embeddings are:

L^l_t(v) = V(I_t ⊙ M^l(v)), l ∈ {s, p, w},

where M^l(v) represents the mask region to which pixel v belongs at the semantic level l. Each pixel rendered from the 3D language scene now possesses a CLIP feature that aligns with its precise semantic context. This alignment reduces ambiguity and enhances the accuracy of language-based queries. We can learn an accurate 3D language field even without the commonly used DINO regularization. Another advantage of our SAM-based approach is the predefined semantic scales. Since we have distinct segmentation maps for the "whole", "part", and "subpart" levels, we can directly query the 3D language field at these predefined scales. This eliminates the need for intensive searches across multiple absolute scales, making the querying process more efficient. By incorporating SAM's semantic hierarchy into our approach, we not only improve the accuracy of our 3D language field but also streamline the querying process, making it more efficient and effective for a wide range of applications.

§.§ 3D Gaussian Splatting for Language Fields

Having obtained the language embeddings on a set of 2D images {L^l_t | t = 1, ..., T}, we can learn a 3D language scene by modeling the relations between 3D points and 2D pixels. Most existing methods <cit.> suffer from a costly rendering process as they adopt NeRFs for 3D modeling. To address this issue, we present the first 3D Gaussian Splatting-based method for 3D language field modeling. 3D Gaussian Splatting explicitly represents a 3D scene as a collection of anisotropic 3D Gaussians, with each Gaussian G(x) characterized by a mean μ ∈ ℝ^3 and a covariance matrix Σ:

G(x) = exp(-1/2 (x - μ)^⊤ Σ^-1 (x - μ)).

To optimize the parameters of the 3D Gaussians, they are rendered onto 2D image planes <cit.>, and a tile-based rasterizer is used to improve the rendering efficiency:

C(v) = ∑_i ∈ 𝒩 c_i α_i ∏_j=1^i-1 (1 - α_j),

where c_i is the color of the i-th Gaussian, 𝒩 denotes the Gaussians in the tile, C(v) is the rendered color at pixel v, and α_i = o_i G^2D_i(v). Here o_i is the opacity of the i-th Gaussian and G^2D_i(·) represents the function of the i-th Gaussian projected onto 2D. In this paper, we propose 3D language Gaussian Splatting, which augments each 3D Gaussian with three language embeddings {f^s, f^p, f^w}. These embeddings are derived from CLIP features, which capture the hierarchical semantics provided by SAM. The augmented Gaussians are named 3D language Gaussians. We also adopt the tile-based rasterizer to retain the rendering efficiency:

F^l(v) = ∑_i ∈ 𝒩 f_i^l α_i ∏_j=1^i-1 (1 - α_j), l ∈ {s, p, w},

where F^l(v) represents the language embedding rendered at pixel v at the semantic level l.
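To make the rendering target explicit, the per-pixel compositing of the equation above can be sketched as follows; the sketch assumes the Gaussians contributing to the pixel have already been culled, projected, and depth-sorted by the tile-based rasterizer.

```python
# Per-pixel front-to-back alpha compositing of language features,
# following the equation for F^l(v); inputs are assumed to be already
# depth-sorted by the tile-based rasterizer.
import numpy as np

def composite_language_feature(feats, opacities, gauss2d_vals):
    """feats: (N, d) latent language features f_i^l
    opacities: (N,) opacities o_i
    gauss2d_vals: (N,) values G_i^2D(v) of the projected Gaussians at v"""
    out = np.zeros(feats.shape[1])
    transmittance = 1.0  # accumulated prod_j (1 - alpha_j)
    for f, o, g in zip(feats, opacities, gauss2d_vals):
        alpha = o * g
        out += transmittance * alpha * f
        transmittance *= 1.0 - alpha
    return out
```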
By incorporating language information directly into the Gaussians, we enable the 3D language field to respond to language-based queries. As an explicit modeling approach, our LangSplat may create millions of 3D points to model a complex 3D scene. As CLIP embeddings are high-dimensional features, directly learning f^l in the CLIP latent space significantly increases memory consumption. Compared to learning RGB colors without spherical harmonics coefficients, learning 512-dimensional CLIP features increases the memory requirements for storing 3D Gaussians by over 35 times, easily leading to the "out of memory" issue. To reduce memory consumption and improve efficiency, we introduce a scene-wise language autoencoder. This autoencoder maps CLIP embeddings in a scene to a lower-dimensional latent space, reducing memory requirements. The CLIP model is trained using 400 million (image, text) pairs, and its D-dimensional latent space is already highly compact, as it needs to align arbitrary text and images in this space. However, the language field Φ we train here is scene-specific, meaning we can leverage scene priors to compress CLIP features. In fact, for each input image, we obtain hundreds of masks segmented by SAM, which is significantly smaller than the number of images used in CLIP training. Therefore, all the segmented regions in a scene are sparsely distributed in the CLIP latent space, allowing us to further compress these CLIP features with a scene-specific autoencoder. Specifically, we use the collection of CLIP features of the SAM-segmented masks {L_t^l | l ∈ {s, p, w}, 1 ≤ t ≤ T} to train a lightweight autoencoder. An encoder E maps the D-dimensional CLIP features L_t^l(v) ∈ ℝ^D to H_t^l(v) = E(L_t^l(v)) ∈ ℝ^d, where d ≪ D. Then we learn a decoder Ψ to reconstruct the original CLIP embeddings from the compressed representation. The autoencoder is trained with a reconstruction objective on the CLIP embeddings {L_t^l}:

ℒ_ae = ∑_l ∈ {s,p,w} ∑_t=1^T d_ae(Ψ(E(L_t^l(v))), L_t^l(v)),

where d_ae(·, ·) denotes a distance function used for the autoencoder. Here we adopt both an ℒ_1 and a cosine distance loss. After training the autoencoder, we transform all CLIP embeddings {L_t^l} into scene-specific latent features {H_t^l}. We let our 3D language Gaussians learn language embeddings in the scene-specific latent space instead of the CLIP latent space. Therefore, we have f^l ∈ ℝ^d. In practice, we choose d = 3 as it yields excellent model efficiency and accuracy. Compared to directly modeling the D-dimensional CLIP embeddings, our method significantly reduces the memory cost by incorporating scene priors. We optimize the language embeddings with the objective:

ℒ_lang = ∑_l ∈ {s,p,w} ∑_t=1^T d_lang(F_t^l(v), H_t^l(v)),

where d_lang(·, ·) denotes the distance function used for our 3D language Gaussians. During inference, we follow Eq. (<ref>) to render the language embeddings from 3D to 2D, and then use the trained scene-specific decoder Ψ to recover the CLIP image embeddings Ψ(F_t^l) ∈ ℝ^D × H × W, which enables open-vocabulary queries with the CLIP text encoder. By enhancing 3D Gaussians with language embeddings and employing a scene-wise language autoencoder, our proposed LangSplat presents a powerful and efficient solution for building 3D language fields.
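A minimal PyTorch sketch of this scene-wise autoencoder and its training objective is given below; the MLP layer widths are illustrative assumptions rather than the exact architecture.

```python
# Sketch of the scene-wise language autoencoder (D=512 -> d=3) and the
# reconstruction objective L_ae; the MLP widths are illustrative
# assumptions, not the exact architecture.
import torch.nn as nn
import torch.nn.functional as F

class SceneLanguageAE(nn.Module):
    def __init__(self, D=512, d=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(D, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, d))
        self.decoder = nn.Sequential(
            nn.Linear(d, 64), nn.ReLU(),
            nn.Linear(64, 256), nn.ReLU(),
            nn.Linear(256, D))

    def forward(self, clip_feat):
        latent = self.encoder(clip_feat)      # H = E(L), 3-dimensional
        return latent, self.decoder(latent)   # reconstruction Psi(E(L))

def ae_loss(model, clip_feat):
    # L1 reconstruction plus cosine distance, as in the L_ae objective
    _, recon = model(clip_feat)
    l1 = F.l1_loss(recon, clip_feat)
    cos = (1.0 - F.cosine_similarity(recon, clip_feat, dim=-1)).mean()
    return l1 + cos
```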
Overall, this approach not only preserves the rendering efficiency of Gaussian Splatting but also mitigates the catastrophic memory explosion associated with explicit modeling.

§.§ Open-vocabulary Querying

Due to the well-aligned latent space between images and text provided by the CLIP model, our learned 3D language field easily supports open-vocabulary 3D queries, including open-vocabulary 3D object localization and open-vocabulary 3D semantic segmentation. Many existing open-vocabulary 3D semantic segmentation methods <cit.> select the category from a category list that includes the categories present in the images. However, obtaining a comprehensive category list for in-the-wild scenes is challenging. Different from them, our method generates precise object masks given an arbitrary text query. Following LERF <cit.>, we compute a relevancy score for each text query. Specifically, for each rendered language embedding ϕ_img and each text query ϕ_qry, the relevancy score is defined as

min_i exp(ϕ_img · ϕ_qry) / (exp(ϕ_img · ϕ_qry) + exp(ϕ_img · ϕ_canon^i)),

where ϕ_canon^i is the CLIP embedding of a predefined canonical phrase chosen from "object", "things", "stuff", and "texture". Hence, for each text query, we obtain three relevancy maps, each representing the result at a specific semantic level. We follow the strategy used in LERF <cit.> and choose the semantic level that yields the highest relevancy score. For the 3D object localization task, we directly choose the point with the highest relevancy score. For the 3D semantic segmentation task, we filter out points with relevancy scores lower than a chosen threshold and predict the object masks from the remaining regions. Please refer to the appendix for additional details.

§ EXPERIMENTS

§.§ Settings

Datasets. We employ two datasets for evaluation. The LERF dataset <cit.> is captured using the iPhone app Polycam and consists of complex in-the-wild scenes. The LERF dataset is designed for 3D object localization tasks; here, we extend the LERF dataset by annotating ground truth masks for textual queries, enabling the evaluation of open-vocabulary 3D semantic segmentation on the LERF dataset. As the original LERF annotations for 3D object localization are relatively simple, performance in some scenarios has already approached saturation. Therefore, we further manually annotated additional challenging localization samples to better evaluate method performance. We report localization accuracy for the 3D object localization task following LERF <cit.> and report the IoU results for the 3D semantic segmentation task. We also employ the 3D-OVS dataset <cit.>, which comprises a collection of long-tail objects captured in diverse poses and backgrounds. This dataset is developed for open-vocabulary 3D semantic segmentation, where a full list of categories is provided. While other methods use the full list to generate the predicted masks, we only use the query category to generate the corresponding masks. The mIoU metric is used for this dataset.

Implementation Details. To extract the language features of each image, we utilize the OpenCLIP ViT-B/16 model. For SAM, we use the ViT-H model to segment 2D masks. For each scene, we first use 3D Gaussian Splatting to train an RGB scene. We train it for 30,000 iterations, and in the end, each scene comprises around 2,500,000 points. We follow the default parameter setting as in <cit.> to train the RGB scene.
Then we train our 3D language Gaussians by fixing all other parameters of the 3D Gaussians, such as mean and opacity. Only the language features are learnable during this stage. We train the language features for 30,000 iterations. Our autoencoder is implemented with MLPs, which compress the 512-dimensional CLIP features into 3-dimensional latent features. For a scene with 1080p resolution, our model is trained for 25 minutes on an NVIDIA RTX-3090 GPU and takes roughly 4GB of memory in total.

§.§ Results on the LERF dataset

Quantitative Results. We first compare our method with other methods on the LERF dataset. Table <ref> shows the localization results. We observe that our method achieves an overall accuracy of 84.3%, significantly outperforming LERF. Table <ref> further shows the IoU results of 3D semantic segmentation; our method outperforms LERF by 14.0%, which illustrates the superiority of our proposed LangSplat.

Visualization Results. To show the learned 3D language field, we visualize the learned features by computing the 3-dimensional PCA components of the learned language features, following <cit.>. The results are shown in Figure <ref>. We see that the features learned by LERF fail to generate clear boundaries between objects, while our method gives precise object shapes solely using CLIP features. We further show the visualization results of object localization and semantic segmentation in Figure <ref> and Figure <ref>, respectively. We observe that the activation regions generated by LERF are more dispersed, while ours are more concentrated, and our activation regions align better with the ground truth shape compared to those produced by LERF.

Ablation Study. We conduct ablations on the ramen scene and report the semantic segmentation results in Table <ref>. We test the query speed on an NVIDIA RTX-3090 GPU. Here, AE represents the autoencoder and 3D-GS denotes 3D Gaussian Splatting. Without any of our proposed components, our baseline equals LERF, which has a speed of 30.93 seconds per text query at the resolution of 988 × 731. Using SAM to replace the scale-based solution significantly increases the IoU by 18.54%, showing that our SAM-based solution effectively addresses the point ambiguity issue, leading to accurate 3D language features. Simply replacing NeRF with 3D Gaussian Splatting leads to the out-of-memory issue, as explicitly modeling CLIP features poses huge memory demands. Incorporating a scene-specific autoencoder effectively addresses this issue and results in further improvements in both accuracy and efficiency. In the end, our LangSplat achieves a 119× speedup over LERF while significantly surpassing LERF in terms of accuracy. We further conducted the ablations on the 3D-OVS dataset, which has a higher image resolution of 1440 × 1080. Table <ref> lists the results on the bench scene. We also tested the query speed on an NVIDIA RTX-3090 GPU. We observed that with the increase in image resolution, the speedup over LERF further improved to 199×, which demonstrates the huge potential of our method. In fact, the rendering process in our method is highly efficient; most of the computational time is spent in the decoder rather than in rendering. Therefore, as the resolution increases, the computational cost of our method only experiences a slight increase. We believe that an even higher speedup can be achieved when dealing with higher-resolution scenes.

§.§ Results on the 3D-OVS dataset

Quantitative Results.
We compare our method with other 2D and 3D state-of-the-art methods on the 3D-OVS dataset in Table <ref>. We observe that our method not only outperforms 2D-based methods such as ODISE <cit.> and OV-Seg <cit.>, but also achieves better results than 3D-based methods including LERF <cit.> and 3D-OVS <cit.> by a large margin. Note that on this dataset, we generate object masks based only on the query text, while others, such as 3D-OVS, require the complete category list. In the end, our method achieves an overall mIoU of 93.4%, which demonstrates that our method effectively learns a precise 3D language field.

Qualitative Results. We present the qualitative results in Figure <ref>. As LERF suffers from the patchy issue and learns over-smoothed features, it fails to find accurate object boundaries. Among all state-of-the-art methods, our method gives the most accurate segmentation maps, which further demonstrates the effectiveness of our LangSplat.

§ CONCLUSION

In this paper, we have presented LangSplat, a method for constructing 3D language fields that enables precise and efficient open-vocabulary querying within 3D spaces. By extending 3D Gaussian Splatting with language features and learning a scene-specific language autoencoder, LangSplat circumvents the slow rendering speed associated with NeRF-based methods. Furthermore, we propose to learn the semantic hierarchy defined by SAM, which effectively resolves the point ambiguity problem, enabling more precise and reliable 3D language fields. The experimental results clearly demonstrate LangSplat's superiority over existing state-of-the-art methods like LERF, particularly in terms of its remarkable 199× speed improvement and enhanced performance in open-ended 3D language query tasks.

§ VIDEO DEMO

In Figure 1 of our main paper, we have visualized the language features learned by LERF and our method. For a fair comparison, we perform PCA on the decoded features Ψ(F_t^l) ∈ ℝ^D × H × W for our method. However, one benefit of our method is that we are able to directly visualize the learned language features in the encoded 3-dimensional latent space, which can ensure color consistency between frames.[The consistency of color in PCA visualizations across different frames is not ensured.] Specifically, we normalize the encoded 3-dimensional latent features H_t^l ∈ ℝ^3 × H × W and visualize them by treating the 3-dimensional features as RGB channels. We strongly recommend that readers refer to our video demo to observe the learned 3D language fields in the scene-specific latent space. The video demonstrates that our method has acquired a 3D language representation that is both 3D-consistent and distinctly shaped, which significantly distinguishes it from existing methods that often only learn 3D language representations with blurred boundaries. Meanwhile, our approach achieves a speedup of 119× compared to LERF at a resolution of 988 × 731, which further improves to 199× at a resolution of 1440 × 1080.

§ MORE IMPLEMENTATION DETAILS

For each text query, we can obtain three relevancy maps with our trained 3D language Gaussians, each representing one semantic level defined by SAM. We then use different strategies to choose the best semantic level and obtain the predictions for the different tasks.

3D Object Localization on LERF. To mitigate the impact of outliers, we first employ a mean convolution filter with a size of 20 to smooth the values of the three relevancy maps.
Among the smoothed relevancy maps, we select the one with the highest smoothed relevancy score and take the corresponding position as the final prediction.

3D Semantic Segmentation on LERF. Similarly, to mitigate the influence of outliers, we apply a mean filter with a size of 20 to smooth the three relevancy maps. Subsequently, we select the relevancy map with the maximum smoothed relevancy score for binary mask prediction. For the selected relevancy map, we first normalize its relevancy scores and then use a threshold to obtain a binary image as the final prediction mask.

3D Semantic Segmentation on 3D-OVS. For each class query, we obtain three relevancy maps. We apply a threshold of 0.4 to these relevancy maps, setting relevancy scores below 0.4 to 0 and relevancy scores above 0.4 to 1, resulting in three binary maps. We calculate the average relevancy score within the mask region for each relevancy map and select the relevancy map with the highest average response as the final predicted binary map.

§ MORE QUANTITATIVE RESULTS

In addition to the mIoU metric, the Accuracy metric is also employed on the 3D-OVS dataset in <cit.>.[After carefully reviewing the code and results, we discovered that the mAP results reported in <cit.> are, in fact, the Accuracy results.] Therefore, we also compare our method with other state-of-the-art methods on the 3D-OVS dataset using the Accuracy metric. The results are shown in Table <ref>. We observe that our method consistently outperforms the other methods, which further illustrates the superiority of our method.

§ MORE ABLATION STUDY

To reduce the memory cost of our 3D language Gaussians, we proposed the scene-specific autoencoder to learn a latent feature. We show the ablation results for different latent dimensions d on the bench scene of the 3D-OVS dataset in Table <ref>. We observe that as d increases, the mIoU performance improves, with only a slight increase in the time cost. We chose d = 3 because it allows us to directly visualize the learned 3D language field in the latent space by treating the 3-dimensional features as the RGB channels. We also strongly encourage readers to refer to our video demo to observe how our learned language field accurately captures the precise 3D shape of objects in the scene-specific latent space.

§ MORE VISUALIZATION RESULTS

3D Object Localization on LERF. We visualize more examples from the LERF dataset for open-vocabulary 3D object localization in Figure <ref>. We found that for text queries such as "red apple" and "plate", LERF failed to correctly locate the 3D positions, whereas our method succeeded. For text queries like "waldo" and "chopsticks", although LERF could identify the correct location, its activation values were more dispersed, whereas our method was able to focus more precisely on the queried object.

3D Semantic Segmentation on LERF. We demonstrate more examples from the LERF dataset for open-vocabulary 3D semantic segmentation in Figure <ref>. We observe that the results produced by LERF are unable to provide the precise shape of the queried object and exhibit a significant amount of noise, whereas our method can accurately depict the object's shape. These results show the effectiveness of our proposed LangSplat.

3D Semantic Segmentation on 3D-OVS. We show more scenes from the 3D-OVS dataset for open-vocabulary 3D semantic segmentation in Figures <ref>, <ref>, <ref>, and <ref>, respectively.
Compared to the previous state-of-the-art method 3D-OVS, our approach provides more precise object boundaries and exhibits reduced noise, which illustrates that our LangSplat learns a more accurate 3D language field.
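Putting the querying steps together, a sketch of the per-query post-processing is given below; NumPy and SciPy are assumed, and the filter size and level-selection rule follow the implementation details above.

```python
# Sketch of the per-query post-processing described above; NumPy and
# SciPy are assumed. `phi_img` is the (H, W, D) map of decoded language
# embeddings, `phi_qry` the (D,) query embedding, and `phi_canon` the
# (4, D) embeddings of the canonical phrases.
import numpy as np
from scipy.ndimage import uniform_filter

def relevancy(phi_img, phi_qry, phi_canon):
    s_qry = np.exp(phi_img @ phi_qry)                             # (H, W)
    s_can = np.exp(np.einsum("hwd,cd->hwc", phi_img, phi_canon))  # (H, W, 4)
    return (s_qry[..., None] / (s_qry[..., None] + s_can)).min(axis=-1)

def localize(rel_maps):
    # rel_maps: three relevancy maps (subpart/part/whole); smooth with a
    # 20x20 mean filter and pick the level with the highest score
    smoothed = [uniform_filter(r, size=20) for r in rel_maps]
    best = max(smoothed, key=np.max)
    return np.unravel_index(np.argmax(best), best.shape)
```
| http://arxiv.org/abs/2312.16084v1 | {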
"authors": [
"Minghan Qin",
"Wanhua Li",
"Jiawei Zhou",
"Haoqian Wang",
"Hanspeter Pfister"
],
"categories": [
"cs.CV"
],
"primary_category": "cs.CV",
"published": "20231226151437",
"title": "LangSplat: 3D Language Gaussian Splatting"
} |
Hierarchical variable clustering based on the predictive strength between random vectors (This version: January 14, 2024)

Sebastian Fuchs (Department for Artificial Intelligence & Human Interfaces, University of Salzburg, Hellbrunner Strasse 34, 5020 Salzburg, Austria, [email protected]), Yuping Wang (Department for Artificial Intelligence & Human Interfaces, University of Salzburg, Hellbrunner Strasse 34, 5020 Salzburg, Austria, [email protected])

A rank-invariant clustering of variables is introduced that is based on the predictive strength between groups of variables, i.e., two groups are assigned a high similarity if the variables in the first group contain high predictive information about the behaviour of the variables in the other group and/or vice versa. The method presented here is model-free, dependence-based and does not require any distributional assumptions. Various general invariance and continuity properties are investigated, with special attention to those that are beneficial for the agglomerative hierarchical clustering procedure. A fully non-parametric estimator is considered whose excellent performance is demonstrated in several simulation studies and by means of real-data examples.

62G05; 62H05; 62H20

§ INTRODUCTION

A hierarchical variable cluster analysis aims at identifying interrelationships and associations present within a portfolio 𝒳 = {X_1, …, X_m} of m ≥ 3 random variables and, thus, at revealing important insights about the inner structure of the portfolio. Variable clustering can therefore be particularly useful as a pre-processing step when analysing complex structured data in an unsupervised setting. The key idea underlying this type of data analysis is to arrange similar objects (i.e., random variables) into groups (i.e., clusters) <cit.>, though what is meant by similarity may have very different interpretations.

A common approach in hierarchical clustering so far is to consider similarity as a feature that occurs primarily between two individual random variables and that is detectable using measures of pairwise association such as (Pearson) correlation, Kendall's tau, Spearman's rho, or variants of these, in combination with some linkage function that assigns a degree of similarity also to groups of more than two random variables but takes into account exclusively pairwise information (see, e.g., <cit.> for clustering financial data, <cit.> for clustering gene expression data, or <cit.> for clustering preferences). In this context, similarity is understood as a pairwise comovement of random variables, either linear or monotonic. Alternative measures of bivariate similarity rely on tail dependence and tail correlation <cit.> or on divergence measures such as mutual information <cit.>.

The approaches mentioned above share a common drawback: they all treat similarity as a pairwise feature and hence neglect possible higher-dimensional dependencies that can occur among more than two random variables. Recently, in <cit.>, the authors solved this issue by introducing a general framework for quantifying multivariate similarity, enabling higher-dimensional dependencies to be taken into account.
In particular, they proposed several similarity measures based on multivariate concordance, thus treating similarity as equivalent to comonotonicity (an extreme dependence concept studied, e.g., in <cit.>). With the same objective, in <cit.>, the authors proposed Φ-dependence for quantifying multivariate similarity, thus treating similarity as equivalent to the presence of a dependence structure that is singular with respect to the product of its marginals.

In contrast to <cit.>, in the present paper, two (disjoint) subsets 𝕏, 𝕐 ⊆ 𝒳 of random variables are assigned high similarity if the variables in the set 𝕏 contain high predictive information about the behaviour of the variables in the other set 𝕐 and/or vice versa. Thus, if two groups of random variables are clustered, then a high degree of functional dependence occurs between the two groups; in other words: the variables in one of the groups can be used to predict the variables in the second group. For that purpose, a hierarchical clustering procedure is introduced that is based on measures of predictability for multi-outcome data. Such a measure (of directed dependence) assigns to every pair of subsets (𝕏, 𝕐) a value in [0,1], equals 0 if and only if the variables within 𝕐 are independent of those within 𝕏, and equals 1 if and only if the variables within 𝕐 perfectly depend on the variables within 𝕏, i.e., each Y_k ∈ 𝕐 equals a measurable function of the variables within 𝕏 almost surely. To the best of the authors' knowledge, so far the only two types of measures of predictability applicable to multi-outcome data have been introduced in <cit.> and <cit.>; here we focus on the method introduced in <cit.> due to its numerous favourable properties, the latter being a natural extension of Chatterjee's famous coefficient of correlation introduced in <cit.>. As a consequence, similarity is treated here as equivalent to (mutual) perfect dependence, a dependence concept that is less restrictive than the comonotonicity used in <cit.> but at the same time not as comprehensive as the concept of singularity used in <cit.>.

The paper is organized as follows: Following and adapting the approach in <cit.> for quantifying multivariate similarity, in Section <ref> we introduce so-called dissimilarity functions capable of detecting perfect dependence (type <ref>) or mutual perfect dependence (type <ref>) between two non-empty subsets of 𝒳, and depict their optimization space. For each type of dissimilarity function, key properties are presented and (where possible) its relation to alternative notions of multivariate similarity available in the literature (see, e.g., <cit.>) is discussed. Evidently, the choice of the dissimilarity function in question is crucial for the clustering result; different clustering outputs may be obtained from the same data set when applying different dissimilarity functions. These clustering results may then provide new insights into the nature of the data, such as unexpected substructures whose revelation may trigger a more advanced data analysis. In view of the potential impact of the chosen dissimilarity function on the clustering result, it is worth investigating whether the former exhibits certain desirable properties like invariance and continuity (Section <ref>). The invariance properties indicate that the introduced dissimilarity functions are dependence-based, and continuity ensures that a certain level of noise present in the data does not cause the final clustering result to deviate too much.
In Section <ref>, estimators for the proposed type <ref> and <ref> dissimilarity functions are presented, and it is shown that these estimators are strongly consistent and can be computed in O(n log n) time. Finally, Section <ref> contains several simulation studies and real-data examples which are used (1) for evaluating the performance of the different candidates for type <ref> and type <ref> dissimilarity functions, (2) for comparing the agglomerative hierarchical variable clustering methods based on perfect dependence (type <ref>) and mutual perfect dependence (type <ref>) to alternative clustering methods available in the literature, (3) for elaborating their resilience to noise as an application of the above-mentioned continuity property, and (4) for evaluating their performance when the number of random variables to be clustered is large.

Throughout this manuscript, let m ≥ 3 be an integer which will be kept fixed, and consider a (finite) set/portfolio 𝒳 = {X_1,…,X_m} of random variables defined on the same probability space (Ω, 𝒜, ℙ) which are always assumed to be non-degenerate, i.e., for all k ∈ {1,…,m} the distribution of X_k does not follow a one-point distribution. Any subset of 𝒳 will be denoted by upper-case blackboard letters, e.g., 𝕏, 𝕐, and let 𝒫_0(𝒳) denote the set of all non-empty subsets of 𝒳. Given a subset 𝕏 = {X_1, …, X_p} ∈ 𝒫_0(𝒳) with p ≤ m, we indicate by 𝐗 a vector representation of 𝕏, i.e., a p-dimensional random vector whose coordinates are distinct elements of 𝕏. Clearly, the vector representation of any 𝕏 need not be unique. Further, let L^0(ℝ^p) denote the space of all p-dimensional random vectors.

§ HIERARCHICAL VARIABLE CLUSTERING BASED ON (MUTUAL) PERFECT DEPENDENCE

Before turning to the hierarchical variable clustering procedure, we first specify what exactly we mean by similarity of two non-empty subsets of 𝒳 (not necessarily equal in cardinality), and how this similarity is to be quantified (Subsection <ref>). We achieve this by following the approach in <cit.> and introducing a so-called dissimilarity function (Subsection <ref>), although we take the term similarity to be much more general than it is used in <cit.>. More precisely, we introduce dissimilarity functions capable of detecting perfect dependence (type <ref>) or mutual perfect dependence (type <ref>) between two non-empty subsets of 𝒳 (Subsection <ref>) and, for each type of dissimilarity function, key properties are presented. In addition, alternative concepts of multivariate similarity available in the literature are discussed and (where possible) their relation to (mutual) perfect dependence is elaborated (Subsection <ref>).

§.§ Quantifying (mutual) perfect dependence

We first introduce two dependence concepts for sets of random variables that are of key importance for this work (cf. <cit.>):

Perfect dependence: A set 𝕐 = {Y_1,…,Y_q} ∈ 𝒫_0(𝒳), q ≥ 1, is said to be perfectly dependent on the set 𝕏 = {X_1,…,X_p} ∈ 𝒫_0(𝒳), p ≥ 1, if, for every k ∈ {1,…,q}, there exists some measurable function f_k such that Y_k = f_k(𝐗) almost surely.

Mutual perfect dependence: Two sets 𝕏 ∈ 𝒫_0(𝒳) and 𝕐 ∈ 𝒫_0(𝒳) are said to be mutually perfectly dependent if 𝕐 is perfectly dependent on 𝕏 and 𝕏 is perfectly dependent on 𝕐.

Perfect dependence of 𝕐 on 𝕏 is a concept of directed dependence in which the variables within the set 𝕏 provide full information about the variables within 𝕐, irrespective of the order of the variables within 𝕏 and irrespective of the order of the variables within 𝕐. Apparently, mutual perfect dependence implies perfect dependence.
To uncover dependence structures with high predictive information, in what follows we employ the so-called simple extension of Azadkia & Chatterjee's rank correlation T^q introduced in <cit.>: For the q-dimensional random vector 𝐘, q ∈ ℕ, given the p-dimensional random vector 𝐗, T^q is defined by

T^q(𝐘|𝐗) := 1 − [q − ∑_{i=1}^q T(Y_i | (𝐗, Y_{i−1},…,Y_1))] / [q − ∑_{i=1}^q T(Y_i | (Y_{i−1},…,Y_1))]

with T(Y_1|∅) := 0 and T denoting the simple measure of conditional dependence introduced in <cit.> and given by

T(Y|𝐗) := ∫_ℝ Var(ℙ(Y ≥ y | 𝐗)) dℙ^Y(y) / ∫_ℝ Var(𝟙_{Y ≥ y}) dℙ^Y(y).

T as defined in (<ref>) fulfills

T(Y|𝐗) = ∫_{ℝ^p × ℝ^p} ∫_ℝ [ℙ(Y ≥ y | 𝐗 = 𝐱_1) − ℙ(Y ≥ y | 𝐗 = 𝐱_2)]² dℙ^Y(y) d(ℙ^𝐗 ⊗ ℙ^𝐗)(𝐱_1, 𝐱_2) / (2 ∫_ℝ Var(𝟙_{Y ≥ y}) dℙ^Y(y))

and can therefore be considered either as a coefficient indicating how sensitive the conditional distribution of Y given 𝐗 = 𝐱 is on 𝐱 or, according to (<ref>), as an index quantifying the variability of the conditional distributions; see <cit.>.

As shown in <cit.>, T^q(𝐘|𝐗) is invariant with respect to permutations of the variables within 𝕏 = {X_1,…,X_p}, and permutation invariance with respect to the variables within 𝕐 = {Y_1,…,Y_q} can be achieved by defining the map

κ^q|p(𝐘|𝐗) := (1/q!) ∑_{σ ∈ S_q} T^q(σ(𝐘)|𝐗)

where σ(𝐘) := (Y_{σ_1},…,Y_{σ_q}) for σ = (σ_1,…,σ_q) ∈ S_q, the set of permutations of {1,…,q}. κ^q|p is a measure of predictability for the q-dimensional random vector 𝐘 given the p-dimensional random vector 𝐗, i.e., it fulfills the following properties; see, e.g., <cit.>:
* 0 ≤ κ^q|p(𝐘|𝐗) ≤ 1.
* κ^q|p(𝐘|𝐗) = 0 if and only if the variables in 𝕐 are independent of those in 𝕏.
* κ^q|p(𝐘|𝐗) = 1 if and only if 𝕐 is perfectly dependent on 𝕏.
In addition to the aforementioned three crucial characteristics <ref> to <ref>, κ^q|p fulfills a number of additional desirable properties, including the information gain inequality, the characterization of conditional independence, various invariance properties, a monotonicity and a continuity property, the data processing inequality, and self-equitability; we refer to <cit.> for more background on these characteristics. Several of these properties can be transferred to the corresponding dissimilarity functions and are studied in Section <ref> below.

In <cit.>, the authors introduced the so-called kernel partial correlation (KPC) coefficient, the so far only other known measure of predictability applicable to multi-outcome data, i.e., a measure satisfying <ref> to <ref>. KPC measures the maximum mean discrepancy distance between the conditional distributions ℙ^𝐘|𝐗=𝐱 and the unconditional distribution ℙ^𝐘, and hence provides information on how much information on 𝐘 is gained by knowing 𝐗. To the best of the authors' knowledge, only few properties of the KPC coefficient are known so far <cit.>, which is why in the present article we focus on κ^q|p.

§.§ Dissimilarity functions based on (mutual) perfect dependence

For examining and performing a cluster analysis based on (mutual) perfect dependence, we require the notion of a dissimilarity function: A (p,q)-dissimilarity function d^p,q based on (mutual) perfect dependence is a mapping that assigns to every pair (𝐗,𝐘) ∈ L^0(ℝ^p) × L^0(ℝ^q) a non-negative value in [0,∞) with the following properties:
* Symmetry and permutation invariance: The identities d^p,q(𝐗,𝐘) = d^q,p(𝐘,𝐗) and d^p,q(𝐗,𝐘) = d^p,q(σ_p(𝐗), σ_q(𝐘)) hold for all σ_p ∈ S_p and σ_q ∈ S_q.
* Law-invariance: d^p,q(𝐗,𝐘) = d^p,q(𝐗_1,𝐘_1) for all (𝐗,𝐘), (𝐗_1,𝐘_1) ∈ L^0(ℝ^p) × L^0(ℝ^q) such that (𝐗,𝐘) =_d (𝐗_1,𝐘_1).
* Multivariate similarity:
  * based on perfect dependence (type <ref>): d^p,q(𝐗,𝐘) = 0 if and only if either 𝐘 is perfectly dependent on 𝐗 or 𝐗 is perfectly dependent on 𝐘.
  * based on mutual perfect dependence (type <ref>): d^p,q(𝐗,𝐘) = 0 if and only if both 𝐘 is perfectly dependent on 𝐗 and 𝐗 is perfectly dependent on 𝐘.

<ref> expresses a natural symmetry property between and within random vectors, i.e., the dissimilarity degree between two random vectors does not change when permuting the variables within each vector or when interchanging the random vectors. In a nutshell, the dissimilarity degree depends on the information contained in the variables of a random vector, but not on how the variables are arranged (so it is a property of sets, not vectors). This property ensures that the definition of a dissimilarity function is set-compatible. <ref> states that the dissimilarity function is law-invariant; see <cit.> for a more detailed discussion of these properties. <ref>, instead, specifies what we understand by similarity between two random vectors: the occurrence of a directed functional relationship between the involved random vectors, either in one direction only <ref> or in both directions <ref>. In Subsection <ref> below we discuss and briefly summarize alternative notions of multivariate similarity available in the literature and (where possible) point out their relation to (mutual) perfect dependence.

Bearing in mind that independence represents the dependence structure with the least predictive information, an additional requirement on a dissimilarity function based on (mutual) perfect dependence could be to assign the maximal value to d^p,q(𝐗,𝐘) in the case 𝐗 and 𝐘 are independent. We will elaborate on this aspect below in Remark <ref>.

Following <cit.>, all the dissimilarity functions can be glued together into the following concept, which allows to assign a degree of dissimilarity to any pair of random vectors, regardless of the respective dimension: An extended dissimilarity function (of degree m) is a map

d: ⋃_{2 ≤ p+q ≤ m} L^0(ℝ^p) × L^0(ℝ^q) → [0,∞)

whose restriction d^p,q := d|_{L^0(ℝ^p) × L^0(ℝ^q)} to L^0(ℝ^p) × L^0(ℝ^q) is a (p,q)-dissimilarity function for all p,q ∈ ℕ with 2 ≤ p+q ≤ m.

Below, the different steps of an agglomerative hierarchical clustering algorithm based on an extended dissimilarity function d are presented (a schematic implementation in code is sketched below). Recall that the overall objective is to determine a suitable partition of a given (finite) set 𝒳 = {X_1,…,X_m} of m ≥ 3 random variables into non-empty and non-overlapping classes.
* Each random variable X_k ∈ 𝒳 is assigned to its own class ℂ_k, k ∈ {1,…,m}.
* For each pair of classes ℂ_1 and ℂ_2, a degree of dissimilarity is calculated and recorded in a dissimilarity matrix.
* A pair of classes exhibiting the smallest degree of dissimilarity (i.e., being most similar) is identified and merged, and the number of classes is reduced by one.
* Steps 2 and 3 are repeated until the number of classes equals 1.

§.§ Type <ref> and <ref> dissimilarity functions

We introduce a variety of type <ref> and type <ref> dissimilarity functions, all of which are combinations of the coefficient κ defined in (<ref>) and a given merging function that aggregates the individual degrees of predictability for the two directions. For each type, we recommend a suitable aggregating function (Remark <ref>) and present key properties of the resulting dissimilarity functions (Theorem <ref>).
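To fix ideas, the following is a minimal Python sketch of the agglomerative loop described above; it is a schematic aid rather than the implementation used later, and the callable dissim — standing in for an (estimated) extended dissimilarity function — is an assumption of this sketch.

```python
import numpy as np
from itertools import combinations

def agglomerate(data, dissim):
    """Generic agglomerative variable clustering.

    data   : (n, m) array, one column per random variable.
    dissim : callable taking two 2-d arrays (vector representations of
             two classes) and returning their dissimilarity degree.
    Returns the merge history (a dendrogram in list form).
    """
    clusters = [[k] for k in range(data.shape[1])]      # step 1: singletons
    history = []
    while len(clusters) > 1:
        # step 2: dissimilarity degree for each pair of classes
        pairs = list(combinations(range(len(clusters)), 2))
        scores = [dissim(data[:, clusters[i]], data[:, clusters[j]])
                  for i, j in pairs]
        # step 3: merge a pair with the smallest degree of dissimilarity
        best = int(np.argmin(scores))
        i, j = pairs[best]
        history.append((clusters[i], clusters[j], scores[best]))
        merged = clusters[i] + clusters[j]
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return history
```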
Inspired by correlation distances <cit.>, i.e., Kendall's tau distance and Spearman's rho distance, we aim at constructing dissimilarity functions based on the measure of predictability κ. For p,q ≥ 1 and a function A: [0,1]² → [0,∞), we define the mapping d_A^p,q: L^0(ℝ^p) × L^0(ℝ^q) → [0,∞) by

d_A^p,q(𝐗,𝐘) := A(1 − κ^p|q(𝐗|𝐘), 1 − κ^q|p(𝐘|𝐗)).

The mapping A aggregates the individual degrees of predictability in a desirable way. d_A^p,q is a (p,q)-dissimilarity function if and only if A is symmetric and
* (type <ref>) A(0,v) = 0 = A(u,0) with A(u,v) > 0 for all (u,v) ∈ (0,1]²,
* (type <ref>) A(0,0) = 0 with A(u,v) > 0 for all (u,v) ∈ [0,1]² \ {(0,0)}.
Since κ^q|p is invariant under permutations within each vector, property <ref> is equivalent to A being symmetric. Further, property <ref> is immediate from the fact that κ^q|p is law-invariant. Property <ref>, instead, follows from the fact that κ^q|p characterizes perfect dependence.

As functions A for aggregating the individual degrees of predictability, Corollary <ref> justifies the use of bivariate copulas; for more background on copulas we refer to <cit.>.

<ref> Suitable candidates for A in the case of dissimilarity functions based on perfect dependence involve copulas C being symmetric and strictly positive on (0,1]², so that (<ref>) reads

d_C^p,q(𝐗,𝐘) := C(1 − κ^p|q(𝐗|𝐘), 1 − κ^q|p(𝐘|𝐗)).

This includes copulas from most Archimedean families like Frank, Gumbel, Joe and Clayton copulas (the latter with positive parameter only), but also elliptical copulas like Gaussian copulas and Student t copulas. In particular, the independence copula Π and the Fréchet-Hoeffding upper bound M generate dissimilarity functions based on perfect dependence, namely,

d_M^p,q(𝐗,𝐘) = min{1 − κ^p|q(𝐗|𝐘), 1 − κ^q|p(𝐘|𝐗)},
d_Π^p,q(𝐗,𝐘) = (1 − κ^p|q(𝐗|𝐘)) (1 − κ^q|p(𝐘|𝐗)).

According to Corollary <ref>, the Fréchet-Hoeffding lower bound W fails to generate a proper dissimilarity function. Figure <ref> depicts the graph of the function d_C^p,q for various choices of C.

<ref> Suitable candidates for A in the case of dissimilarity functions based on mutual perfect dependence involve the dual of a symmetric copula. The dual of a copula C is the function C^∗: [0,1]² → [0,1] given by C^∗(u,v) := 1 − C(1−u, 1−v), so that (<ref>) then reads

d_C^∗^p,q(𝐗,𝐘) := 1 − C(κ^p|q(𝐗|𝐘), κ^q|p(𝐘|𝐗)).

In particular, duals of the Fréchet-Hoeffding bounds W and M and the independence copula Π generate dissimilarity functions based on mutual perfect dependence, namely,

d_M^∗^p,q(𝐗,𝐘) = 1 − min{κ^p|q(𝐗|𝐘), κ^q|p(𝐘|𝐗)},
d_Π^∗^p,q(𝐗,𝐘) = 1 − κ^p|q(𝐗|𝐘) κ^q|p(𝐘|𝐗),
d_W^∗^p,q(𝐗,𝐘) = 1 − max{κ^p|q(𝐗|𝐘) + κ^q|p(𝐘|𝐗) − 1, 0}.

Another possible construction principle is to average the individual degrees of predictability appropriately, i.e.,

d_ave^p,q(𝐗,𝐘) = 1 − (κ^p|q(𝐗|𝐘) + κ^q|p(𝐘|𝐗))/2,

which mimics but simplifies d_W^∗^p,q. Figure <ref> depicts the graph of the function d_C^∗^p,q for various choices of C.

We resume the discussion initiated in Remark <ref>. Due to the properties of a measure of predictability, we have κ^p|q(𝐗|𝐘) = 0 if and only if the variables within 𝕏 and those within 𝕐 are independent, which in turn is equivalent to κ^q|p(𝐘|𝐗) = 0. Otherwise there exist 𝐗 and 𝐘 with κ^p|q(𝐗|𝐘) being arbitrarily small but κ^q|p(𝐘|𝐗) being close to 1 (see Example <ref>).
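The two recommended constructions are cheap to evaluate once the two directed degrees of predictability are available. A minimal sketch — with kappa_xy and kappa_yx standing for (estimated) values of κ^p|q(𝐗|𝐘) and κ^q|p(𝐘|𝐗) — reads as follows.

```python
def d_A(kappa_xy, kappa_yx, A):
    """Generic construction d_A = A(1 - kappa(X|Y), 1 - kappa(Y|X))
    for a symmetric aggregating function A (e.g. a copula)."""
    return A(1.0 - kappa_xy, 1.0 - kappa_yx)

def d_pi(kappa_xy, kappa_yx):
    """Type-I construction with A = independence copula Pi(u, v) = u*v."""
    return (1.0 - kappa_xy) * (1.0 - kappa_yx)

def d_ave(kappa_xy, kappa_yx):
    """Type-II construction: one minus the average of the two directed
    degrees of predictability."""
    return 1.0 - 0.5 * (kappa_xy + kappa_yx)

# The strongly asymmetric situation discussed above (kappa small in one
# direction but equal to 1 in the other) gives d_pi = 0 but d_ave > 0:
print(d_pi(0.25, 1.0), d_ave(0.25, 1.0))   # -> 0.0 0.375
```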
From the perspective of predictive strength (and having in mind that independence represents the dependence structure with the least predictive information), it is therefore advisable that a dissimilarity function (type <ref> and <ref>) equals 1 exclusively in the case of independence. For type <ref>, we observe that d_C^p,q(𝐗,𝐘) in <ref> equals 1 if and only if the variables in 𝕏 are independent of those in 𝕐. Thus, using copulas for constructing dissimilarity functions of type <ref> is justified not only from the perspective of similarity (here: perfect dependence) but also from the perspective of independence. In order to achieve such a characterization of independence also for dissimilarity functions of type <ref>, it is additionally to be required that A(u,v) = 1 if and only if u = 1 = v. This excludes duals of copulas for the construction and draws particular attention to d_ave^p,q.

[Strongly asymmetric directed dependence] Consider the random variables X_1 ∼ 𝒰(0,1) and X_2 = k·X_1 mod 1, k ∈ ℕ, whose joint distribution is depicted in Figure <ref>. Then X_2 perfectly depends on X_1 (but the variables are not mutually perfectly dependent) and straightforward but tedious calculation yields

κ^1|1(X_1|X_2) = 1/k²  and  κ^1|1(X_2|X_1) = 1,

and hence d_Π^1,1(X_1,X_2) = 0 and d_ave^1,1(X_1,X_2) = 1/2 − 1/(2k²).

Building upon the previous discussion and the simulation study presented in Subsection <ref>, we come up with the following recommendation for suitable dissimilarity functions of type <ref> and type <ref>.

<ref> In Subsection <ref> we evaluate the different candidates for a dissimilarity function of type <ref> and observe that choosing copulas that are too close to the Fréchet-Hoeffding upper bound M exhibits a tendency for the clustering output to produce chains (see Figure <ref>). Therefore, we suggest using the independence copula Π as aggregating function and hence the dissimilarity function d_Π^p,q.

<ref> In Subsection <ref> we evaluate the different candidates for a dissimilarity function of type <ref> and observe that choosing copulas that are too close to the Fréchet-Hoeffding lower bound W exhibits poor performance (see Figure <ref>). Therefore, and in light of the discussion in Remark <ref>, we suggest using the dissimilarity function d_ave^p,q.

Figure <ref> depicts the recommended type <ref> and <ref> dissimilarity functions. The next theorem encapsulates the findings from this section.

<ref> The dissimilarity function d_Π^p,q of type <ref> defined in (<ref>) fulfills
* d_Π^p,q(𝐗,𝐘) ∈ [0,1].
* d_Π^p,q(𝐗,𝐘) = 0 if and only if either 𝐘 is perfectly dependent on 𝐗 or 𝐗 is perfectly dependent on 𝐘.
* d_Π^p,q(𝐗,𝐘) = 1 if and only if the variables in 𝕏 are independent of those in 𝕐.

<ref> The dissimilarity function d_ave^p,q of type <ref> defined in (<ref>) fulfills
* d_ave^p,q(𝐗,𝐘) ∈ [0,1].
* d_ave^p,q(𝐗,𝐘) = 0 if and only if 𝐘 is perfectly dependent on 𝐗 and 𝐗 is perfectly dependent on 𝐘.
* d_ave^p,q(𝐗,𝐘) = 1 if and only if the variables in 𝕏 are independent of those in 𝕐.

§.§ Comparison with alternative hierarchical variable clustering methods

Alternative notions of multivariate similarity available in the literature are discussed and (where possible) their relation to (mutual) perfect dependence is elaborated. This results in a comparison of the recommended type <ref> and type <ref> dissimilarity functions (Subsection <ref>) with Φ-dependence, dissimilarity functions based on measures of multivariate concordance, and dissimilarity functions based on linkage methods.

§.§.§ Measures of multivariate concordance.
In <cit.>, multivariate similarity is considered as equivalent to comonotonicity: Two sets of random variables 𝕏 ∈ 𝒫_0(𝒳) and 𝕐 ∈ 𝒫_0(𝒳) are said to be comonotonic if all the random variables within 𝕏 ∪ 𝕐 are pairwise comonotonic, i.e., for each Z_1, Z_2 ∈ 𝕏 ∪ 𝕐 there exists a random variable Z such that (Z_1,Z_2) =_d (h_1(Z), h_2(Z)) for some increasing functions h_1, h_2. Comonotonicity is a pairwise and symmetric dependence concept in the sense that two sets of random variables are comonotonic if and only if all the pairs within their union are comonotonic; see, e.g., <cit.>. Apparently, comonotonicity implies mutual perfect dependence, which in turn implies perfect dependence. We illustrate the differences between these two approaches, also in terms of the clustering procedure, by means of the following example.

[(Mutual) perfect dependence versus comonotonicity] Consider the four-dimensional random vector (X_1,X_2,X_3,X_4) with (X_1,X_2) being distributed according to the Fréchet-Hoeffding lower bound, i.e., (X_1,X_2) ∼ W, (X_3,X_4) being distributed according to the Marshall-Olkin copula with parameter (α,β) = (1,0.5) (see, e.g., <cit.>), and such that the two vectors (X_1,X_2) and (X_3,X_4) are independent. Figure <ref> depicts scatterplots of the two pairs of variables. According to <cit.>, we obtain

d_Π^1,1(X_1,X_2) = (1 − κ^1|1(X_1|X_2))(1 − κ^1|1(X_2|X_1)) = (1 − 1)² = 0,
d_Π^1,1(X_3,X_4) = (1 − κ^1|1(X_3|X_4))(1 − κ^1|1(X_4|X_3)) = (1 − 2/5)(1 − 1/3) = 2/5,

and

d_ave^1,1(X_1,X_2) = 1 − (κ^1|1(X_1|X_2) + κ^1|1(X_2|X_1))/2 = 1 − 1 = 0,
d_ave^1,1(X_3,X_4) = 1 − (κ^1|1(X_3|X_4) + κ^1|1(X_4|X_3))/2 = 1 − 11/30 = 19/30.

Thus, the hierarchical clustering algorithms based on the recommended type <ref> and <ref> dissimilarity functions first cluster X_1 and X_2 and then X_3 and X_4, i.e., both algorithms prefer the dependence structure of (X_1,X_2) over that of (X_3,X_4). From the perspective of perfect dependence and mutual perfect dependence this is the correct choice, as X_1 contains more predictive information about X_2 than X_3 about X_4 (and vice versa). Instead, the hierarchical clustering algorithms based on measures of concordance, such as those introduced in <cit.> that are related to Kendall's tau and Spearman's rho, clearly first cluster X_3 and X_4 and then X_1 and X_2, i.e., the algorithms prefer the less predictive but more concordant dependence structure of (X_3,X_4) over that of (X_1,X_2). This is due to the fact that measures of concordance are monotone with respect to the pointwise/concordance order and that the copula of X_3 and X_4 pointwise exceeds the copula of X_1 and X_2. In other words: the dependence structure of (X_1,X_2) is countermonotonic and hence less concordant than the dependence structure of (X_3,X_4).

§.§.§ Φ-dependence.

While restricting to an absolutely continuous setting, in <cit.> the authors introduce Φ-dependence, a measure capable of characterizing both the independence of random vectors (like κ does) and the singularity of the joint dependence structure with respect to the product of the marginal dependence structures. Thus, in <cit.> multivariate similarity is considered as equivalent to the presence of a dependence structure that is singular with respect to the product of its marginals. This allows, in particular, for the detection of tail dependence. Apparently, in such a setting perfect dependence implies singularity.
As a consequence, dependence structures such as those given in <cit.> are considered to be highly similar although the corresponding degree of predictability is rather low. We illustrate the differences between these two approaches, also in terms of the clustering procedure, by means of the following example.

[(Mutual) perfect dependence versus Φ-dependence] Consider the four-dimensional random vector (X_1,X_2,X_3,X_4) with (X_1,X_2) being distributed according to the average of the Fréchet-Hoeffding upper and lower bound, i.e., (X_1,X_2) ∼ (M+W)/2, (X_3,X_4) being distributed according to an ordinal sum of Π (see, e.g., <cit.>) with respect to the partition (0,0.3), (0.3,0.5), (0.5,0.75), (0.75,1), and such that the two vectors (X_1,X_2) and (X_3,X_4) are independent. Figure <ref> depicts scatterplots of the two pairs of variables. According to <cit.> and due to the fact that the variables within (X_1,X_2) as well as within (X_3,X_4) are exchangeable, we have

d_Π^1,1(X_1,X_2) = (1 − κ^1|1(X_1|X_2))(1 − κ^1|1(X_2|X_1)) = (1 − 1/4)² = 9/16,
d_Π^1,1(X_3,X_4) = (1 − κ^1|1(X_3|X_4))(1 − κ^1|1(X_4|X_3)) = (1 − 3/4)² = 1/16,

and

d_ave^1,1(X_1,X_2) = 1 − (κ^1|1(X_1|X_2) + κ^1|1(X_2|X_1))/2 = 1 − 1/4 = 3/4,
d_ave^1,1(X_3,X_4) = 1 − (κ^1|1(X_3|X_4) + κ^1|1(X_4|X_3))/2 = 1 − 3/4 = 1/4.

Thus, the hierarchical clustering algorithms based on the recommended type <ref> and <ref> dissimilarity functions first cluster X_3 and X_4 and then X_1 and X_2, i.e., both algorithms prefer the dependence structure of (X_3,X_4) over that of (X_1,X_2). From the perspective of perfect dependence and mutual perfect dependence this is the correct choice, as X_3 contains more predictive information about X_4 than X_1 about X_2 (and vice versa). Instead, and according to <cit.>, the hierarchical clustering algorithm based on Φ-dependence first clusters X_1 and X_2 and then X_3 and X_4, i.e., the algorithm prefers the less predictive but more singular dependence structure of (X_1,X_2) over that of (X_3,X_4). This is due to the fact that the dependence structure of (X_1,X_2) is singular and hence attains maximum Φ-dependence.

§.§.§ Linkage methods.

When aiming at reducing the computation time in hierarchical variable clustering, usually linkage methods come into play. A linkage method relates the dissimilarity degree between two sets of variables 𝕏 = {X_1,…,X_p} and 𝕐 = {Y_1,…,Y_q} to the pairwise dissimilarity degrees d^1,1(X_i,Y_j) of variables X_i and Y_j from the two sets <cit.>. The three most common linkage methods are the following (a code sketch is given below):
1. Single linkage: d_single^p,q(𝕏,𝕐) := min_{i∈{1,…,p}, j∈{1,…,q}} d^1,1(X_i,Y_j)
2. Average linkage: d_average^p,q(𝕏,𝕐) := (1/(p·q)) ∑_{i=1}^p ∑_{j=1}^q d^1,1(X_i,Y_j)
3. Complete linkage: d_complete^p,q(𝕏,𝕐) := max_{i∈{1,…,p}, j∈{1,…,q}} d^1,1(X_i,Y_j)
Dissimilarity functions based on linkage methods exhibit many desirable properties, which in most cases are inherited from the underlying pairwise dissimilarity function d^1,1; we refer to <cit.> for a detailed discussion of dissimilarity functions based on linkage methods. Even though linkage methods offer some advantages, they all share the same structural drawback: They take into account solely pairwise information, implying that the value of a (p,q)-dissimilarity function of two sets of variables only depends on the pairwise interrelations between the two sets.
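Given a precomputed matrix of pairwise dissimilarity degrees, the three linkage rules reduce to simple block reductions; the following minimal sketch (the matrix D and the index lists are assumptions of the sketch) makes the purely pairwise nature of the construction explicit.

```python
import numpy as np

def linkage_dissim(D, A, B, method="average"):
    """Linkage-based (p,q)-dissimilarity between two classes of variables,
    built from a precomputed pairwise dissimilarity matrix D with
    D[i, j] = d^{1,1}(X_i, X_j).

    A, B   : index lists of the two classes.
    method : 'single' (min), 'average' (mean) or 'complete' (max).
    """
    block = D[np.ix_(A, B)]        # all pairwise degrees d^{1,1}(X_i, Y_j)
    if method == "single":
        return block.min()
    if method == "complete":
        return block.max()
    return block.mean()            # average linkage
```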
The following two examples illustrate this structural drawback, also in terms of the clustering procedure:

[Perfect dependence versus linkage methods] Consider the four-dimensional random vector (X_1, X_2, X_3, X_4) with X_1 ∼ 𝒩(0,1), X_2 ∼ 𝒩(0,1), X_3 = X_1/2 + X_2 and X_4 = X_2 + ϵ, where ϵ ∼ 𝒩(0,1). Figure <ref> depicts the pairwise dependence structures of the four variables. To cluster the four variables, we first use the (multivariate) type <ref> dissimilarity function d_Π^p,q and then the (pairwise) type <ref> dissimilarity function d_Π^1,1 in combination with the three linkage methods. The thereby obtained clustering results are presented in Figure <ref>. We observe that all methods first cluster the variables X_2 and X_3. However, in the second step the hierarchical clustering algorithms based on the (pairwise) dissimilarity function of type <ref> in combination with any linkage method then cluster X_4 with X_2 and X_3, while the (multivariate) dissimilarity function of type <ref> clusters X_1 with X_2 and X_3. We therefore find that, even though the linkage method is more efficient in terms of computation time, it is not capable of recognising the perfect dependence of X_3 on (X_1,X_2), unlike the (multivariate) type <ref> dissimilarity function. It even turns out that linkage methods fail to constitute proper (multivariate) type <ref> and <ref> dissimilarity functions in the sense of Subsection <ref>.

[Linkage methods fail to constitute type <ref> and <ref> dissimilarity functions]
<ref> Consider d_Π^1,1, two independent random variables X_1 and X_2, and define X_3 := X_1 + X_2. Then X_3 is perfectly dependent on (X_1,X_2); however, since κ^1|1(X_1|X_3) < 1 and κ^1|1(X_3|X_1) < 1,

d_Π^1,1(X_1,X_3) = (1 − κ^1|1(X_1|X_3))(1 − κ^1|1(X_3|X_1)) > 0

as well as d_Π^1,1(X_2,X_3) > 0, and hence

d_single^2,1((X_1,X_2),X_3) > 0 = d_Π^2,1((X_1,X_2),X_3),
d_average^2,1((X_1,X_2),X_3) > 0 = d_Π^2,1((X_1,X_2),X_3),
d_complete^2,1((X_1,X_2),X_3) > 0 = d_Π^2,1((X_1,X_2),X_3).

Therefore, linkage methods fail to constitute proper type <ref> dissimilarity functions.

<ref> Consider d_ave^1,1, two independent random variables X_1 and X_2, and define X_3 := X_1 and X_4 := X_2. Then (X_1,X_2) and (X_3,X_4) are mutually perfectly dependent; however,

d_average^2,2((X_1,X_2),(X_3,X_4)) = 0.5 > 0 = d_ave^2,2((X_1,X_2),(X_3,X_4)),
d_complete^2,2((X_1,X_2),(X_3,X_4)) = 1 > 0 = d_ave^2,2((X_1,X_2),(X_3,X_4)).

In addition, (X_1,X_2) and X_3 fail to be mutually perfectly dependent; however,

d_single^2,1((X_1,X_2),X_3) = 0 < d_ave^2,1((X_1,X_2),X_3).

Therefore, linkage methods even fail to constitute proper type <ref> dissimilarity functions.

In <cit.>, the authors further introduced a dissimilarity function based on the multivariate tail dependence coefficient, hence considering similarity as equivalent to the occurrence of maximum lower tail dependence.

§ INVARIANCE AND CONTINUITY

In this section a continuity property and various invariance properties (according to <cit.>) for the dissimilarity functions d_Π^p,q of type <ref> and d_ave^p,q of type <ref> are presented. From the invariance properties we can conclude that the dissimilarity functions at hand are dependence-based, and continuity ensures that a certain level of noise present in the data does not cause the final clustering result to deviate too much. We illustrate this resilience of d_Π^p,q and d_ave^p,q to noise via a simulation study (see Subsection <ref> below).
Recall that κ^q|p defined in <ref> is a measure of predictability, i.e., κ^q|p(𝐘|𝐗) ∈ [0,1], κ^q|p(𝐘|𝐗) equals 0 exclusively in the case the variables within 𝕐 are independent of those within 𝕏, and equals 1 if and only if 𝕐 is perfectly dependent on 𝕏. In addition, and as mentioned in Section <ref>, κ^q|p fulfills a number of additional desirable properties, some of which can be transferred to the corresponding dissimilarity functions d_Π^p,q of type <ref> and d_ave^p,q of type <ref>.

As a first important property of type <ref> and <ref> dissimilarity functions, we show their invariance when replacing the variables within 𝕏 and those within 𝕐 by their individual distributional transforms, i.e., the variables can be replaced by their individual ranks. Corollary <ref> is an immediate consequence of <cit.>.

Let 𝕌 and 𝕍 denote the sets of individual distributional transforms of 𝕏 and 𝕐, respectively, i.e., 𝕌 = {F_X_1(X_1), …, F_X_p(X_p)} and 𝕍 = {F_Y_1(Y_1), …, F_Y_q(Y_q)}. Then

d^p,q(𝐗,𝐘) = d^p,q(𝐔,𝐕),

where d^p,q is either a dissimilarity function of type <ref> or <ref> according to (<ref>), (<ref>) or (<ref>).

The next result is due to <cit.> and demonstrates that the value of a dissimilarity function remains unchanged when transforming the variables within 𝕏 and those within 𝕐 by strictly increasing and bijective transformations.

Let g_i, h_k: ℝ → ℝ, i ∈ {1,…,p}, k ∈ {1,…,q}, be strictly increasing and bijective transformations, and let 𝕌 and 𝕍 denote the sets of transformed variables, i.e., 𝕌 = {g_1(X_1), …, g_p(X_p)} and 𝕍 = {h_1(Y_1), …, h_q(Y_q)}. Then

d^p,q(𝐗,𝐘) = d^p,q(𝐔,𝐕),

where d^p,q is either a dissimilarity function of type <ref> or <ref> according to (<ref>), (<ref>) or (<ref>).

In a nutshell, Corollary <ref> confirms that the concept of dissimilarity considered here is dependence-based. Although, according to Theorem <ref>, the values 0 and 1 of the dissimilarity functions d_Π^p,q and d_ave^p,q have a clear interpretation, the meaning of d_Π^p,q and d_ave^p,q taking values in the interval (0,1) is not specified. This justifies the investigation of modes of convergence that are compatible with the introduced dissimilarity functions. Since κ^q|p depends on conditional distributions, convergence in distribution of the sequence (𝐗_n,𝐘_n)_{n∈ℕ} is not sufficient for the convergence of (d_Π^p,q(𝐗_n,𝐘_n))_{n∈ℕ} and (d_ave^p,q(𝐗_n,𝐘_n))_{n∈ℕ}; see <cit.>. A promising candidate for achieving a continuity statement has been presented in <cit.>, where the authors showed that T^q (and consequently κ^q|p) is continuous with respect to the notion of conditional weak convergence going back to <cit.>. Applying these results, we show that the here considered dissimilarity functions are continuous in classes of elliptical and l_1-norm symmetric distributions.

For d ∈ ℕ, denote by 𝒰(ℝ^d) a class of bounded, continuous, weak convergence-determining functions mapping from ℝ^d to ℂ. Denote by ⇝ convergence in distribution. A sequence (f_n)_{n∈ℕ} of functions mapping from ℝ^d to ℂ is said to be asymptotically equicontinuous on an open set V ⊂ ℝ^d if, for all ε > 0 and 𝐱 ∈ V, there exist δ(𝐱,ε) > 0 and n(𝐱,ε) ∈ ℕ such that, whenever |𝐲 − 𝐱| ≤ δ(𝐱,ε), then |f_n(𝐲) − f_n(𝐱)| < ε for all n > n(𝐱,ε). Further, (f_n)_{n∈ℕ} is said to be asymptotically uniformly equicontinuous on V if it is asymptotically equicontinuous on V and the constants δ(ε) = δ(𝐱,ε) and n(ε) = n(𝐱,ε) do not depend on 𝐱.

Consider the (p+q)-dimensional random vector (𝐗,𝐘) and a sequence (𝐗_n,𝐘_n)_{n∈ℕ} of (p+q)-dimensional random vectors. Let V_1 ⊂ ℝ^p and W_1 ⊂ ℝ^q be open such that ℙ(𝐗 ∈ V_1) = 1 and ℙ(𝐘 ∈ W_1) = 1.
For each choice of permutations σ ∈ S_p and all i ∈ {2,…,p}, let further O_i ⊂ ℝ^{i−1} and W_i ⊂ ℝ^{q+i−1} be open such that ℙ((X_σ_1,…,X_σ_i−1) ∈ O_i) = 1 and ℙ((𝐘, X_σ_1,…,X_σ_i−1) ∈ W_i) = 1, and for each choice of permutations τ ∈ S_q and all j ∈ {2,…,q}, let U_j ⊂ ℝ^{j−1} and V_j ⊂ ℝ^{p+j−1} be open such that ℙ((Y_τ_1,…,Y_τ_j−1) ∈ U_j) = 1 and ℙ((𝐗, Y_τ_1,…,Y_τ_j−1) ∈ V_j) = 1. If, further,
* (𝐗_n, 𝐘_n) ⇝ (𝐗, 𝐘),
* for all u ∈ 𝒰(ℝ), (𝔼[u(X_σ_1,n) | 𝐘_n = ·])_{n∈ℕ} is asymptotically equicontinuous on W_1 and, for all i ∈ {2,…,p}, (𝔼[u(X_σ_i,n) | (𝐘_n, X_σ_1,n,…,X_σ_i−1,n) = ·])_{n∈ℕ} is asymptotically equicontinuous on W_i and (𝔼[u(X_σ_i,n) | (X_σ_1,n,…,X_σ_i−1,n) = ·])_{n∈ℕ} is asymptotically equicontinuous on O_i,
* for all u ∈ 𝒰(ℝ), (𝔼[u(Y_τ_1,n) | 𝐗_n = ·])_{n∈ℕ} is asymptotically equicontinuous on V_1 and, for all j ∈ {2,…,q}, (𝔼[u(Y_τ_j,n) | (𝐗_n, Y_τ_1,n,…,Y_τ_j−1,n) = ·])_{n∈ℕ} is asymptotically equicontinuous on V_j and (𝔼[u(Y_τ_j,n) | (Y_τ_1,n,…,Y_τ_j−1,n) = ·])_{n∈ℕ} is asymptotically equicontinuous on U_j, and
* F_X_i,n ∘ F_X_i,n^{−1}(t) → F_X_i ∘ F_X_i^{−1}(t) and F_Y_j,n ∘ F_Y_j,n^{−1}(t) → F_Y_j ∘ F_Y_j^{−1}(t) for λ-almost all t ∈ (0,1) and for all i ∈ {1,…,p} and j ∈ {1,…,q},
then

d^p,q(𝐗_n, 𝐘_n) → d^p,q(𝐗, 𝐘),

where d^p,q is either a dissimilarity function of type <ref> or <ref> according to (<ref>), (<ref>) or (<ref>). Continuity of d^p,q can be deduced from the continuity of κ^q|p and κ^p|q (see <cit.>), the fact that every copula is continuous, and the fact that the average of continuous functions is again continuous.

As a direct consequence of Theorem <ref>, we conclude that, for elliptical distributions, the dissimilarity functions d_Π^p,q and d_ave^p,q are continuous in the scale matrix and the radial part: A random vector (𝐗,𝐘) is said to be elliptically distributed for some vector μ ∈ ℝ^{p+q}, some positive semi-definite matrix Σ = (σ_ij)_{1≤i,j≤p+q}, and some generator ϕ: ℝ_+ → ℝ, (𝐗,𝐘) ∼ ℰ(μ,Σ,ϕ) for short, if the characteristic function of (𝐗,𝐘) − μ equals ϕ applied to the quadratic form 𝐭^T Σ 𝐭, i.e., φ_{(𝐗,𝐘)−μ}(𝐭) = ϕ(𝐭^T Σ 𝐭) for all 𝐭 ∈ ℝ^{p+q}. For example, if ϕ(u) = exp(−u/2), then (𝐗,𝐘) is multivariate normally distributed with mean vector μ and covariance matrix Σ. Elliptical distributions have a stochastic representation

(𝐗,𝐘) =_d μ + R A 𝐔^(k),

where R is a non-negative random variable, A^T A = Σ is a full rank factorization of Σ, and where 𝐔^(k) is a uniformly on the unit sphere in ℝ^k distributed random vector with k = rank(Σ); see <cit.> and <cit.> for more information on elliptical distributions. Note that the dissimilarity functions of type <ref> and <ref> are location-scale invariant (Corollary <ref>) and thus depend neither on the centrality parameter μ nor on componentwise scaling factors.

Consider the (p+q)-dimensional elliptically distributed random vectors (𝐗_n,𝐘_n) ∼ ℰ(μ_n,Σ_n,ϕ_n), n ∈ ℕ, and (𝐗,𝐘) ∼ ℰ(μ,Σ,ϕ), and assume Σ_n, n ∈ ℕ, and Σ are positive definite. If Σ_n → Σ and if either
* ϕ_n = ϕ for all n ∈ ℕ and the radial part R associated with ϕ has a continuous distribution function, or
* ϕ_n(u) → ϕ(u) for all u ≥ 0 and the radial variable R_n associated with ϕ_n has a density f_n such that
  * (f_n)_{n∈ℕ} is asymptotically uniformly equicontinuous on (0,∞),
  * (f_n)_{n∈ℕ} is pointwise bounded,
then d^p,q(𝐗_n,𝐘_n) → d^p,q(𝐗,𝐘), where d^p,q is either a dissimilarity function of type <ref> or <ref> according to (<ref>), (<ref>) or (<ref>). The result is immediate from <cit.>, the fact that elliptical distributions are closed under permutations, that every copula is continuous, and that the average of continuous functions is again continuous.
The following continuity result for the normal distribution is an immediate consequence of Corollary <ref>.

Consider the (p+q)-dimensional normally distributed random vectors (𝐗_n,𝐘_n) ∼ 𝒩(μ_n,Σ_n), n ∈ ℕ, and (𝐗,𝐘) ∼ 𝒩(μ,Σ), and assume Σ_n, n ∈ ℕ, and Σ are positive definite. Then Σ_n → Σ implies d^p,q(𝐗_n,𝐘_n) → d^p,q(𝐗,𝐘), where d^p,q is either a dissimilarity function of type <ref> or <ref> according to (<ref>), (<ref>) or (<ref>).

Denote by t_ν(μ,Σ) the d-variate Student-t distribution with ν > 0 degrees of freedom, symmetry vector μ ∈ ℝ^d and symmetric, positive semi-definite (d×d)-matrix Σ. Then t_ν(μ,Σ) belongs to the elliptical class, where the radial variable has a density of the form g(t) = c[1 + t/ν]^{−(ν+d)/2}, which is Lipschitz-continuous with Lipschitz constant (ν+d)/(2ν). Hence, the following result is an immediate consequence of Corollary <ref>.

Consider the (p+q)-dimensional t-distributed random vectors (𝐗_n,𝐘_n) ∼ t_ν_n(μ_n,Σ_n), n ∈ ℕ, and (𝐗,𝐘) ∼ t_ν(μ,Σ), and assume Σ_n, n ∈ ℕ, and Σ are positive definite. Then Σ_n → Σ and ν_n → ν imply d^p,q(𝐗_n,𝐘_n) → d^p,q(𝐗,𝐘), where d^p,q is either a dissimilarity function of type <ref> or <ref> according to (<ref>), (<ref>) or (<ref>).

We further establish continuity within the class of l_1-norm symmetric distributions: Denote by 𝐒_d a d-variate random vector that is uniformly distributed on the unit simplex 𝒮_d = {𝐱 ∈ ℝ_+^d | ‖𝐱‖_1 = 1}. A d-variate random vector 𝐖 follows an l_1-norm symmetric distribution if there exists a nonnegative random variable R independent of 𝐒_d such that 𝐖 =_d R 𝐒_d. The following continuity result is immediate from <cit.>:

Consider the (p+q)-dimensional l_1-norm symmetric random vectors (𝐗_n,𝐘_n) =_d R_n 𝐒_{p+q}, n ∈ ℕ, and (𝐗,𝐘) =_d R 𝐒_{p+q}, and assume F_R_n and F_R are continuous with F_R_n(0) = F_R(0) = 0 for all n ∈ ℕ. Then R_n ⇝ R implies d^p,q(𝐗_n,𝐘_n) → d^p,q(𝐗,𝐘), where d^p,q is either a dissimilarity function of type <ref> or <ref> according to (<ref>), (<ref>) or (<ref>).

From the above, we may conclude that the dissimilarity functions of type <ref> and <ref> are continuous if certain conditions are satisfied. We refer to <cit.> for more results on the continuity of κ^q|p and concrete examples.

§ ESTIMATION

In the present section, estimators d_Π,n^p,q and d_ave,n^p,q for the type <ref> and <ref> dissimilarity functions d_Π^p,q and d_ave^p,q are introduced, both of which rely on the nearest-neighbour-based estimator κ_n^q|p for κ^q|p defined by (<ref>) and presented in <cit.>. The properties of κ_n^q|p imply strong consistency and a computation time of O(n log n) for the estimators d_Π,n^p,q and d_ave,n^p,q, respectively.

Consider a (p+q)-dimensional random vector (𝐗,𝐘) and i.i.d. copies (𝐗_1,𝐘_1), …, (𝐗_n,𝐘_n). Recall that (𝐗,𝐘) is assumed to have non-degenerate components, i.e., none of the random variables within 𝕏 or 𝕐 follows a one-point distribution. As estimators for d_Π^p,q and d_ave^p,q we propose the statistics d_Π,n^p,q and d_ave,n^p,q given by

d_Π,n^p,q := (1 − κ_n^p|q(𝐗|𝐘)) (1 − κ_n^q|p(𝐘|𝐗)),
d_ave,n^p,q := 1 − (κ_n^p|q(𝐗|𝐘) + κ_n^q|p(𝐘|𝐗))/2,

with κ_n^q|p being the estimator proposed in <cit.> and given by the following plug-in construction principle

κ_n^q|p(𝐘|𝐗) := (1/q!) ∑_{σ∈S_q} T_n^q(σ(𝐘)|𝐗),
T_n^q(𝐘|𝐗) = 1 − [q − ∑_{i=1}^q T_n(Y_i | (𝐗,Y_{i−1},…,Y_1))] / [q − ∑_{i=1}^q T_n(Y_i | (Y_{i−1},…,Y_1))],

such that, for an i.i.d. sample (𝐗_1,Y_1), …, (𝐗_n,Y_n) of a (d+1)-dimensional random vector (𝐗,Y), d ≥ 1,

T_n(Y|𝐗) = ∑_{k=1}^n (n·min{R_k, R_N(k)} − L_k²) / ∑_{k=1}^n L_k(n − L_k),

where R_k denotes the rank of Y_k among Y_1, …, Y_n, i.e., the number of j such that Y_j ≤ Y_k, and L_k denotes the number of j such that Y_j ≥ Y_k (cf. <cit.>). For each k, the number N(k) denotes the index l such that 𝐗_l is the nearest neighbour of 𝐗_k with respect to the Euclidean metric on ℝ^d. Since there may exist several nearest neighbours of 𝐗_k, ties are broken at random.
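For orientation, a minimal sketch of the building block T_n is given below; it is not the authors' reference implementation. It relies on scipy's KD-tree for the nearest-neighbour search, assumes pairwise distinct sample points, and — deviating from the theory above — does not break nearest-neighbour ties at random. The estimators κ_n^q|p, d_Π,n^p,q and d_ave,n^p,q are then obtained by chaining T_n according to the displayed plug-in formulas.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.stats import rankdata

def T_n(Y, X):
    """Graph-based estimator T_n(Y | X) of the simple measure of
    conditional dependence, from an i.i.d. sample.
    Y is a 1-d array of length n, X an (n, d) array."""
    n = len(Y)
    R = rankdata(Y, method="max")        # R_k = #{j : Y_j <= Y_k}
    L = rankdata(-Y, method="max")       # L_k = #{j : Y_j >= Y_k}
    # N(k): index of the nearest neighbour of X_k; querying k=2
    # neighbours, the first hit is the point itself (distance 0).
    tree = cKDTree(X)
    _, nn = tree.query(X, k=2)
    N = nn[:, 1]
    num = (n * np.minimum(R, R[N]) - L**2).sum()
    den = (L * (n - L)).sum()
    return num / den
```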
From the definitions of d_Π,n^p,q and d_ave,n^p,q in (<ref>), it is immediately apparent that the estimators are built on the dimension reduction principle revealed in <cit.> (see also <cit.>), which is key to a fast estimation of d_Π,n^p,q and d_ave,n^p,q.

In <cit.>, the authors proved that κ_n^q|p in (<ref>) is a strongly consistent estimator for κ^q|p. As a direct consequence, we obtain strong consistency of d_Π,n^p,q and d_ave,n^p,q: It holds that lim_{n→∞} d_Π,n^p,q = d_Π^p,q almost surely and lim_{n→∞} d_ave,n^p,q = d_ave^p,q almost surely.

* From the properties of the estimator κ_n^q|p, see <cit.>, it follows that d_Π,n^p,q and d_ave,n^p,q can be computed in O(n log n) time.
* The estimators d_Π,n^p,q and d_ave,n^p,q are model-free with no tuning parameters and are consistent under sparsity assumptions.

§ SIMULATION STUDY AND REAL DATA EXAMPLES

In an initial simulation study (Subsection <ref>), the different candidates for type <ref> and type <ref> dissimilarity functions are evaluated, and it is found that choosing copulas as aggregating functions that are too close to the Fréchet-Hoeffding upper bound M or too close to the Fréchet-Hoeffding lower bound W, respectively, results in poor performance. Therefore, we suggest using d_Π^p,q as type <ref> and d_ave^p,q as type <ref> dissimilarity function. The agglomerative hierarchical variable clustering methods based on d_Π^p,q and d_ave^p,q are then compared to alternative hierarchical variable clustering methods available in the literature (Subsection <ref>), their resilience to noise is elaborated (Subsection <ref>), and their performance is tested when the number of random variables to be clustered is large (Subsection <ref>).

§.§ Evaluation of type <ref> and type <ref> dissimilarity functions

We start with an empirical analysis and consider a data set of bioclimatic variables for n = 1,862 locations homogeneously distributed over the global landmass from CHELSA (Climatologies at high resolution for the earth's land surface areas, <cit.>). The variables in this data set can be split into two groups: thermal variables and precipitation-based variables (for details see Table <ref>), making the data set particularly well suited for a cluster analysis. We are interested in whether the various candidates presented in Subsection <ref> for choosing a dissimilarity function (both type <ref> and <ref>) behave comparably or whether differences can be identified. Therefore, three thermal variables {AMT, MTCQ, MTDQ} and three precipitation-based variables {AP, PCQ, PDQ} are selected and an agglomerative hierarchical clustering is performed, for which we expect a successful partitioning into the two groups of variables {AMT, MTCQ, MTDQ} and {AP, PCQ, PDQ}. For the evaluation of type <ref> dissimilarity functions, we employ d_C^p,q, where C belongs to either the Gaussian or the Gumbel copula family for varying parameter values. From Figure <ref> we observe that, for either copula family, from a certain parameter threshold onwards a tendency to form chains occurs in the clustering output.
To confirm this observation, we conduct the investigation for three additional copula families (Clayton, Frank and Joe) and can conclude that, for each of these copula families, there is a tendency to form a chain when a higher parameter is chosen, i.e., when the aggregating copula is close to the Fréchet-Hoeffding upper bound M. Table <ref> presents, for this particular data set, thresholds for the parameter values and the Kendall's tau of the different copula families from which a tendency to form a chain emerges. We observe that the Frank family makes the clustering result most prone to produce a chain, while the Gumbel family exhibits the least tendency for producing a chain.

For the evaluation of type <ref> dissimilarity functions, we employ d_C^∗^p,q with the Gaussian copula family for varying parameter values, as well as d_ave^p,q. From Figure <ref> we observe that almost no difference in the clustering output for type <ref> dissimilarity functions occurs. In almost all the cases, i.e., when using d_C^∗^p,q with Gaussian copula C (parameter ≥ −0.99) or d_ave^p,q, the six variables are successfully partitioned into the two clusters {AMT, MTCQ, MTDQ} and {AP, PCQ, PDQ}. An important observation is the somewhat unsatisfactory clustering result when choosing the dissimilarity function d_C^∗^p,q for the Gaussian copula with parameter −1, which here represents the Fréchet-Hoeffding lower bound W.

We now confirm and substantiate the observations just made about the behaviour of the various candidates for choosing a dissimilarity function through a simulation study. Therefore, consider three sets of random variables {N_1, N_2, N_3}, {C_1, C_2, C_3} and {G_1, G_2, G_3} distributed according to different copulas, namely a Gaussian copula with Kendall's tau τ = 0.15, a Clayton copula with Kendall's tau τ = 0.3 and a Gumbel copula with Kendall's tau τ = 0.45. An agglomerative hierarchical clustering for all nine variables using different dissimilarity functions (both type <ref> and <ref>) is performed, for which we expect a successful partitioning into the three groups. From Figures <ref> and <ref> we observe that in most cases the nine variables can be successfully clustered. Still, Figure <ref> depicts a tendency of type <ref> dissimilarity functions to form chains when the copula used for aggregating the two degrees of predictability is too close to (here: equals) the Fréchet-Hoeffding upper bound M. Complementing what we observed before, the type <ref> dissimilarity function d_C^∗^p,q, where C is a Gaussian copula, performs poorly whenever the parameter of the Gaussian copula is too small, i.e., whenever C is too close to the Fréchet-Hoeffding lower bound W. Unlike d_C^∗^p,q, d_ave^p,q performs well on this data set: The nine variables are correctly clustered into three groups, where the three variables generated from the set {G_1, G_2, G_3} with the highest Kendall's tau are clustered first, while the three variables generated from the set {N_1, N_2, N_3} with the lowest Kendall's tau are clustered last.

Building upon the above empirical analysis and simulation study, we conclude that for type <ref> dissimilarity functions it seems advisable to avoid copulas as aggregating functions that are too close to the Fréchet-Hoeffding upper bound M, otherwise the dendrogram is very likely to return a chain.
Therefore, and due to its simple structure and appealing performance, we recommend (and in what follows also use) the independence copula Π as aggregating function and hence the dissimilarity function d_Π^p,q. Instead, for type <ref> dissimilarity functions, it seems advisable to avoid copulas as aggregating functions that are too close to the Fréchet-Hoeffding lower bound W, as they exhibit poor performance. Therefore, and due to its simple structure, appealing performance and the ability to characterize independence (Theorem <ref>), we recommend (and in what follows also use) the dissimilarity function d_ave^p,q.

§.§ A simulation study comparing different hierarchical variable clustering methods

We now discuss alternative hierarchical variable clustering methods available in the literature (based on different notions of multivariate similarity, cf. Subsection <ref>), including those based on Φ-dependence <cit.> and measures of multivariate concordance <cit.>, and elucidate their relation to hierarchical variable clustering based on (mutual) perfect dependence. As type <ref> and type <ref> dissimilarity functions we employ d_Π^p,q and d_ave^p,q, respectively; as measure of multivariate concordance we employ multivariate Spearman's footrule <cit.>, while Φ-dependence relies on mutual information <cit.> (measured by the Kullback–Leibler divergence <cit.>).

We employ the four above-mentioned hierarchical variable clustering methods for clustering five variables among which different types of association occur, including linear and nonlinear relationships as well as bivariate and multivariate dependencies. Therefore, consider the set of random variables 𝒳 = {X_1,X_2,X_3,X_4,X_5} with X_1 and X_2 being obtained from simulating a 2-dimensional t-copula with correlation parameter ρ = 0 and ν = 0.1 degrees of freedom, X_3 = −exp(X_2) + ϵ_1 (ϵ_1 ∼ 𝒩(0, 0.2²)), X_4 = log(−X_3) + ϵ_2 (ϵ_2 ∼ 𝒩(0, 1)), and X_5 = −sin(1.5 X_4 + 0.5 X_2); Figures <ref> and <ref> depict and describe the (pairwise) dependence structures within 𝒳, along with the expected clustering result if the focus lies on (mutual) perfect dependence. As depicted in Figure <ref> (a), there is a strong nonfunctional relationship between X_1 and X_2, a strong bi-directional relationship between X_2 and X_3, and a strong uni-directional relationship between X_4 and X_5. Weak dependence appears between X_2 and X_5 and between X_3 and X_4.

Before comparing the results obtained via the various hierarchical variable clustering methods, we first examine ways of determining an optimal number of clusters and hence an optimal partition. At this point, recall that the focus of this paper is on clustering random variables and not data points, so that many optimality criteria known in the literature are not applicable. Following <cit.>, here an optimal partition is understood as one that maximizes the intra-cluster similarity and minimizes the inter-cluster similarity. The former criterion describes the homogeneity within the individual clusters and can be determined using the so-called average diameter (Adiam) <cit.> which, for a given partition into disjoint clusters {ℂ_1, …, ℂ_l}, l ≥ 2, equals the arithmetic mean over the diameters diam(ℂ_k) of the individual clusters ℂ_k, k ∈ {1,…,l}, where

diam(ℂ) = min_{X,Y ∈ ℂ} {1 − d^1,1(X,Y)} if |ℂ| > 1, and diam(ℂ) = 1 if |ℂ| = 1,

and |ℂ| denotes the cardinality of ℂ. A larger Adiam indicates a higher homogeneity.
The latter criterion describes the separation of the different clusters and can be determined using the so-called maximum split (Msplit) <cit.>, the maximum over all splits split(ℂ_k) of the individual clusters ℂ_k, k ∈ {1,…,l}, where

split(ℂ) = max_{X ∈ ℂ, Y ∈ 𝒳∖ℂ} {1 − d^1,1(X,Y)}.

A smaller Msplit indicates a higher separation. For partitioning, we favour a high similarity within individual clusters and a low similarity among different clusters. Since during the hierarchical clustering procedure the homogeneity within the individual clusters decreases and the separation of the different clusters increases, an optimal partition comes at the best trade-off of both quantities (see Figure <ref> for an illustration; a sketch of both criteria is given below). Both criteria, average diameter and maximum split, use solely pairwise information of the random variables involved. A more accurate but computationally demanding approach is to extend the definition of diam(ℂ) to the maximum over all disjoint subsets 𝕏^∗, 𝕐^∗ ⊆ ℂ and that of split(ℂ) to the minimum over all (disjoint) subsets 𝕏^∗ ⊆ ℂ and 𝕐^∗ ⊆ 𝒳∖ℂ.

Another criterion for determining an optimal number of clusters is the so-called Silhouette coefficient (SC), going back to <cit.>, which is defined as a mixed coefficient involving the inter-cluster and intra-cluster dissimilarities. An optimal number of clusters is then determined by computing the so-called silhouette value for each object (here: random variable), a value evaluating to what extent the considered object is clustered correctly, and averaging over all these values (see Figure <ref> for an illustration).

From Figure <ref> we observe that the dendrograms obtained via type <ref> and <ref> dissimilarity functions are quite similar, with only one slight but important and valid difference: in the case of type <ref>, the variables X_4 and X_5 are clustered first, whereas, in the case of type <ref>, X_2 and X_3 are the first to cluster. No matter which of the two dissimilarity functions is used, X_1 is the last to be clustered. The optimal partitions for type <ref> and type <ref> are identical with the same optimal number of clusters, regardless of which optimality criterion is used: the five variables are partitioned into three clusters {X_1}, {X_2, X_3}, and {X_4, X_5}, as expected (see Figure <ref>). Instead, if Φ-dependence is used, then the clustering result obtained is different from (but related to) the one above in that X_1 is clustered together with {X_2, X_3}. No matter which criterion is used, the optimal number of clusters is two, resulting in the optimal partition {X_1, X_2, X_3} and {X_4, X_5}. In contrast to the three above observed and related optimal partitions, the clustering result obtained via multivariate Spearman's footrule exhibits no structural resemblance. Even the optimal number of clusters is not unique: If Adiam and Msplit are used, the optimal partition is the finest partition, with each individual variable as a separate cluster. If instead the Silhouette coefficient is used, then the optimal number of clusters is 2. However, notice that the silhouette coefficient in this case is rather low, meaning that the obtained partition may fail to be reasonable. As seen in Figure <ref>, the optimal number of clusters obtained via the silhouette coefficient tends to match the optimal number achieved with Adiam and Msplit, at least in those situations where the silhouette coefficient is large enough.
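A minimal sketch of the two pairwise criteria follows; it assumes a precomputed matrix D of pairwise dissimilarity degrees and a partition into at least two clusters, and excluding the diagonal from the diameter minimum is a choice made for this sketch.

```python
import numpy as np

def adiam_msplit(D, partition):
    """Average diameter and maximum split (pairwise versions) of a
    partition, from a pairwise dissimilarity matrix D with
    D[i, j] = d^{1,1}(X_i, X_j); similarity is taken as 1 - D.

    partition : list of index lists, one per cluster (at least two).
    """
    S = 1.0 - D                                   # pairwise similarities
    diams, splits = [], []
    all_idx = set(range(D.shape[0]))
    for cluster in partition:
        inside = np.array(cluster)
        outside = np.array(sorted(all_idx - set(cluster)))
        if len(inside) > 1:
            block = S[np.ix_(inside, inside)]
            # minimum similarity within the cluster, diagonal excluded
            diams.append(block[~np.eye(len(inside), dtype=bool)].min())
        else:
            diams.append(1.0)                     # singleton convention
        splits.append(S[np.ix_(inside, outside)].max())
    return np.mean(diams), max(splits)            # (Adiam, Msplit)
```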
If, however, the silhouette coefficient is too small, it seems more advisable to rely on Adiam and Msplit for determining an optimal number of clusters. This can be justified by the fact that in such a situation the silhouette coefficient is too close to 0, the value reached for the finest partition. Since concordance measures such as multivariate Spearman's footrule are only capable of detecting monotone relationships among variables, the clustering result obtained for the given data is not very helpful. In contrast, directed dependence concepts (such as type <ref> and type <ref>) and Φ-dependence are capable of detecting non-monotone relationships, resulting in convincing clustering results for the given data set. However, it is important to recognise that the concepts considerably differ in terms of the extent to which functional and non-functional relationships are to be identified (cf. Subsection <ref>), and thus also in terms of the clustering result achieved for the given data set.

§.§ A simulation study about noise resistance

Building upon the continuity of type <ref> and <ref> dissimilarity functions as shown in Section <ref>, we now illustrate the resilience of the corresponding agglomerative hierarchical clustering algorithms to noise. Therefore, consider the set of random variables 𝒳 = {X_1,X_2,X_3,X_4,X_5,X_6} with X_1 ∼ 𝒩(0,1), X_2 = X_1² + X_1 + ε_2, X_3 ∼ 𝒩(0,1), X_4 = exp(−X_3) + ε_4, X_5 = X_4 + sin(X_3) + ε_5 and X_6 ∼ 𝒩(0,1), where ε_2, ε_4, ε_5 ∼ 𝒩(0,σ²) with varying standard deviation σ ≥ 0. When focusing on (mutual) perfect dependence, the benchmark classification is given by: Cluster 1 = {X_1, X_2}, Cluster 2 = {X_3, X_4, X_5}, and Cluster 3 = {X_6}. For the case when no noise is present in the data, i.e., σ = 0, Figure <ref> depicts the clustering results obtained via type <ref> and <ref> dissimilarity functions. There, it can be seen that the six variables are successfully clustered and the optimal partition coincides with the benchmark partition. In what follows, we study the performance of the agglomerative hierarchical clustering algorithms when more and more noise is added to the data, i.e., when σ increases. Thereby, we first analyse the resilience to noise of the clustering result and, in a second step, the resilience to noise of the optimality criteria.

Before that, however, let us briefly review tools that allow a comparison of the optimal partition obtained via the hierarchical clustering algorithms with the benchmark partition. For determining the degree of congruence between two partitions, two popular indices exist: the Rand index (RI) <cit.> and the Fowlkes–Mallows index (FMI) <cit.>. Both the Rand index and the Fowlkes–Mallows index rely on the number of same or different variables in the two partitions A_1, A_2. More precisely, when defining
- TP as the number of pairs of objects that are in the same cluster in A_1 and in the same cluster in A_2,
- FP as the number of pairs of objects that are in the same cluster in A_1 and in different clusters in A_2,
- FN as the number of pairs of objects that are in different clusters in A_1 and in the same cluster in A_2,
- TN as the number of pairs of objects that are in different clusters in A_1 and in different clusters in A_2,
the Rand index (RI) and the Fowlkes–Mallows index (FMI) are given as follows:
* RI = (TP + TN) / (TP + FP + FN + TN),
* FMI = √( (TP/(TP+FP)) · (TP/(TP+FN)) ).
RI and FMI range from 0 to 1. A higher value for RI or FMI indicates a larger degree of congruence between the two partitions.
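Both indices admit a direct implementation from the pair counts; a minimal sketch (partitions encoded as one cluster label per variable; labels are hypothetical) reads as follows.

```python
from itertools import combinations
from math import sqrt

def rand_fowlkes_mallows(labels1, labels2):
    """Rand index and Fowlkes-Mallows index between two partitions,
    each given as a cluster label per object (here: per variable)."""
    tp = fp = fn = tn = 0
    for i, j in combinations(range(len(labels1)), 2):
        same1 = labels1[i] == labels1[j]
        same2 = labels2[i] == labels2[j]
        if same1 and same2:
            tp += 1                 # same cluster in both partitions
        elif same1:
            fp += 1                 # together in A_1, apart in A_2
        elif same2:
            fn += 1                 # apart in A_1, together in A_2
        else:
            tn += 1                 # apart in both partitions
    ri = (tp + tn) / (tp + fp + fn + tn)
    fmi = sqrt(tp / (tp + fp) * tp / (tp + fn)) if tp else 0.0
    return ri, fmi

# Example: comparing a clustering output against a benchmark partition.
print(rand_fowlkes_mallows([1, 1, 2, 2, 2, 3], [1, 1, 2, 2, 3, 3]))
```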
Resilience to noise regarding the clustering result: In a first simulation study, we investigate how much noise can be added to the data without changing the resulting partition. Following the benchmark partition, we thereby set the optimal number of clusters to 3. For several choices of σ, we then compare the resulting partition with the benchmark partition, calculate their degree of congruence by means of the Rand index (RI) and the Fowlkes–Mallows index (FMI), and repeat each scenario B = 100 times. Figure <ref> depicts boxplots of the obtained values for RI and FMI. From Figure <ref> we observe that up to σ = 4 the clustering partitions obtained via type <ref> and type <ref> dissimilarity functions perfectly match the benchmark partition, revealing a high resilience to noise for both procedures.

Resilience to noise regarding the optimal number of clusters: In a second simulation study, we examine how much noise can be added to the data without changing the optimal number of clusters found for the benchmark partition. Therefore, for several choices of σ, we compute the optimal number of clusters obtained (1) via the best trade-off between average diameter and maximum split defined by (<ref>) and (<ref>) (bivariate/pairwise version), (2) via the best trade-off between average diameter and maximum split as described in Remark <ref> (multivariate version) and (3) via the Silhouette coefficient, and repeat the procedure B = 100 times. Figure <ref> depicts boxplots of the obtained optimal numbers of clusters. From Figure <ref> we observe that, compared to the clustering result, the optimality criteria are less resistant to noise, no matter which dissimilarity function is used (type <ref> or type <ref>). Interestingly, the Silhouette coefficient performs more stably (up to σ = 2.3) than average diameter and maximum split (pairwise, up to σ = 0.3) and average diameter and maximum split (multivariate, up to σ = 0.4), while, surprisingly, the latter two optimality criteria do not differ much, which is in favour of the pairwise version in terms of computation time.

§.§ A simulation study about clustering a larger number of variables

Finally, we illustrate the performance of the agglomerative hierarchical clustering algorithm based on type <ref> and type <ref> dissimilarity functions when the number of random variables to be clustered is large. Therefore, consider the set of random variables 𝒳 = 𝕏_A ∪ 𝕏_B ∪ 𝕏_C ∪ 𝕏_D consisting of four independent sets of variables, namely 𝕏_A = {A_1, …, A_5}, 𝕏_B = {B_1, …, B_5}, 𝕏_C = {C_1, …, C_5}, 𝕏_D = {D_1, …, D_5}. Given α ∈ {0.4, 0.6, 0.8, 1}, each of the four sets is simulated from a 5-dimensional Clayton copula with Kendall's tau value τ_A = α·0.2, τ_B = α·0.4, τ_C = α·0.6, and τ_D = α·0.8, respectively. We then compare the resulting partition with the benchmark partition, calculate their degree of congruence by means of the Rand index (RI) and the Fowlkes–Mallows index (FMI), and repeat each scenario B = 100 times. Figure <ref> depicts boxplots of the obtained values for RI and FMI, and Figure <ref> shows the results for a single run.
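For reproducing the data-generating step, samples from a Clayton copula with a prescribed Kendall's tau can be drawn via the standard Marshall–Olkin (gamma frailty) algorithm, using the relation θ = 2τ/(1−τ) valid for the Clayton family; the sketch below is illustrative, and the sample size and seed are hypothetical.

```python
import numpy as np

def clayton_sample(n, dim, tau, rng):
    """Draw n observations from a dim-variate Clayton copula with a
    prescribed Kendall's tau, via the gamma frailty construction."""
    theta = 2 * tau / (1 - tau)                  # Clayton: tau = theta/(theta+2)
    v = rng.gamma(shape=1 / theta, scale=1.0, size=(n, 1))   # frailty V
    e = rng.exponential(size=(n, dim))                       # E_1, ..., E_dim
    return (1 + e / v) ** (-1 / theta)           # U_i = (1 + E_i/V)^(-1/theta)

rng = np.random.default_rng(1)
alpha = 0.8
blocks = [clayton_sample(500, 5, alpha * tau, rng)
          for tau in (0.2, 0.4, 0.6, 0.8)]       # X_A, X_B, X_C, X_D
data = np.hstack(blocks)                         # 20 variables to be clustered
```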
We observe that, regardless of whether the type <ref> or the type <ref> dissimilarity function is used, the clustering result becomes more and more accurate (compared to the benchmark partition) as α increases. Clearly, the stronger the correlation between the variables (i.e., the greater the Kendall's tau value of the copula), the earlier in the clustering process the variables are grouped into a cluster.

Acknowledgement: Both authors gratefully acknowledge the support of the Austrian Science Fund (FWF) project P 36155-N ReDim: Quantifying Dependence via Dimension Reduction and the support of the WISS 2025 project 'IDA-lab Salzburg' (20204-WISS/225/197-2019 and 20102-F1901166-KZP).

§ REFERENCES

I. Koch, Analysis of Multivariate and High-Dimensional Data, Cambridge University Press, 2013. doi:10.1017/CBO9781139025805.
B. Bonanno, G. Caldarelli, F. Lillo, S. Micciché, N. Vandewalle, R. Mantegna, Networks of equities in financial markets, Eur. Phys. J. B 38 (2004) 363–371.
B. Everitt, S. Landau, M. Leese, D. Stahl, Cluster Analysis, 5th ed., John Wiley & Sons, 2011. doi:10.1002/9780470977811.
S. Fuchs, F. M. L. Di Lascio, F. Durante, Dissimilarity functions for rank-invariant hierarchical clustering of continuous variables, Comput. Statist. Data Anal. 159 (2021) 107201. doi:10.1016/j.csda.2021.107201.
Y. S. Son, J. Baek, A modified correlation coefficient based similarity measure for clustering time-course gene expression data, Pattern Recognit. Lett. 29 (3) (2008) 232–242. doi:10.1016/j.patrec.2007.09.015.
A. Bonanomi, M. Nai Ruscone, S. A. Osmetti, Defining subjects distance in hierarchical cluster analysis by copula approach, Qual. Quant. 51 (2) (2017) 859–872. doi:10.1007/s11135-016-0444-9.
G. De Luca, P. Zuccolotto, A tail dependence-based dissimilarity measure for financial time series clustering, Adv. Data Anal. Classif. 5 (4) (2011) 323–340. doi:10.1007/s11634-011-0098-3.
F. M. L. Di Lascio, F. Durante, R. Pappadà, Copula-based clustering methods, in: M. Úbeda-Flores, E. de Amo, F. Durante, J. Fernández-Sánchez (Eds.), Copulas and Dependence Models with Applications, Springer, 2017, pp. 49–67.
F. Durante, R. Pappadà, N. Torelli, Clustering of financial time series in risky scenarios, Adv. Data Anal. Classif. 8 (2014) 359–376.
A. Banerjee, S. Merugu, I. S. Dhillon, J. Ghosh, Clustering with Bregman divergences, J. Mach. Learn. Res. 6 (2005) 1705–1749.
F. Emmert-Streib, M. Dehmer, Information Theory and Statistical Learning, Springer, 2008. doi:10.1007/978-0-387-84816-7.
B. Jiang, J. Pei, Y. Tao, X. Lin, Clustering uncertain data based on probability distribution similarity, IEEE Trans. Knowl. Data Eng. 25 (4) (2013) 751–763. doi:10.1109/TKDE.2011.221.
I. Kojadinovic, Agglomerative hierarchical clustering of continuous variables based on mutual information, Comput. Statist. Data Anal. 46 (2) (2004) 269–294. doi:10.1016/S0167-9473(03)00153-1.
§ SUPPLEMENTARY MATERIAL: TABLES AND FIGURES | http://arxiv.org/abs/2312.16544v1 | {
"authors": [
"Sebastian Fuchs",
"Yuping Wang"
],
"categories": [
"stat.ME"
],
"primary_category": "stat.ME",
"published": "20231227115948",
"title": "Hierarchical variable clustering based on the predictive strength between random vectors"
} |
Lyapunov-Krasovskii Functionals of Robust Type for the Stability Analysis in Time-Delay Systems January 14, 2024 ================================================================================================================ Knowledge about historic landslide event occurrence is important for supporting disaster risk reduction strategies. Building upon findings from the 2022 Landslide4Sense Competition, we propose a deep neural network based system for landslide detection and segmentation from multisource remote sensing image input. We use a U-Net trained with Cross Entropy loss as the baseline model. We then improve the U-Net baseline model by leveraging a wide range of deep learning techniques. In particular, we conduct feature engineering by generating new band data from the original bands, which helps to enhance the quality of the remote sensing image input. Regarding the network architecture, we replace the traditional convolutional layers in the U-Net baseline by a residual-convolutional layer. We also propose an attention layer which leverages the multi-head attention scheme. Additionally, we generate multiple output masks with three different resolutions, which creates an ensemble of three outputs in the inference process to enhance the performance. Finally, we propose a combined loss function which leverages Focal loss and IoU loss to train the network. Our experiments on the development set of the Landslide4Sense challenge achieve an F1 score and an mIoU score of 84.07 and 76.07, respectively. Our best model setup outperforms the challenge baseline and the proposed U-Net baseline, improving the F1 score/mIoU score by 6.8/7.4 and 10.5/8.8, respectively. Index Terms— Convolutional neural network, landslide, remote sensing image. § INTRODUCTION Natural hazards pose a severe threat to the lives of people around the world. In particular, landslides are a major cause of losses in mountainous areas <cit.>. Knowledge about historic landslide event occurrence is of core importance in the context of quantitative risk assessment, which in turn supports the design and implementation of effective disaster risk reduction strategies. Several methodological approaches are used for detecting and mapping different types of landslides. In addition to manual visual interpretation, different automated methods that leverage different types of data sets have been developed. Most notably, these methods include the analysis of digital terrain models derived through airborne laser scanning, e.g.
by using geographic object-based image analysis <cit.> or LiDAR altimetry <cit.>, the analysis of aerial photographs <cit.>, or various change detection methods applied to multi-spectral or SAR data <cit.>. While these methods are tried and tested, the rapid technological development at the intersection of remote sensing imagery and image segmentation using increasingly advanced neural network architectures has opened up new possibilities for landslide detection and mapping compared to the conventional methods <cit.>. The availability of free multi-spectral remote sensing imagery from satellites, combined with advances in computer vision and machine learning, enables the development of automated landslide detection and segmentation frameworks at comparably low cost. Recent attempts at developing such systems, which are based on deep neural network architectures such as U-Net, DeepLab, Transformers <cit.> or on adapted pre-trained models such as variants of ResNet or EfficientNet <cit.>, have presented very promising results. Most of the published systems were based on dedicated datasets collected by the authors <cit.> or on synthetic datasets <cit.>. As a result, these datasets only reflect the landslide events of a certain region, which leads to certain limitations in the developed landslide detection systems. The Landslide4Sense dataset published by Ghorbanzadeh et al. in 2022 <cit.> constitutes an interesting and large dataset aimed at landslide detection and segmentation. The data set mainly consists of multi-spectral remote sensing images from Sentinel-2 and (presumably) elevation information as used in the ALOS PALSAR RTC products (i.e., SRTM and NED DEM with geoid correction applied)[This information is not really clear from Ghorbanzadeh et al. (2022), who misleadingly state that "DEM and slope data from ALOS PALSAR" <cit.> were used.]. Based on the Landslide4Sense dataset, we present a deep neural network based system for landslide detection and segmentation, including the following specific improvements over the benchmark results <cit.>: * We first conduct an analysis of how to improve the quality of input remote sensing images by using multiple data augmentation techniques (random rotation, cutmix) and feature engineering techniques (RGB normalization, feature combination, Gaussian filters, gradient image, Canny edge detector). * Second, we improve the U-Net architecture by proposing a residual-convolutional layer and an attention layer. * Third, we propose a combination of multi-resolution segmentation heads with multiple loss functions, which also helps to improve model performance. § DATASET AND METHODOLOGICAL BACKGROUND §.§ Landslide4Sense dataset The benchmark Landslide4Sense dataset <cit.> comprises three main subsets: the development set, the evaluation set, and the test set. While the development set was published with labels, no labels have been provided for the evaluation and test sets, as these subsets were used for the competition challenge <cit.>. Therefore, only the development set of the Landslide4Sense dataset is considered in this paper. This development set comprises 3799 multi-spectral images which were collected from the open source Sentinel-2 <cit.> and supplemented with information from ALOS PALSAR. Each multi-spectral image comprises 14 bands: multi-spectral data from Sentinel-2 (B1, B2, B3, B4, B5, B6, B7, B8, B9, B10, B11, B12); slope data from ALOS PALSAR (B13); and elevation data (DEM) from ALOS PALSAR (B14).
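As a point of reference, the following is a minimal loading sketch for one such image/mask pair. The HDF5 layout and the dataset keys "img" and "mask" are assumptions about the distribution format, not a documented API.

```python
import h5py
import numpy as np

# Hypothetical layout: an image file with an "img" dataset of shape
# (128, 128, 14) and a mask file with a "mask" dataset of shape (128, 128).
def load_patch(img_path, mask_path):
    with h5py.File(img_path, "r") as f:
        img = np.asarray(f["img"], dtype=np.float32)
    with h5py.File(mask_path, "r") as f:
        mask = np.asarray(f["mask"], dtype=np.uint8)   # 1 = landslide pixel
    return img, mask

img, mask = load_patch("image_1.h5", "mask_1.h5")
print(img.shape, mask.shape, mask.mean())  # fraction of landslide pixels
```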
All bands in the dataset have an image size of 128×128. The original spatial resolution of the single bands varies according to the resolution of the source spectral bands of the MSI aboard Sentinel-2: B1, B9 and B10 have a resolution of 60 m, B2 to B4 and B8 were captured at a resolution of 10 m per pixel, and B5-B7, B11 and B12 have a resolution of 20 m. As a result, each multi-spectral image is an array of shape 128×128×14. Each multi-spectral image of 128×128×14 comes with a label image of 128×128, referred to as the ground truth mask. The ground truth mask presents a binary image in which landslide pixels and non-landslide pixels are marked by one and zero values, respectively. Although approximately 58% of the images in the Landslide4Sense development set contain landslide labels, landslide pixels are in the minority, with only 2.3% of all pixels being labeled as events. Additionally, the ratio of landslide pixels per image spans a wide range of values, from 0.0061% (i.e., only one pixel out of 128×128 pixels in an image) to 47.53% (i.e., nearly half of the pixels in an image). As a result, the dataset presents an imbalance between landslide and non-landslide pixels, which causes challenges in the segmentation task. §.§ Task definition Using the development set of the Landslide4Sense <cit.> dataset as a basis, two tasks of landslide detection and landslide segmentation using deep neural networks are proposed in this paper[The segmentation task was not part of the Landslide4Sense challenge <cit.>.]. We evaluate our proposed deep neural networks using random train-test splitting, with 80% for training and 20% as holdout for testing. Once the best configuration of the deep neural network is identified, we evaluate the best network with 5-fold cross-validation. The final evaluation scores are obtained as the average of the scores from the 5 folds. §.§ Evaluation metrics Following the guidelines of the Landslide4Sense challenge, we use the F1 score as the main evaluation metric <cit.>. In addition, we report the mean Intersection over Union (mIoU) score, which is a crucial performance metric in the segmentation task <cit.>. § PROPOSED U-NET BASELINE The baseline model for landslide detection and segmentation comprises two main components: online data augmentation and a U-Net based network architecture. §.§ Online data augmentation For the baseline model, we apply two data augmentation methods, rotation and cutmix, to the image input of size 128×128×14. We first randomly rotate each image using an angle of 90, 180, or 270 degrees to generate a new image, referred to as rotation. Subsequently, random landslide regions from 0 to 2 randomly chosen landslide images are cut and mixed with the currently processed image, referred to as cutmix <cit.>. As these data augmentation methods are conducted on each batch of images during training, we refer to this step as "online data augmentation". §.§ Proposed U-Net based baseline architecture The proposed baseline leverages a U-Net architecture (Table <ref>, Fig. <ref>). The U-Net baseline comprises three main blocks: downsample, upsample, and head. Both the downsample and upsample blocks make use of the same double convolution layer. The double convolution layer comprises two single convolution layers, each of which contains one convolutional layer (Conv[3×3]), one Batch Normalization layer (BN) <cit.>, and one Leaky Rectified Linear Unit layer (LeakyReLU) <cit.>), as shown in Fig.
<ref>. While the downsample block scales down the input images of 128×128×14 to 8×8×1024 by using the Max Pooling layer (MP[2×2]), the upsample block scales up the output of the downsample block to 128×128×64 by applying UpSampling 2D. The head block, which uses one dropout layer and one convolutional layer (Conv[1×1]) and applies a Softmax function, transforms the output of the upsample block into an image of 128×128, referred to as the predicted mask. The predicted mask is compared with the ground truth mask using Cross Entropy as the loss function. We construct the U-Net baseline with the Tensorflow framework. The U-Net baseline is trained for 65 epochs on an NVIDIA Titan RTX GPU with 24GB RAM. We use Adam optimization <cit.> for model training. § IMPROVING THE U-NET BASELINE SYSTEM The improvement of the U-Net baseline focuses on three main aspects of a deep learning model: the loss function, the input quality and the network architecture. §.§ A combined loss function We tackle the issue of class imbalance between event pixels and non-event pixels by using Focal loss <cit.>. Additionally, we apply IoU loss <cit.> to further improve the mIoU score within the segmentation task. As a result, the final loss, referred to as the combined loss, is defined by combining Focal loss and IoU loss with equal weight. §.§ Input image quality enhancement Feature engineering and augmentation are important tuning knobs for improving model performance. We therefore supplement the 14 original bands from the Landslide4Sense development set with 12 additional bands (bands 15 to 26), using methods as detailed in Table <ref>. * Bands 15 to 17 are generated by applying RGB normalization on bands B2, B3 and B4. * Bands 18 to 21 represent remote sensing indices (NDVI, NDMI, NBR) and a grayscale image. * Bands 22 and 23 are generated by applying Gaussian and median filters with a kernel size of [10×10]. * Bands 24 and 25 are calculated from the image gradient (across the length and width dimensions). * Band 26 presents the result of using the Canny edge detector. §.§ U-Net backbone architecture improvement We propose three main improvements regarding the U-Net baseline architecture. First, we suggest that multiple kernel sizes and a residual based architecture are more effective at capturing distinct features of feature maps than a conventional convolutional layer. We therefore propose an architecture of a residual-convolutional layer (Res-Conv), which is used to replace the double convolution layer in both the downsample and upsample blocks (see the Keras sketch below). Within the proposed residual-convolutional layer (Fig. <ref>), the input feature map 𝐗_1 is first processed by two convolutional layers with different kernels (e.g. Conv[2×2] and Conv[3×3]), each followed by a BN layer and a LeakyReLU layer, and the results are added together to generate the feature map 𝐗_2. Then, the feature map 𝐗_2 goes through a convolutional layer (Conv[3×3]), a BN layer and a LeakyReLU layer to generate the feature map 𝐗_3. Finally, the feature map 𝐗_3 is added to the input feature map 𝐗_1 to create the final output of the proposed residual-convolutional sub-block. The second improvement is to apply an attention layer after every convolutional layer in both the downsample and upsample blocks of the proposed U-Net baseline. The attention weights generated by the proposed attention layer effectively support the neural network in focusing on landslide regions in the feature maps of the network.
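For concreteness, the following is a minimal Keras sketch of two of the components described in this section: the combined loss (equal-weight Focal plus soft-IoU, with conventional defaults for γ and the smoothing constant, which are not specified above) and the Res-Conv layer. The 1×1 projection on the skip path is an assumption added so channel counts match; filter counts and other details are illustrative, not the exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

def focal_loss(y_true, y_pred, gamma=2.0, eps=1e-7):
    # y_pred: softmax output of shape (batch, H, W, n_classes)
    y_pred = tf.clip_by_value(y_pred, eps, 1.0 - eps)
    ce = -y_true * tf.math.log(y_pred)
    return tf.reduce_mean(tf.reduce_sum((1.0 - y_pred) ** gamma * ce, axis=-1))

def iou_loss(y_true, y_pred, smooth=1.0):
    # soft Jaccard loss, averaged over batch and classes
    inter = tf.reduce_sum(y_true * y_pred, axis=(1, 2))
    union = tf.reduce_sum(y_true + y_pred, axis=(1, 2)) - inter
    return 1.0 - tf.reduce_mean((inter + smooth) / (union + smooth))

def combined_loss(y_true, y_pred):
    # equal weighting of the two terms, as described in the text
    return focal_loss(y_true, y_pred) + iou_loss(y_true, y_pred)

def res_conv_block(x1, filters):
    # two parallel convolutions with different kernels, each with BN + LeakyReLU
    a = layers.LeakyReLU()(layers.BatchNormalization()(
        layers.Conv2D(filters, 2, padding="same")(x1)))
    b = layers.LeakyReLU()(layers.BatchNormalization()(
        layers.Conv2D(filters, 3, padding="same")(x1)))
    x2 = layers.Add()([a, b])                                  # feature map X_2
    x3 = layers.LeakyReLU()(layers.BatchNormalization()(
        layers.Conv2D(filters, 3, padding="same")(x2)))        # feature map X_3
    skip = layers.Conv2D(filters, 1, padding="same")(x1)       # assumed projection
    return layers.Add()([x3, skip])
```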
We evaluate three types of attention schemes: SE <cit.> attention, CBAM <cit.> attention, and multi-head attention <cit.>. Both SE and CBAM are popular and widely used in the literature. Following the line of Le et al. (2023) <cit.>, we propose an additional multi-head attention based layer (Pro-Att) as follows: Given an input feature map 𝐗 with a size of [W×H×C], where W, H, and C denote the width, height, and channel dimensions, we reduce the size of the feature map 𝐗 across the three dimensions using both max and average pooling layers (Fig. <ref>). The multi-head attention scheme is then applied to each two-dimensional feature map before multiplying with the original three-dimensional feature map 𝐗. The final improvement applies an ensemble of multiple predicted masks with different resolutions to enhance the system performance. In particular, instead of using only one head block to generate one predicted mask of 128×128, we add two more head blocks to generate two other predicted masks: 256×256 and 64×64. As a result, the final prediction is obtained from an average of the three predicted output masks. As we generate three predicted masks, three loss functions are applied in the learning process. § EXPERIMENTAL RESULTS We first evaluate the effect of using the proposed combined loss function using the original images with 14 bands only. Both Focal loss and IoU loss achieve better performance than the Cross Entropy loss (Table <ref>). The combination of Focal loss and IoU loss yields improvements of 1.22 in the F1-score and 1.13 in the mIoU score, respectively. As the proposed combined loss proved to be effective, it was set as the standard for further evaluation of the newly engineered features. To assess the added value of the new features, the enhanced image input was trained with the U-Net baseline using the combined loss. The use of the additional 12 bands leads to further performance improvements of 0.81 in the F1-score and 0.62 in the mIoU score compared with the U-Net baseline and combined loss (Table <ref>). We now evaluate the proposed multiple resolution heads, the proposed residual-convolutional layer, and the proposed attention layer. To this end, we use the U-Net baseline, the full 23 band data and the combined loss. All proposed techniques improve the U-Net model performance further (Table <ref>). While the combination of multiple heads and the proposed attention layer achieves F1/mIoU scores of 71.45/63.05, the combination of multiple heads and the proposed residual-convolutional layer obtains F1/mIoU scores of 72.07/63.45. Given the effectiveness of using the combined loss function, the enhanced 23 band data, multi-resolution heads, the proposed Res-Conv layer and attention layers, we eventually configure the best U-Net network architecture (Fig. <ref>). We evaluate this network with 5-fold cross validation and compare it to the Landslide4Sense challenge baseline as well as the proposed U-Net baseline. The best U-Net network achieves mIoU/F1 scores of 76.07/84.07, thereby outperforming the Landslide4Sense challenge baseline and the proposed U-Net baseline (Table <ref>). The best U-Net network architecture also has fewer trainable parameters compared to the other networks. § CONCLUSION We have presented a U-Net based deep neural network for landslide detection and segmentation from remote sensing imagery.
We consider and evaluate the effects of improvements to feature engineering, network architecture, and loss functions, and illustrate the corresponding improvements in overall network performance. By conducting extensive experiments, we successfully developed a U-Net neural network which achieves an F1-score of 84.07 and an mIoU score of 76.07 on the benchmark Landslide4Sense development set. Our proposed system clearly outperforms the Landslide4Sense baseline, improving the F1-score by 6.88 and the mIoU score by 7.43, respectively. § ACKNOWLEDGMENTS The research leading to this publication was partially carried out within the gAia project. The gAia project is funded by the KIRAS program of the Austrian Research Promotion Agency (FFG) and the Federal Ministry of Agriculture, Regions and Tourism (BMLRT) under grant no. FO999886369. | http://arxiv.org/abs/2312.16717v1 | {
"authors": [
"Cam Le",
"Lam Pham",
"Jasmin Lampert",
"Matthias Schlögl",
"Alexander Schindler"
],
"categories": [
"cs.CV",
"cs.LG",
"eess.IV"
],
"primary_category": "cs.CV",
"published": "20231227205655",
"title": "Landslide Detection and Segmentation Using Remote Sensing Images and Deep Neural Network"
} |
Aditya Vasudevan^a,*, Jorge Zorrilla Prieto^a, Sergei Zorkaltsev^a,b, Maciej Haranczyk^a,** (* Corresponding author's email: [email protected]; ** Corresponding author's email: [email protected]) ^a IMDEA Materials Institute, Eric Kandel 2, Tecnogetafe, 28906 Madrid, Spain; ^b Universidad Carlos III de Madrid, Av. Universidad 30, 28911 Leganes, Madrid, Spain. Local geometrical features of a porous material, such as the shape and size of a pore or the curvature of a solid ligament, often affect the macroscopic properties of the material, and their characterization is necessary to fully understand structure-property relationships. In this contribution, we present an approach to automatically segment large porous structures into such local features. Our work takes inspiration from techniques available in Topological Data Analysis (TDA). In particular, using Morse theory, we generate Morse-Smale complexes of our structures that segment the structure and/or its porosity into individual features that can then be compared. We develop a tool written in C++ that is built on the Topology ToolKit (TTK) library, an open source platform for the topological analysis of scalar data, with which we can perform segmentation of these structures. Our tool takes a volumetric grid representation as an input, which can be generated from atomistic or mesh structure models, and any function defined on such a grid, e.g. the distance to the surface or the interaction energy with a probe. We demonstrate the applicability of the tool with two examples related to the analysis of porosity in zeolite materials as well as the analysis of ligaments in a porous metal structure. Specifically, by segmenting the pores in the structure we demonstrate some applications to zeolites, such as assessing pore-similarity between structures or evaluating the volume accessible to a target molecule such as methane that can be adsorbed on its surface. Moreover, once the Morse-Smale complexes are generated, we can construct graph representations of the void space, replacing the entire pore structure by a simply connected graph. Similarly, the same tool is used to segment and generate graphs representing the solid structure, and we show how they can be used to correlate structure and mechanical properties of the material. The code is published as open-source and can be accessed here: https://github.com/AMDatIMDEA/tda-segmentor structure-property relationships; pore segmentation; Topological Data Analysis (TDA); Morse-Smale Complexes; Reeb Graphs PROGRAM SUMMARY Program Title: tda-segmentor CPC Library link to program files: (to be added by Technical Editor) Developer's repository link: https://github.com/AMDatIMDEA/tda-segmentor Code Ocean capsule: (to be added by Technical Editor) Licensing provisions: BSD 3-clause Programming language: C++ Nature of problem: Porous material properties, ranging from gas adsorption in zeolites or MOFs to the macroscopic mechanical properties of gold nanostructures, derive largely from the local geometric features of the pore or structure, and understanding this relationship is quite important. Computational strategies have been extremely successful in this regard and, coupled with machine learning techniques, can predict material properties trained on a small set of simulated data.
However, this requires generating robust descriptors, which can be quite challenging. Solution method: We present here a method and tool based on topological data analysis (TDA) that can analyze data purely based on its shape. From a volumetric grid representation of the structure, such as the distance grid or the energy grid for a probe molecule, we generate Morse-Smale complexes of the structures that segment the structure/pore into distinct segments. These can then be used to assess pore/structure similarity, develop robust descriptors, and better correlate and predict the pore/structure-property relationship of the porous material. § INTRODUCTION Porous materials of various chemical compositions and pore size-scales are both commonly observed in nature and applied in technology, and have long attracted researchers' attention. For example, nanoporous materials such as zeolites, natural clays and metal-organic frameworks (MOFs) are finding applications in industry as chemical catalysts <cit.>, membranes or adsorbents <cit.>. Porous metal structures are investigated in the context of lightweight structural materials for energy-efficient transport <cit.>. Numerous others are investigated in the context of battery electrodes <cit.>, bio-materials <cit.>, geological deposits and others. Computational approaches have been playing a crucial role in understanding the structure-property relationships and aiding design. For example, molecular simulations can be used to reliably predict adsorption and diffusion of guest species <cit.>. Similarly, in porous metals such as gold, simulations can shed light on relationships between the topology of the structure and/or its pore network and the mechanical properties of the material. For instance, finite element simulations have shown that two porous structures with the same solid fractions but different pore topologies (gyroids vs spinodal-like) yield at different compressive stresses <cit.>. In addition, molecular dynamics simulations of the compression of nanoporous structures with the same solid fraction and ligament diameter but different topology have found different mechanical responses <cit.>. Furthermore, in recent years the wide accessibility of high-throughput computing has allowed studies of larger sets of structures. Machine learning-based approaches have begun to emerge, i.e. by training on a small set of simulated data, ML regressors can predict the material property just from the structure features (descriptors). For guest adsorption, for example, porosity descriptors such as accessible surface area (ASA) and largest cavity diameter (LCD) can readily generate feature vectors which can act as an input for the machine learning algorithms <cit.>. However, such pore descriptors only work well at high pressures where the guest molecule adsorbs in the entire void space; at low pressures the molecules tend to be localized in the strongly binding regions of the material's pore <cit.>. Hence there is a need to generate robust descriptors that are also able to capture the extremely local pore morphology of the structures. Topological Data Analysis (TDA) has already developed into a mature field that is capable of obtaining insight from large datasets by looking at their `shape'. In particular, persistent homology <cit.> can explain how certain features of the data persist across multiple scales.
Such methods are already popular in network science <cit.> and have also recently been used for applications in materials science. In <cit.>, persistent homology is used to formulate robust porosity descriptors, and the authors show that two structures with similar pore morphology have similar so-called persistence diagrams. In <cit.>, such persistence diagrams have been converted to persistence images, which can then act as feature vectors for machine learning algorithms. In this work, we develop a high-throughput tool that borrows from these ideas in Topological Data Analysis. In particular, Morse theory, which in algebraic topology is used to analyze the topology of a manifold by studying differentiable functions on that manifold, allows cell complexes of the manifold to be formed, which can be used to decompose the manifold into distinct unique regions. We present here a tool written in C++ that is built on the Topology ToolKit library (TTK) <cit.>, which has efficient algorithms for the topological analysis of scalar functions, and we build on these algorithms to analyze our data. The rest of the article is organized as follows: In section <ref>, we first present the TDA method to generate the segments. We discuss the different input fields that can be used for analysis, the methodology based on persistence used to filter noise, and finally the generation of unique segments. In section <ref> we apply it to some example zeolites and nanoporous gold structures to generate segments of either the pore or the structure. Further, using the segmentation information, we also demonstrate that we can replace the entire structure by a much simpler graph representation, and finally in section <ref>, we discuss some applications of the generated segments related to guest adsorption and pore-similarity. We conclude with some remarks in section <ref>. § METHODS Topological Data Analysis (TDA) is premised on the idea that the shape of data sets contains relevant information. Herein, we first present some of the mathematical preliminaries of topological analysis based on persistent homology, and we refer to <cit.> for an in-depth treatment of the theory. TDA has seen successful applications in a variety of fields of science such as combustion <cit.>, astrophysics <cit.> and materials science <cit.>. The following subsections describe our implementation and the input data. §.§ Mathematical overview Let us consider a d-dimensional manifold ℳ and a piecewise linear scalar field f : ℳ → ℝ. While the theoretical framework can be extended to any dimension d, we restrict ourselves here to d = 3, which is the three-dimensional Cartesian space. The function f has values at the vertices ℳ^0 of ℳ, and for higher-dimensional simplices, such as edges, faces or the volume within the faces, the function f is linearly interpolated. The first step is to evaluate the critical points of the scalar function f. For smooth functions, the critical points are simply the points where the gradient of the scalar field, ∇ f, vanishes, while in the discrete setting, the critical points are evaluated slightly differently (see <cit.>). Once the critical points are evaluated, they are given indices ℐ, with ℐ = 0 for minima, ℐ = 1 for 1-saddles, ℐ = 2 for 2-saddles and ℐ = 3 for maxima. Let us consider an example from the documentation of the TTK library where the scalar field f is a discrete wavy terrain, as shown in Fig. <ref> (a), where there are clear hills and valleys with some noise.
If the critical points for this surface are calculated, as shown by the spheres in Fig. <ref>(b), critical points are also located in all the noisy features of the terrain. The first step in TDA is to look at the distribution of critical points of f through the so-called persistence curve, 𝒞(f), and the persistence diagram, 𝒟(f). Critical points can be arranged in a set of pairs, such that each critical point appears in only one pair (c_i, c_j) with f(c_i) < f(c_j) and ℐ(c_i) = ℐ(c_j) - 1 <cit.>. If one plots f(c_i) on the X-axis and, on the Y-axis, a bar with ordinates f(c_i) and f(c_j), then we obtain the persistence diagram, 𝒟(f), as shown in Fig. <ref> (e), and we call p = f(c_j) - f(c_i) the persistence. The noisy features of the terrain are less persistent and hence populate the diagram as bars very close to the diagonal. A persistence threshold can then be applied to this diagram (see Fig. <ref>(c)) to remove the noisy features and evaluate the more significant ones, as shown by spheres in Fig. <ref> (f). Alternatively, the number of critical pairs (c_i, c_j) can also be plotted against the persistence, resulting in the persistence curve, 𝒞(f), as shown in Fig. <ref> (d). Critical pairs at a low value of persistence that correspond to the noisy features disappear quickly and separate from the significant features through a horizontal plateau, as shown by the red straight line in Fig. <ref>(d). Such a horizontal plateau is quite typical when analyzing scalar data, as it clearly separates smaller features from larger ones, and can act as a good guide for deciding the persistence threshold in applications. Once the noise is filtered, given a critical point p, its ascending 𝒜(f) (resp. descending 𝒟(f)) manifold is defined as the set of points belonging to integral lines (lines in the direction of ∇ f) whose origin (resp. destination) is p. In other words, ascending manifolds are segments where the tangent moves towards the local maxima, while descending manifolds move towards the local minima, as shown in Figs. <ref>(g),(h). The transversal intersection of the ascending and descending manifolds results in a unique Morse-Smale segmentation ℳ(f), as shown in Fig. <ref>(f). This methodology to uniquely segment a scalar function into ascending and descending manifolds forms the main basis for our ideas on nanoporous materials. §.§ Implementation Our tool can be used to apply the aforementioned ideas to nanoporous structures such as zeolites, MOFs or metallic nano-pillars. The code is open-source, developed in C++ and published on GitHub (https://github.com/AMDatIMDEA/tda-segmentor). The algorithms for topological analysis are taken from the Topology Toolkit (TTK) <cit.>, which is a software platform that is easily accessible to end users due to a tight integration with Paraview. Moreover, the library comes with a variety of bindings for fast prototyping, or can even be used directly without any further dependency; in our code, we take advantage of this, as it results in much cleaner and more efficient code. Fig. <ref> highlights the workflow of the tool. Starting from the porous structure, the first step is to generate a continuous scalar field on a 3-manifold, which can be in the form of distance or energy grids. This acts as the scalar data input for the tool to generate segments of the void or solid structures. §.§ Inputs: Generation of scalar functions For our particular application to nanoporous structures, we generate two physically motivated volumetric scalar fields, i.e.
the distance grids and the energy grids, which are defined as follows. More generally, other similarly defined material datasets could be analyzed as well, e.g. electron densities, wave functions, guest molecule distributions etc. §.§.§ Distance grids As various chemical properties such as the adsorption of a molecule directly depend on the pore morphology, the distance of a point to the material's internal surface provides useful connections between structure and property. Additionally, we can use the sign of the distance function to extend the same algorithmic framework to study not only porosity but also the corresponding material structures. Specifically, we define the distance grid to be the least distance to the surface of the structure, negative within the structure and positive in the pore (see Fig. <ref>(a) for the zeolite FAU and Fig. <ref>(b) for gold nano-pillar structures). For zeolites, the distance grids can be evaluated using <cit.>, which accepts common crystal structure input formats. §.§.§ Energy grids When a guest molecule enters the porous material, it faces repulsive and attractive forces, resulting in a complex energy landscape that dictates, together with other parameters such as temperature, how the molecule diffuses and/or adsorbs within the pore network. The guest molecule binding sites correspond to the minima of the energy landscape. Interaction energy grids (referred to simply as energy grids) can be evaluated with an existing open source repository <cit.> that may use a fairly standard Lennard-Jones potential based on the universal force field. An example of an energy grid is shown in Fig. <ref> (b), for a methane (CH4) molecule in an LTA zeolite framework. § TDA-SEGMENTOR: USAGE AND EXAMPLES The tool is written with a command-line invocation, the different functionalities of the code are implemented as modules, and detailed documentation is available on the GitHub repository. We explain these modules by applying them to a couple of well-known zeolites such as FAU, MFI or LTA, which have many potential applications as catalysts in the petroleum industry. We first generate the distance grid (f), which is shown in Fig. <ref> (a) for the zeolite FAU; the same is done for MFI. Once the distance grid is generated, the persistence curve 𝒞(f) is computed for both FAU and MFI, as shown in Fig. <ref>(a),(d) respectively. We clearly see that for both FAU and MFI there is a clear plateau which separates smaller features from larger features. Figs. <ref> (b),(e) then plot the persistence diagrams 𝒟(f), where noisy features are less persistent and populate close to the diagonal. By choosing a persistence threshold that corresponds to the plateau of the persistence curve 𝒞(f), the noisy critical points are removed, resulting in persistence diagrams as shown in Figs. <ref>(c),(f) respectively. Once filtration is done by discarding critical points below a certain persistence threshold, Morse-Smale segmentation of the distance grid is performed, generating uniquely distinct segmented regions of the distance grid. As void regions are regions with positive distances that increase from the surface of the atoms, to analyze the void space we look at the ascending manifolds thresholded by a distance of r = 1.6 Å, which corresponds to the void space accessible to a methane molecule. These segments for FAU and MFI are shown in Fig. <ref>(a) and (b) respectively, colored by their segment ID.
Also shown, as a black sphere at the center of each segment, is the local maximum of the distance grid. We now perform a similar analysis using energy grids and compare with the results obtained from the segmentation of the distance grid. While the maxima and minima of the distance grid correspond to the largest pores and the atom centers respectively, local methane-probe interaction energy minima correspond to binding sites of the molecule in the pore. Fig. <ref> (a) presents the segmentation of the accessible pore for a methane molecule using the distance grid, while Fig. <ref> (b) visualizes the segmentation on an energy contour obtained from the energy grid. In (a), the void space is simply segmented into two regions: a large central pore, and an isolated pore in the corner. The energy grids however give different information and segment the energy grid around the binding sites, as shown by the cyan spheres in Fig. <ref>(b). Next, we present another example applied to a gold nano-pillar structure, as shown in Fig. <ref>(a). The topology of the pores, such as the ligament size and diameter, contributes significantly to the macroscopic properties of the material, such as the elastic modulus. From the nano-pillar structure, we can generate distance grids, and by segmenting decreasing or increasing values of the distance grids, we obtain the segmentation of the structure or pore, as shown in Figs. <ref>(d), (e) respectively. Finally, once the Morse-Smale complexes are calculated, this information can be used to generate a graph structure for the void space or the solid structure, respectively. For the distance grids, maxima are located at the center of each segment and a 2-saddle is located at the border of neighboring segments, while for energy grids, minima are at the center of each segment and 1-saddles lie at the borders between segments. By simply connecting the maxima/minima to their respective saddles, we can generate the graph representations for the void accessible to a guest molecule or for the solid structure, as shown in Fig. <ref>. § APPLICATION TO NANOPOROUS STRUCTURES After the segmentation of the structure, we now present some applications of how these segments can be used to develop new ideas in the design and analysis of these structures. §.§ Inaccessible pore volume In section <ref>, we showed some examples of the segmentation of the void space for a methane molecule of radius r = 1.6 Å (see Figs. <ref> and <ref>(a)). If we look closer at the segments that are generated for FAU, for example, some of the segments are completely isolated and disconnected. These are regions that are inaccessible to a guest molecule, which is an important property to know in the design of zeolites. We recall that once the Morse-Smale complexes are generated, each ascending manifold will have a maximum within the segment, bordered by 2-saddles between two segments. By simply looking at the 2-saddles, we can identify these inaccessible pockets, as such regions will not have any 2-saddles connected to any other segment. In our tool, if one calls the accessible void space module, segment information along with the connectivity of the segments is saved, which can be used to analyze these regions (see the documentation on the GitHub repository for more details). §.§ Binding sites for a guest molecule From the segmentation of energy grids as shown in Fig. <ref>(a), we can identify the locations that are local energy minima, which can act as potential binding sites for a guest molecule such as methane.
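Both of the applications above consume the same segment-connectivity information. As a minimal sketch of how such output can be analyzed once exported (the in-memory layout here is hypothetical, not the tool's actual output format):

```python
import networkx as nx

def segment_graph(segment_ids, saddle_rows):
    # saddle_rows: (segment_a, segment_b, saddle_value) for every 2-saddle
    # (or 1-saddle in the energy-grid case) on the border of two segments
    g = nx.Graph()
    g.add_nodes_from(segment_ids)
    for a, b, v in saddle_rows:
        g.add_edge(a, b, saddle_value=v)
    return g

def inaccessible_pockets(g, seed):
    """Segments with no saddle path to `seed`, a segment known to belong
    to the accessible (percolating) void network."""
    return set(g.nodes) - nx.node_connected_component(g, seed)

def barrier(g, site_a, site_b):
    # for energy grids, the saddle value on the edge between two binding
    # sites approximates the barrier for hopping between them
    return g.edges[site_a, site_b]["saddle_value"]
```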
Moreover, we can also save information on the saddle points that lie on the border of two segments, which gives us the energy barrier required to move a guest molecule from one binding site to another. §.§ Structure-property relationship for Au nano-pillars We finally discuss a structural example where the gold nano-pillar shown in Fig. <ref> is compressed by 15%. We generate the segments of the gold structure and compare the distance function histograms of each segment. Segments that have similar histograms are grouped together and are expected to have similar properties. Fig. <ref>, for example, shows four segments grouped into two groups that have similar distance histograms (see Fig. <ref> (b)). These groups have similar stress histograms (see Fig. <ref> (c)) and similar stress profiles (see Fig. <ref>(d)), thus correlating the mechanical response with the geometry of the structure. § CONCLUSIONS In this article, we have presented a tool that offers a new line of analysis of porous structures, taking inspiration from the growing field of topological data analysis. The presented tool takes as input either the distance or energy grids of the structure and can segment both the pore and the structure. Next, using the methods available in persistent homology, such as plotting the persistence curves and persistence diagrams, we filter the noise from the scalar input data to segment only the most significant features. Finally, we show a number of illustrative examples where the generated segments are analyzed to better understand correlations between geometrical features and properties. Furthermore, larger repositories of structures can be analyzed with our code, facilitating tasks such as database screening, structure diversity analysis and training of machine learning models. § ACKNOWLEDGEMENTS We acknowledge the financial support from M-ERA.NET's PORMETALOMICS project supported by MCIN/AEI/10.13039/501100011033 and the European Union's NextGenerationEU/PRTR funds. § DATA AVAILABILITY The code is published as open-source and can be accessed here: https://github.com/AMDatIMDEA/tda-segmentor | http://arxiv.org/abs/2312.16558v1 | {
"authors": [
"Aditya Vasudevan",
"Jorge Zorrilla Prieto",
"Sergei Zorkaltsev",
"Maciej Haranczyk"
],
"categories": [
"cond-mat.mtrl-sci"
],
"primary_category": "cond-mat.mtrl-sci",
"published": "20231227124653",
"title": "tda-segmentor: A tool to extract and analyze local structure and porosity features in porous materials"
} |
Bayesian Sensor Placement for Multi-source Localization of Pathogens in Wastewater Networks Kalvik Jakkala^1 and Srinivas Akella^1 This document is the result of a research project funded in part by a legislative allocation to UNC Charlotte under the CARES Act. ^1The authors are with the Department of Computer Science, University of North Carolina at Charlotte, Charlotte, NC, USA. Email: {kjakkala, sakella}@charlotte.edu Corresponding author: Kalvik Jakkala January 14, 2024 ========================================================================================================================================================================================================================================================================================================================================================================================== § INTRODUCTION The method of optimal distillation profiles introduced in <cit.> has been used to define meson operators with large overlaps onto energy eigenstates of interest, significantly improving on standard distillation <cit.>, and to study glueball-charmonium mixing <cit.> and static potentials <cit.>. The goal of this work is to use it to study the charmonium spectrum and its mixing with glueballs and light mesons in N_f = 3 + 1 gauge ensembles. The significant suppression of excited-state contamination at small time separations yielded by the optimal profiles is expected to improve the signal of quark-disconnected contributions, which are necessary to study iso-scalar meson operators yet are heavily affected by a signal-to-noise problem. Two different pion masses (m_η_c/m_π ≈ 3, 7) are used, which change the decay channels for glueballs and two-particle states. A first step towards studying these decay dynamics is also taken in this work by mapping out the energy spectrum based on single-particle operators. § METHODS Two ensembles of N_f = 3 + 1 clover-improved Wilson fermions with the Lüscher-Weisz gauge action, open boundary conditions in time, and light quark masses at the SU(3) flavor symmetric point are used in this work <cit.>. The first one, denoted as B, has size 48^3 × 144, β = 3.43, lattice spacing a ≈ 0.043 fm and m_π ≈ 420 MeV. The second one, denoted as A1-heavy, has size 32^3 × 96, β = 3.24, m_η_c/m_π = 3 and lattice spacing a ≈ 0.068 fm determined from the spin-singlet splitting m_h_c - m_η_c. The latter ensemble is particularly useful for meson-glueball mixing since the two-pion decay threshold is significantly raised, while the former ensemble is particularly useful for mapping the charmonium spectrum thanks to the light quark masses being tuned to their sum in nature <cit.>. The observable of interest for these ensembles is the temporal correlation matrix C(t) = [ C_cc(t) C_cl(t) C_cg(t); C_lc(t) C_ll(t) C_lg(t); C_gc(t) C_gl(t) C_gg(t) ], whose entries involve different types of operators. The upper left 2×2 block contains only iso-singlet mesonic operators: C_q_1 q_2(t) is the correlation ⟨q̅_1(t) Γ q_1(t) · q̅_2(0) Γ̃ q_2(0)⟩, where Γ̃ = γ_0 Γ^† γ_0 and q_1, q_2 ∈ { c, l }. The off-diagonal terms C_cl(t), C_lc(t) of this block contain information about flavor-mixing between charmonium and light mesons, with the case of mixing between η_c and η^' being of particular interest for this work. The remaining elements C_qg(t), C_gq(t) outside of the 2×2 block contain information about mixing of the charmonium and light iso-singlets with gluonic operators, which in this work are built from the eigenvalues of the 3D gauge-covariant lattice Laplacian operator <cit.>.
Each of the 9 entries of the correlation matrix in Eq. (<ref>) is itself a matrix, since one can use multiple mesonic and gluonic operators with the same quantum numbers. For purely mesonic correlations, i.e. C_q_1q_2(t), the entries of these matrices are given by C_q_1 q_2(t)_mn = δ_q_1 q_2 ⟨ - Tr( Φ_m[t] τ_q_1[t,0] Φ̅_n[0] τ_q_2[0,t]) ⟩_gauge + √(N_q_1) √(N_q_2) ⟨ Tr( Φ_m[t] τ_q_1[t,t]) Tr( Φ̅_n[0] τ_q_2[0,0] ) ⟩_gauge, where N_q denotes the degeneracy of the flavors (N_c = 1, N_l = 3) and the indices m, n go from 0 to N_B - 1, the number of different meson operators, chosen in this work to be the same for charm and light flavors. The vacuum expectation value contributions involving ⟨ Tr( Φ_m[t] τ_q_1[t,t]) ⟩_gauge are explicitly subtracted only in the 0^++ symmetry channel. The modulated elementals Φ_m[t] for a fixed choice of Γ have entries Φ_m[t]_ij αβ = f_m( λ_i[t], λ_j[t]) v_i[t]^† Γ_αβ v_j[t] for a given choice of meson profiles f_m(λ_i[t], λ_j[t]), m = 0,...,N_B-1. Φ̅_m[t] is defined in the same manner but using Γ̃ = γ_0 Γ^† γ_0, and τ_q[t_1,t_2] = V[t_1]^† D_q^-1 V[t_2] is the perambulator for the quark flavor q. The matrix V[t] has 4× N_v columns corresponding to the N_v Laplacian eigenvectors placed into each of the 4 possible spin indices, making it block-diagonal in spin. The value of N_v used for the charm and light perambulators can be different, which can be preferable since the inversions are more expensive for the light quarks. For purely gluonic correlations and those involving gluonic-mesonic mixing, the entries of the matrices are given by C_qg(t)_mb = √(N_q) ⟨ Tr( Φ_m[t] τ_q[t,t] ) G^R_b(0) ⟩_gauge and C_gg(t)_ab = ⟨ G_a^R(t) G_b^R(0) ⟩_gauge, where G^R_a(t), a = 0,...,N_G - 1, are a set of N_G glueball operators chosen to transform according to the same irrep R as the meson q̅ Γ q. For the scalar channel the glueball operators are built from the sum of the N_v Laplacian eigenvalues at a given time, weighted by different profiles, similar to the mesonic elementals. Other glueball operators built from spatial Wilson loops as described in <cit.>, including several loop shapes and levels of APE smearing <cit.>, were tried, but the ones from the eigenvalues yielded the best signal. Different numbers of mesonic and gluonic operators are used in this work; therefore C_gg(t) does not have the same size as C_qq(t), and C_gq(t) is a rectangular matrix. The energies of the different states of interest are extracted by solving a generalized eigenvalue problem (GEVP) <cit.> given by C̃(t) u_n(t,t_0) = ρ_n(t,t_0) C̃(t_0) u_n(t,t_0), where the matrix C̃(t) is obtained by projecting the correlation matrix C(t) onto the singular vectors with the largest singular values of C(t_0). This keeps the contributions of orthogonal operators with good overlap onto the low-lying states and makes the problem better conditioned against statistical noise <cit.>. The effective masses of the n-th state are then extracted as am_eff^n = ln( ρ_n(t,t_0)/ρ_n(t+a,t_0)). To systematically study the effects of the different entries of the correlation matrix in Eq. (<ref>), one can solve the GEVP starting not with the full correlation matrix but only with sub-blocks of it. For example, taking only C_cc(t) allows one to study the charmonium spectrum but neglects possible mixing with light hadrons or gluonic operators. Taking the upper-left 2×2 sub-block in Eq. (<ref>) allows one to study the charmonium and light spectrum including their mixing, yet neglects mixing with gluonic operators. Different combinations are treated in this work.
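As a numerical illustration of this step, the following is a minimal numpy/scipy sketch of the GEVP and effective-mass extraction for a measured correlator matrix C[t]. It assumes C(t_0) is Hermitian positive-definite, and it omits the SVD projection onto C̃(t) as well as the tracking of states by eigenvector overlap (states are simply ordered by eigenvalue magnitude).

```python
import numpy as np
from scipy.linalg import eigh

def gevp_effective_masses(C, t0):
    """C: array of shape (T, N, N) with C[t] the correlation matrix.
    Solves C(t) u = rho C(t0) u and returns am_eff[t, n] =
    log(rho_n(t)/rho_n(t+1)) in lattice units (a = 1 here)."""
    T, N, _ = C.shape
    rho = np.full((T, N), np.nan)
    for t in range(t0 + 1, T):
        vals = eigh(C[t], C[t0], eigvals_only=True)   # generalized eigenvalues
        rho[t] = np.sort(vals)[::-1]                  # rho_0 >= rho_1 >= ...
    with np.errstate(invalid="ignore", divide="ignore"):
        am_eff = np.log(rho[:-1] / rho[1:])
    return am_eff   # shape (T-1, N); row t gives the mass estimate from t -> t+1
```

A plateau in am_eff[:, n] over a range of t then gives the mass estimate for the n-th state, exactly as in the effective-mass formula above.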
When using only mesonic operators, the corresponding optimal profile for the n-th state is given by f̃^(n)( λ_i[t], λ_j[t] ) = ∑_k u_n^(k)(t_1,t_0) f_k( λ_i[t], λ_j[t] ), where u_n^(k)(t_1,t_0) denotes the k-th entry of the vector u_n(t_1,t_0) and t_1 is a value of time chosen such that excited-state contamination is sufficiently suppressed. § SPECTRUM RESULTS The charmonium spectrum in ensemble B was measured omitting the quark-disconnected contributions to the correlation functions, as a first test of the optimal profiles in a close-to-physical setup, using N_v = 325 <cit.>. The lightest particle in this case is the η_c with quantum numbers J^PC = 0^-+, accessible with Γ = γ_5. Figure <ref> shows the effective masses of the ground state using Γ = γ_5, both with standard distillation and with the optimal profile from the GEVP. The suppression of excited-state contamination when using the optimal profile is clear; the mass plateau starts earlier, which effectively more than doubles the plateau interval in this case. This not only yields a more reliable estimate of the plateau average but also means the signal at relatively small time separations is already dominated by the ground state. This is particularly important for extracting a useful signal when quark-disconnected contributions are taken into account. A similar improvement was obtained for Γ operators corresponding to other lattice irreps, and the resulting mass estimates for all states below the DD̅ threshold are shown in Fig. <ref>, where the continuum quantum numbers J^PC are used as labels. The calculated mass of the η_c was subtracted from all other masses to eliminate the effect of the charm quark mistuning. The gray rectangles correspond to these relative masses calculated in this work, while the blue rectangles correspond to their values in nature <cit.>. There is good agreement between the results in this work and the experimental counterparts even with the omission of quark-disconnected contributions, indicating these are most probably suppressed. In particular, the hyperfine splitting in this work (111.8(1.4) MeV) lies very close to the experimental 113.0(5) MeV. It has a level of statistical uncertainty which is competitive with other state-of-the-art lattice calculations which do not use distillation, e.g. 118.6(1.1) MeV <cit.> and 116.2(1.1) MeV <cit.>. The DD̅ threshold for this ensemble is shown in red, using the mass of the D-meson measured in <cit.>, while the experimental result for the D^0D̅^0 threshold is shown in blue. The difference between these two values indicates the effects of not having the light quark masses at their physical values. The optimal profiles for the ground state of Γ operators based only on Dirac matrices, which come from the same GEVP as the reported masses, are shown in Fig. <ref>. None of them resembles a constant, a feature already observed when the optimal profiles were first studied <cit.>. Higher Laplacian eigenvalues are significantly suppressed, so fewer eigenvectors could be used to extract these ground states. However, since excited states are also of interest, it is worth keeping all the calculated eigenvectors. As presented in <cit.>, it is possible to define a spatial profile for spin-singlet operators involving γ_5 in their Γ operator, and the ground and first excited state spatial profiles for Γ = γ_5 are shown in Fig. <ref>. The expected S-wave behavior is observed in terms of radial structure and the presence of nodes.
The spatial size of the lattice provides good resolution for the profiles, and these seem to be well contained in the volume, indicating that finite-volume effects are under control for these two states. In ensemble A1-heavy, both the charmonium and light-meson spectra were mapped using N_v = 200 for both quark masses. Fig. <ref> shows the effective masses for different particles of interest together with the calculated mass plateau averages. To compare the charmonium values with experiment, the mass of the η_c is subtracted from the masses of the J/Ψ and χ_c0, which eliminates the effects of the mistuning of the charm quark mass. The resulting masses show good agreement with experimental values <cit.>. The channels involving only quark-connected contributions in the correlation functions display the clearest signal, e.g., the pion, while the ones involving quark-disconnected contributions are affected by the signal-to-noise problem at very early times, e.g., the η^'. The effective masses coming from the purely gluonic operators for the 0^++ channel are also displayed and seem to approach a value slightly below 2 GeV, close to the two-pion threshold. Nonetheless, the error becomes too large to make a definitive statement, and the inclusion of a two-pion operator is expected to help in this regard. Cases of particular interest are the η^' and η_c, whose effective masses are more clearly displayed in Fig. <ref>. Their effective masses were calculated with and without taking into account the mixing between charmonium and light meson operators; therefore any differences between the points are due to these dynamics. Since both these particles are in the same symmetry channel, they correspond to different energy eigenstates of the same J^PC. The ground state is the η^', and the η_c is higher up the ladder of excitations, some of which are two-particle states. The mass of the η_c from only connected contributions is included for reference, assumed to be not far from the true iso-scalar state. Since the masses from the GEVP using disconnected contributions and mixing start very close to this reference point, it seems the charmonium operators have a large overlap with a state close to this reference. Nonetheless, the downward trend of the effective masses before the error becomes too large indicates a non-zero overlap with lower states. Both with and without mixing, the η^' masses remain consistent with each other, and a reduction of excited-state contamination is seen in the case with mixing. Fig. <ref> shows the profiles for the first three states of the 0^-+ channel in charmonium and light mesons, where again the non-trivial structure in distillation space is observed. While there are similarities between the charm and light profiles in terms of the number of nodes, the most prominent feature is the earlier suppression of eigenvalues in the light profiles compared to the charm ones. This indicates that fewer eigenvectors are required for the study of the light spectrum, which can represent a significant reduction of computational costs, since the light inversions are considerably more expensive than the charm ones.

§ CONCLUSIONS

This work showed that the use of optimal distillation profiles yields a significant improvement over standard distillation in the study of charmonium in a setup with a physical charm quark and three degenerate light quarks at the SU(3) flavor symmetric point with two different pion masses.
The charmonium spectrum with the lighter pion is in good agreement with experiment, and the statistical uncertainty is compatible with other state-of-the-art lattice calculations. The spectrum of light mesons at two different pion masses was also mapped via this same method, and the mixing between charm and light iso-scalar mesons was studied in the ensemble with a heavy pion. Small effects of charm-light flavor mixing were observed for the case of the η^' and η_c states. The optimal profiles for the light mesons are narrower than the charmonium ones, indicating that fewer eigenvectors are required to access the energy eigenstates of interest. This represents a significant reduction of computational costs, since the light inversions are more expensive than the charm ones. Some signal for a scalar glueball slightly below 2 GeV was observed, close to the two-pion decay threshold in this setup, yet better gluonic and two-pion operators are required to perform a systematic study of these decay dynamics, which is a work in progress. Acknowledgement. The authors gratefully acknowledge the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu) for funding this project by providing computing time on the GCS Supercomputer SuperMUC-NG at Leibniz Supercomputing Centre (www.lrz.de) under GCS/LS project ID pn29se, as well as computing time and storage on the GCS Supercomputer JUWELS at Jülich Supercomputing Centre (JSC) under GCS/NIC project ID HWU35. The authors also gratefully acknowledge the scientific support and HPC resources provided by the Erlangen National High Performance Computing Center (NHR@FAU) of the Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) under the NHR project k103bf. M.P. was supported by the European Union's Horizon 2020 research and innovation programme under grant agreement 824093 (STRONG-2020). R.H. is supported by the programme "Netzwerke 2021", an initiative of the Ministry of Culture and Science of the State of Northrhine Westphalia, in the NRW-FAIR network, funding code NW21-024-A. The work is supported by the German Research Foundation (DFG) research unit FOR5269 "Future methods for studying confined gluons in QCD". | http://arxiv.org/abs/2312.16740v1 | {
"authors": [
"Juan Andrés Urrea-Niño",
"Jacob Finkenrath",
"Roman Höllwieser",
"Francesco Knechtli",
"Tomasz Korzec",
"Michael Peardon"
],
"categories": [
"hep-lat"
],
"primary_category": "hep-lat",
"published": "20231227230306",
"title": "Charmonium spectroscopy with optimal distillation profiles"
} |
Improving Low-resource Prompt-based Relation Representation with Multi-view Decoupling Learning Chenghao Fan, Wei Wei, Xiaoye Qu, Zhenyi Lu, Wenfeng Xie, Yu Cheng, and Dangyang Chen January 14, 2024 ===================================================================================================== Recently, prompt-tuning with pre-trained language models (PLMs) has demonstrated a significant ability to enhance relation extraction (RE) tasks. However, in low-resource scenarios, where the available training data is scarce, previous prompt-based methods may still perform poorly for prompt-based representation learning due to a superficial understanding of the relation. To this end, we highlight the importance of learning high-quality relation representations in low-resource scenarios for RE, and propose a novel prompt-based relation representation method, named MVRE (Multi-View Relation Extraction), to better leverage the capacity of PLMs to improve the performance of RE within the low-resource prompt-tuning paradigm. Specifically, MVRE decouples each relation into different perspectives to encompass multi-view relation representations for maximizing the likelihood during relation inference. Furthermore, we also design a Global-Local loss and a Dynamic Initialization method for better alignment of the multi-view relation-representing virtual words, constraining them with the semantics of relation labels during optimization and initialization. Extensive experiments on three benchmark datasets show that our method can achieve state-of-the-art performance in low-resource settings.

§ INTRODUCTION

Relation Extraction (RE) aims to extract the relation between two entities <cit.> from an unstructured text <cit.>. Given the significance of inter-entity relations within textual information, the practice of relation extraction finds extensive utility across various downstream tasks, including dialogue systems <cit.>, information retrieval <cit.>, information extraction <cit.>, and question answering <cit.>. Following the emergence of the paradigm involving pre-trained models and fine-tuning for downstream tasks <cit.>, many recent relation extraction studies have embraced the utilization of large language models <cit.>. In these works, the language models are integrated with classification heads and fine-tuned specifically for relation extraction tasks, yielding promising results. However, the effective training of additional classification heads becomes challenging in situations where task-specific data is scarce. This challenge arises from the disparity between pre-training tasks, such as masked language modeling, and the subsequent fine-tuning tasks encompassing classification and regression. This divergence hampers the seamless adaptation of pre-trained language models (PLMs) to downstream tasks. Recently, prompt tuning has emerged as a promising direction for facilitating few-shot learning, effectively bridging the gap between pre-training and the downstream task <cit.>. Conceptually, prompt-tuning involves template and verbalizer engineering, aiming to discover optimal templates and answer spaces. For example, as shown in Figure <ref> (a), given a sentence “Steve Jobs, co-founder of Apple" for relation extraction, the text will first be wrapped with a relation-specific template, thereby transforming the original relation extraction task into a relation-oriented cloze-style task.
Subsequently, the PLM will predict words from the vocabulary to fill in the [MASK] position, and these predicted words are finally mapped to corresponding labels through a verbalizer. In this example, the filled word “[relation_1]" (e.g., “founded") can be linked to the label “org:founded_by" through the verbalizer. However, for complex relations, such as “per:country_of_birth" and “org:city_of_headquarters", obtaining suitable vocabulary labels is much more challenging. To address this issue, previous work <cit.> applies logic rules to decompose complex relations into descriptions related to the subject and object entity types. Some works construct virtual words for each relation (a trainable “[relation_1]") to substitute the corresponding answer space of the complex relation <cit.>. This paradigm focuses on optimizing the relation representation space and requires PLMs to learn representations for words that are not present in the vocabulary. However, in extremely low-resource scenarios, such as one-shot RE, building robust relation representations with this paradigm is difficult, thus leading to a performance drop. To mitigate the above issue, in this paper, we introduce Multi-view Relation Extraction (MVRE), which improves low-resource prompt-based relation representations with a multi-view decoupling framework. As illustrated in Figure <ref> (b), relations may contain multiple dimensions of information; for instance, “org:founded_by" may entail details about organizations, people's names, time, the action of founding, and so on. According to our theoretical analysis, when limited to a single vector representation, the model may face an upper bound on representation capacity and fail to construct robust representations in low-resource scenarios. Therefore, we propose to optimize the latent space by decoupling it into a joint optimization of multi-view relation representations, thereby maximizing the likelihood during relation inference. By sampling a greater number of relation representations (denoted “[relation_1-i]" in Figure <ref> (b)), we encourage the learned latent space to capture more kinds of information about the corresponding relation. In detail, we achieve this decoupling by disassembling the virtual words into multiple components and predicting these components through successive [MASK] tokens. Furthermore, we introduce a Global-Local loss and a Dynamic Initialization approach to optimize the relation representations by constraining the semantic information of relations. We evaluate MVRE on three relation extraction datasets. Experimental results demonstrate that our method significantly outperforms previous approaches. To sum up, our main contributions are as follows: * To the best of our knowledge, this paper presents the first attempt to improve low-resource prompt-based relation representations with multi-view decoupling learning. In this way, the PLM can be comprehensively utilized for generating robust relation representations from limited data. * To optimize the learning process of multi-view relation representations, we introduce the Global-Local Loss and Dynamic Initialization to impose semantic constraints between virtual relation words. * We conduct extensive experiments on three datasets, and our proposed MVRE can achieve state-of-the-art performance in low-resource scenarios.
§ BACKGROUND AND RELATED WORK

§.§ Prompt-Tuning for RE

Inspired by the “in-context learning" proposed in GPT-3 <cit.>, the approach of eliciting model knowledge through a few prompts has recently attracted increasing attention. In text classification tasks, significant performance gains can be achieved by designing a tailored prompt for a specific task, particularly in few-shot scenarios <cit.>. To alleviate the labor-intensive process of manual prompt creation, there has been extensive exploration into automatic searches for discrete prompts <cit.> and continuous prompts <cit.>. For RE with prompt-tuning, a template function can be defined in the following format: T(x) = x : w_s : [MASK] : w_o, where “:" signifies the operation of concatenation. By employing this template function, the instance x is modified to incorporate the entity pair (w_s, w_o), resulting in the formation of x_prompt = T(x). In this process, x_prompt is the corresponding input to model M, with a [MASK] token in it. Here, Y refers to the relation label set, and 𝒱 denotes the label word set. A verbalizer v is a mapping function v: Y ⟶𝒱, establishing a connection between the relation set and the label word set, where v(y) denotes the label word corresponding to label y. The probability distribution over the relation set is calculated as:

p(y|x) = p_M([MASK] = v(y)|T(x))

In this way, the RE problem can be transformed into a masked language modeling problem by filling the [MASK] token in the input. However, for relation extraction, the complexity and diversity of relations pose challenges in employing these methods to discover suitable templates and answer spaces. <cit.> propose prompt-tuning methods for RE by applying logic rules to construct hierarchical prompts. <cit.> make prompts for each relation and convert RE into a generative summarization problem. These works translate the prediction of a relation into the prediction of a specific sentence, which to some extent addresses the complexity of relations. However, summarizing the intricate information of a relation using these words remains challenging.

§.§ Virtual Relation Word

<cit.> introduce virtual relation words and leverage prompt-tuning for RE by injecting the semantics of relations and entity types. <cit.> propose retrieval-enhanced prompt-tuning by incorporating retrieval of representations obtained through prompt-tuning. These studies devise virtual words for each relation in prompt-tuning, circumventing the need to search for complex answer spaces <cit.>. The corresponding verbalizer v^* for this approach functions as v^*: Y ⟶𝒱^*, where 𝒱^* = {𝒱, 𝒱^Y}, |Y| = |𝒱^Y|, v^*(y) ∈𝒱^Y, y ∈ Y. Here, 𝒱^Y corresponds to the set of virtual relation words created for each relation. The acquisition of this virtual word for a relation is equivalent to obtaining a latent space representation for that relation.
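For concreteness, the sketch below shows how the probability above is evaluated with a masked language model: the instance is wrapped by the template and the [MASK] distribution is read out through the verbalizer. This is our own minimal rendering — the template string, verbalizer entries, and function name are illustrative, not taken from the cited methods — and it assumes single-token label words and a HuggingFace RoBERTa checkpoint.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModelForMaskedLM.from_pretrained("roberta-large").eval()

# Toy verbalizer v: relation label -> label word (illustrative entries).
verbalizer = {"org:founded_by": "founded", "per:employee_of": "works"}

def relation_probs(x, w_s, w_o):
    # T(x) = x : w_s : [MASK] : w_o, rendered as plain text.
    prompt = f"{x} {w_s} {tokenizer.mask_token} {w_o}"
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits            # (1, seq_len, |vocab|)
    pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0, 0]
    probs = logits[0, pos].softmax(-1)
    # p(y|x) = p_M([MASK] = v(y) | T(x)); assumes v(y) is a single token.
    out = {}
    for y, word in verbalizer.items():
        wid = tokenizer(" " + word, add_special_tokens=False)["input_ids"][0]
        out[y] = probs[wid].item()
    return out
```

The virtual-relation-word setup replaces each lookup word v(y) with a newly added token whose embedding is trained, which is exactly the step the next paragraph discusses.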
As the virtual relation words do not exist in the pre-trained model's vocabulary, ensuring robust representations often requires a sufficient amount of data or semantic constraints on the prompt-based instance representation <cit.>. Given an instance x, the prompt-based instance representation h^x can be computed by leveraging the output embedding of the “[MASK]" token at the last layer of the underlying PLM:

h^x = M(T(x))_[MASK]

The prompt-based instance representation h^x can capture the relation corresponding to the instance x and ultimately, through the MLM head, derive the classification probabilities for the respective virtual relation words <cit.>. Most of these approaches confine a complex relation to a single prompt-based vector, which limits the learning of the relation latent space in low-resource scenarios.

§ METHOD

§.§ Preliminaries

Formally, an RE dataset can be denoted as D = {X, Y}, where X is the set of examples and Y is the set of relation labels. For each example x = {w_1, w_2, w_s, ..., w_o, ..., w_n}, the goal of RE is to predict the relation y ∈ Y between the subject entity w_s and the object entity w_o (since an entity may span multiple tokens, we simply use w_s and w_o to denote the full entity spans).

§.§.§ Previous Prompt-tuning in the Standard Scenario

In prompt-based instance learning for relations, it is assumed that for each class y_i, we learn a corresponding latent space representation H_y_i such that F^-1(y_i) = H_y_i, where F denotes the mapping function between labels and representations. In the standard scenario, where all available data can be used, the model minimizes the following loss function:

𝔼_x∼𝒳[-log p(y|x)] = -1/N∑_i=1^N log p(y_i, H_y_i|x_i)

where N represents the total data volume across all classes. In this process, focusing solely on a specific relation y_e, the learned latent space representation Ĥ^standard_y_e for class y_e satisfies F(h^x^e_i) = y_e, where 1 ≤ i ≤#y_e and (x^e_i, y_e) ∈ (𝒳,𝒴). Here, #y_e represents the number of instances in the data with the label y_e. The process of obtaining Ĥ^standard_y_e is akin to optimizing the following expression:

min_θ∑_(x^e_i,y_e)∈ (X,Y) sim(H_y_e, F^-1(y_e,θ))

where “sim" measures the discrepancy between the latent space representations. However, in low-resource scenarios, the value of #y_e can constrain the optimization effectiveness of Eq. <ref>.

§.§ Multi-view Decoupling Learning

Therefore, we assume that in the process of learning the complex relation latent space H_y_i, it is feasible to decompose this space into multiple perspectives and learn from various viewpoints. Consequently, we consider the learning process for a single data pair (x_i, y_i) as follows:

p(y_i, H_y_i|x_i) = ∑_h p(y_i, h|x_i) = ∑_h p(y_i|x_i,h) p(h|x_i) = 𝔼_h∼ p(h|x_i) p(y_i|x_i,h)

where h represents a perspective into which the relation y_i is decomposed; we thus transform the learning of relations into the process of learning each relation's various perspectives. Ultimately, we merge the information from multiple perspectives to optimize the relation inference process. Similar to Eq. <ref>, when there is only one pair of data for a given relation, the learning of its latent space is as follows:

min_θ∑_(x^e,y_e)∈ (𝒳,𝒴), y_e^j∈ y_e sim(H_y_e, F^-1(y_e^j,θ))

In this process, the learned latent space representation Ĥ^1-shot_y_e for class y_e satisfies F(h^x^e_j) = y_e, where 1 ≤ j ≤ m and (x^e, y_e) ∈ (𝒳,𝒴).
Here, m represents the number of decomposed perspectives for the relation y_e.

§.§.§ Sampling of the Relation Latent Space

Under normal circumstances, the latent space learned in a low-resource setting tends to be inferior to that learned in the standard scenario, i.e., sim(Ĥ^1-shot_y_e, H_y_e) ≥ sim(Ĥ^standard_y_e, H_y_e). Hence, as can be seen in Figure <ref> (a), our objective is for the latent space acquired under low resources to closely resemble that learned in the standard scenario, i.e., E(Ĥ^1-shot_y_e) ∼ E(Ĥ^standard_y_e). Combining Eq. <ref> and Eq. <ref>, the representation set {h^x^e_j | 1 ≤ j ≤ m} we acquire needs to resemble the representation set {h^x^e_i | 1 ≤ i ≤#y_e} obtained under standard conditions. This highlights the necessity of sampling a substantial number of h^x^e_j (m ≥ 1) instances with a similar distribution to ensure alignment of the obtained relation latent space with that of standard scenarios. The value of m is discussed in the experimental section. According to Eq. <ref>, h is determined by the parameters of model M, the structure of template T, and the expression “[MASK] = v(y_i)":

p(y_i|x_i,h^x_i) = p(y_i|x_i, M(T(x_i))_[MASK]) = p_M([MASK] = v(y_i)|T(x_i))

To ensure a consistent interpretation of the h^x_i obtained from a single data pair, while simultaneously covering various perspectives of a relation, we sample h^x_i based on the expression “[MASK] = v(y_i)". Specifically, we expand the single “[MASK]" token into m contiguous tokens within the template:

T(x) = x : [sub] : [MASK]_{1...m} : [obj]

and the sampling method for h^x_i_j is h^x_i_j = M(T(x))_[MASK]_j. It is important to note that a relation in text can be represented by a continuous segment of text. Therefore, this approach has the potential to capture multi-view representations of a relation. Based on our sampling method for latent space representations, we derive the probability distribution of y_i as follows:

p(y_i|x_i,h^x_i_j) = p_M([MASK]_j = v_j(y_i)|T(x_i))

Due to the challenge of finding suitable words in the vocabulary to match the different perspectives of a relation, we introduce m new multi-view virtual relation words, denoted as v_j(y_i), for each relation y_i. Combining Eq. <ref>, the final loss function ℒ_MVDL(x_i, y_i) that the model needs to minimize is as follows:

∑_j=1^m[-log( p(h_j^x_i|x_i) p_M([MASK]_j = v_j(y_i)|T(x_i)))]

Here, we employ a matrix W_h to learn the posterior probability of h_j^x_i as p(h_j^x_i|x_i) = σ(W_h^T h_j^x_i)/∑_k=1^m σ(W_h^T h_k^x_i), where σ represents the sigmoid function. When considering all the data, the loss function is given by:

ℒ_MVDL = ∑_(x_i,y_i)∈ (X,Y)ℒ_MVDL(x_i, y_i)

§.§.§ Global-Local Loss

Contrastive learning methods for enhancing representation learning have been employed in many previous works <cit.>. To encourage better alignment of the multi-view virtual relation words v_j(y) with diverse semantic meanings, we introduce the Global-Local Loss (referred to as “GL") to optimize the learning of the multi-view virtual relation words. The Local Loss encourages virtual words representing the same relation to focus on similar information, while the Global Loss ensures that virtual words representing different relations emphasize distinct aspects.
Their expressions are as follows:

ℒ_Local = -1/|Y|m^2∑_r∈ Y[∑_i,j∈[1,m] sim(emb_r^i, emb_r^j)]

ℒ_Global = 1/|Y|^2 m∑_i=1^m [∑_r_u, r_v ∈ Y sim(emb_r_u^i, emb_r_v^i)]

where sim(x,y) = cos(x/||x||, y/||y||) and emb_r^i denotes the embedding of the i-th virtual word v_i(r) for relation r. Finally, the loss function of MVRE is as follows:

ℒ_MVRE = ℒ_MVDL + α * ℒ_Local + β * ℒ_Global

where α and β are hyperparameters. The framework of MVRE is illustrated in Figure <ref> (b).

§.§.§ Dynamic Initialization

Learning the virtual word for a relation involves learning a new word that does not exist in the original vocabulary. Therefore, efficient initialization is crucial for achieving desirable results. In MVRE, it is essential to have meaningful initialization methods that consider the actual position of each virtual word in the text. We introduce Dynamic Initialization (referred to as “DI"), which leverages the PLM's cloze-style capability to identify appropriate initialization tokens for the relation-representing virtual words. Specifically, we first create a manual template for each relation and insert a prompt after it (the manual templates for each relation can be found in Appendix C). Then, we employ the model to find the token with the highest probability, which serves as the initialization token for the respective virtual word. To enhance the construction of relation information, we incorporate the entity information corresponding to the label itself. This knowledge is not involved in the model's training process and is similar to prompts, as it leverages the inherent abilities of the model, thus preserving the characteristics of low-resource scenarios. To mitigate the potential generation of irrelevant tokens during dynamic initialization, particularly for larger m values, we merge the static and dynamic initialization techniques. Inspired by <cit.>, we introduce Static Initialization (referred to as “SI"), where the words for initialization are derived from the labels corresponding to each relation. We integrate the two methods by averaging the token embeddings obtained from static and dynamic initialization.

§ EXPERIMENTS

§.§ Datasets

For comprehensive evaluation, we conduct experiments on three RE datasets: SemEval 2010 Task 8 (SemEval) <cit.>, TACRED <cit.>, and TACRED-Revisit (TACREV) <cit.>. We briefly describe them below. The detailed statistics are provided in Table <ref>. SemEval is a traditional dataset in relation extraction that does not provide entity types. It covers 9 relations with two directions and one special relation “Other". TACRED is a large-scale sentence-level relation extraction dataset drawn from the yearly TAC KBP challenge, which contains 41 common relation types and a special “no relation" type. TACREV builds on the original TACRED dataset: it identifies and corrects the errors in the original development and test sets of TACRED, while the training set is left intact. TACREV and TACRED share the same set of relation types.

§.§ Compared Methods

To evaluate our proposed MVRE, we compare with the following methods: (1) FINE-TUNING applies a conventional fine-tuning approach of PLMs to relation extraction;
(2) GDPNet utilizes a multi-view graph for relation extraction <cit.>; (3) PTR <cit.> proposes prompt-tuning methods for RE by applying logic rules to partition relations into sub-prompts; (4) KnowPrompt <cit.> utilizes virtual relation words for prompt-tuning; (5) RetrievalRE <cit.> employs retrieval to enhance prompt-tuning.

§.§ Implementation Details

We utilize RoBERTa-large for all experiments to make a fair comparison. For test metrics, we use the micro F_1 score of RE as the primary metric to evaluate models, considering that F_1 scores can assess the overall balance of precision and recall. More detailed settings can be found in Appendix A. Low-resource Setting. We adopt the same setting as RetrievalRE <cit.> and perform experiments in 1-, 5-, and 16-shot scenarios to evaluate the performance of our approach in extremely low-resource situations. To mitigate randomness, we employ a fixed set of seeds to randomly sample data five times and report the average performance and variance. During the sampling process, we select k instances for each relation label from the original training sets to compose the few-shot training sets. Standard Setting. In the standard setting, we leverage the full training sets to conduct experiments and compare with previous prompt-tuning methods, including PTR, KnowPrompt, and RetrievalRE.

§.§ Low-Resource Results

We present our results in low-resource settings in Table <ref>. Notably, across all datasets, our MVRE consistently outperforms all previous prompt-tuning models. Particularly remarkable is the substantial improvement in the 1-shot scenario, with gains of 63.9%, 8.7%, and 9.6% over RetrievalRE on SemEval, TACRED, and TACREV, respectively. When k is set to 5 or 16, the magnitude of improvement decreases. On the TACRED and TACREV datasets, when k is set to 16, there is a slight decrease compared to the retrieval-enhanced RetrievalRE. However, overall, the performance remains better than KnowPrompt, a fellow one-stage prompt-tuning method similar to ours. Similar to previous works <cit.>, the performance comparison between the fine-tuning-based methods (FINE-TUNING, GDPNet) and MVRE demonstrates the superiority of prompt-based methods in low-resource settings. It is noteworthy that our method does not exhibit the same significant improvements on TACRED and TACREV as observed on SemEval. We attribute this to two reasons: (1) In TACRED and TACREV, the high proportion of “other" relations (78% in TACRED/V, 17% in SemEval) can make it challenging to categorize relations as “other" in the low-resource scenario. (2) There are more similar relations than in SemEval, such as “org:city_of_headquarters" and “org:stateorprovince_of_headquarters", making them more difficult to distinguish in low-resource scenarios.

§.§ Ablation Study

To prove the effects of the components of MVRE, including the Global-Local Loss (GL), Dynamic Initialization (DI), and Static Initialization (SI), we conduct an ablation study on SemEval and present the results in Table <ref>. Additionally, we present the results under the standard setting in Table <ref>.
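As a reference point for the GL ablation discussed next, the Global-Local loss of the Method section can be written compactly. The sketch below is our own PyTorch rendering (the tensor layout and function name are ours; the defaults mirror the α and β values reported in Appendix A for the DI setting), with the self-similarity terms included as in the stated sums:

```python
import torch
import torch.nn.functional as F

def global_local_loss(emb, alpha=1.2, beta=0.7):
    """emb: (|Y|, m, d) tensor holding the m virtual-word embeddings
    for each of the |Y| relations. Returns alpha*L_local + beta*L_global."""
    e = F.normalize(emb, dim=-1)                 # cosine via unit vectors
    # L_local = -1/(|Y| m^2) sum_r sum_{i,j} sim(emb_r^i, emb_r^j):
    # pulls the m views of the same relation together.
    l_local = -torch.einsum("rid,rjd->rij", e, e).mean()
    # L_global = 1/(|Y|^2 m) sum_i sum_{ru,rv} sim(emb_ru^i, emb_rv^i):
    # pushes the i-th views of different relations apart.
    l_global = torch.einsum("rid,sid->irs", e, e).mean()
    return alpha * l_local + beta * l_global
```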
§.§.§ Standard Results

Under the full-data scenario, MVRE and KnowPrompt yield equivalent results, indicating that our approach remains applicable and does not compromise model performance when enough data is available.

§.§.§ Global-Local Loss

As observed in Table <ref>, the incorporation of the Global-Local Loss (GL) consistently yields improvements across various scenarios, enhancing the relation F1 score by 0.5, 0.4, and 0.5 in the 5-shot, 16-shot, and standard settings, respectively. This demonstrates that constraining the semantics of the virtual relation words' embeddings through a contrastive method can optimize the representation of multi-perspective relations.

§.§.§ The Initialization of Virtual Relation Words

We also conduct an ablation study to validate the effectiveness of the initialization of the virtual relation words. Previous studies have revealed that achieving satisfactory relation representations with random initialization is challenging <cit.>. Hence, to ensure model performance, it is essential to use either Static Initialization (SI) or Dynamic Initialization (DI) during the experiment. When both are employed simultaneously, the corresponding token embeddings are averaged to integrate the two methods. Table <ref> demonstrates that adopting Dynamic Initialization leads to a significant enhancement in model performance compared to Static Initialization. Furthermore, combining both initialization methods also yields substantial improvements.

§.§.§ Effect of the Number m of [MASK] Tokens

Inserting additional “[MASK]" tokens introduces noise and, moreover, challenges the efficiency of decoupled learning. Therefore, simply increasing the number of “[MASK]" tokens cannot by itself enhance performance in low-resource scenarios. As shown in Figure <ref>, we conduct experiments to investigate the impact of varying the number of “[MASK]" tokens on relation extraction effectiveness, aiming to identify the optimal value of m. The performance of the model first increases and then decreases as the value of m grows. Specifically, performance peaks for m within the range [3, 5]. As m increases from 1 to 3, there is a sudden improvement in the model's performance, indicating that decoupling the relation latent space into multiple perspectives contributes significantly to the construction of relation representations. However, when m ≥ 5, the model's performance exhibits a gradual decline. This trend suggests that with a higher number of consecutive “[MASK]" tokens, the prompt-based instance representation obtained by the model tends to contain more noise, thereby adversely affecting the overall model performance.

§.§ Case Study of Dynamic Initialization

We illustrate the feasibility of multiple “[MASK]" tokens and the effectiveness of our Dynamic Initialization through a case study, presented in Table <ref>. Specifically, for a sentence x, we wrap it into T(x) and input T(x) into the model (RoBERTa-large). At each “[MASK]" position, we obtain the token with the highest probability from the model. This token represents the word that the model identifies as best representing the relation based on the given sentence.
During the Dynamic Initialization process, we utilize the embedding of the token with the highest probability to initialize the corresponding position of the virtual relation word. Given the existence of many relations with reversed subject and object roles in the dataset, it is challenging to model them accurately without confusion. Therefore, in Table <ref>, we illustrate our method's treatment of relations that are mutually passive and active by interchanging the subject and object order (we controlled the active and passive voice of relations by swapping the order of [sub] and [obj]). It can be observed that, by increasing the number of [MASK] tokens, RoBERTa-large in the zero-shot scenario effectively captures both active (“was founded in" and “reflected on") and passive (“the founding of" and “been reflected in") voice forms for these two relations. However, when there is only one [MASK] token, the generated tokens are largely unrelated to these relations. This indicates that increasing the number of [MASK] tokens enables the PLM to utilize a broader range of words to depict a complex relation, potentially enhancing the PLM's capacity to represent relations.

§.§ Effectiveness of Low-resource Decoupling Learning

We conduct experiments to demonstrate the effectiveness of decoupling learning in MVRE, which can be formalized in our method as the relation E(Ĥ^1-shot_y_e) ∼ E(Ĥ^standard_y_e). To evaluate the effectiveness of our proposed method, we compare the performance in scenarios with relatively low and relatively ample resources. To be specific, we compare MVRE with one [MASK] against MVRE with m [MASK] tokens. One-[MASK] MVRE is tested in k-shot settings, while m-[MASK] MVRE is tested in (k/m)-shot settings, ensuring a consistent number of sampled relation representations. Additionally, we test one-[MASK] MVRE in (k/m)-shot scenarios for comparison. The results are shown in Figure <ref>. We employ the ratio of model results to represent the overall similarity of the obtained relation representations, given by the formula:

sim(H-model1, H-model2) = F1-score-model1/F1-score-model2.

Experimental results show that, with an equal number of sampled h, the similarity of relation representations obtained under low-resource scenarios surpasses 90% when compared to higher-resource scenarios, a 20% improvement over the one-[MASK] approach. This demonstrates that decoupling relation representations into multi-view perspectives can significantly enhance relation representation capabilities in low-resource scenarios.

§ CONCLUSION

In this paper, we present MVRE for relation extraction, which improves low-resource prompt-based relation representations with multi-view decoupling. Meanwhile, we propose the Global-Local Loss and Dynamic Initialization techniques to constrain the semantics of the virtual relation words, optimizing the learning process of relation representations. Experimental results demonstrate that our method significantly outperforms existing state-of-the-art prompt-tuning approaches in low-resource settings.

§ ACKNOWLEDGMENTS

This work was supported in part by the National Natural Science Foundation of China under Grant No. 62276110, No. 62172039 and in part by the fund of Joint Laboratory of HUST and Pingan Property & Casualty Research (HPL). The authors would also like to thank the anonymous reviewers for their comments on improving the quality of this paper.
§ A. HYPER-PARAMETERS AND REIMPLEMENTATION

This section details the training and inference process of our models. We train and evaluate MVRE with PyTorch and Huggingface Transformers on one NVIDIA 4090. All optimizations are performed with the AdamW optimizer. The random seeds for data sampling are set to 1 through 5. Because the Dynamic Initialization method enhances the initial representation of the virtual words' embeddings, we adopt distinct values of α and β in the Global-Local Loss when using Dynamic Initialization.

§.§ A.1 Standard Setting

The hyperparameters of MVRE in the standard-setting experiments are as follows: * learning rate: 5e-6 * batch size: 8 * max seq length: 256 (for TACRED, TACREV: 512) * gradient accumulation steps: 1 * number of epochs: 16 * α: 2 (1.2 when using Dynamic Initialization) * β: 0.1 (0.7 when using Dynamic Initialization)

§.§ A.2 Low-Resource Setting

The hyperparameters of MVRE in the low-resource setting experiments are as follows: * learning rate: 3e-5 * batch size: 8 * max seq length: 256 (for TACRED, TACREV: 512) * gradient accumulation steps: 1 * number of epochs: 40 * α: 2 (1.2 when using Dynamic Initialization) * β: 0.1 (0.7 when using Dynamic Initialization)

§ B. VISUALIZATION OF MULTI-VIEW CAPTURE

MVRE is capable of decoupling each complex relation into multiple virtual words, each of which (i.e., a view) denotes a probability distribution over multiple aspects of a complex relation. MVRE then jointly optimizes the representations of these multiple views to maximize the likelihood during inference. In Table <ref>, only the top-1 result is displayed, with function words selected for their broad semantic coverage. To provide a clearer illustration of the concept of multi-view decoupling, we designed a dedicated experiment to explore the correlation between different virtual words and various views, such as “time", “people", “place", and “action". We present a visualization in Figure <ref>. In detail, we compute the cosine similarity between each virtual word in MVRE and all non-special words[Special words: such as [CLS] and other special tokens in the vocabulary, including virtual words] in the vocabulary. Then, we calculate the cosine similarity between the top 10 most similar words and words related to “time", “people", “place", and “action". For example, regarding “time", we compute the similarity with words such as “time", “when", and other temporal descriptors. Finally, the product of these two similarities serves as a measure of relevance between the virtual word and the four specified perspectives: time, people, place, and action.

§ C. MANUALLY-CONSTRUCTED TEMPLATES FOR DYNAMIC INITIALIZATION

In this section, we present the manually constructed templates used for Dynamic Initialization in SemEval (Table <ref>) and TACRED (also applicable to TACREV). During Dynamic Initialization, we utilize RoBERTa-large to predict the word with the highest probability at each [MASK], and then use the embedding of this word to initialize the virtual words corresponding to the respective relations. In the table, m indicates the number of [MASK] tokens. | http://arxiv.org/abs/2312.17267v1 | {
"authors": [
"Chenghao Fan",
"Wei Wei",
"Xiaoye Qu",
"Zhenyi Lu",
"Wenfeng Xie",
"Yu Cheng",
"Dangyang Chen"
],
"categories": [
"cs.CL",
"cs.AI"
],
"primary_category": "cs.CL",
"published": "20231226141616",
"title": "Improving Low-resource Prompt-based Relation Representation with Multi-view Decoupling Learning"
} |
Autonomous Docking Method via Non-linear Model Predictive Control Roni Permana Saputra Research Center for Smart Mechatronics National Research and Innovation Agency Bandung, Indonesia Email: [email protected] ORCID: https://orcid.org/0000-0001-6989-8830 Eko Joni Pristianto Research Center for Telecommunication National Research and Innovation Agency Bandung, Indonesia Email: [email protected] Midriem Mirdanies Research Center for Smart Mechatronics National Research and Innovation Agency Bandung, Indonesia Email: [email protected] Dayat Kurniawan Research Center for Telecommunication National Research and Innovation Agency Bandung, Indonesia Email: [email protected] January 14, 2024 ==================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================

This paper presents a method of autonomous control for docking tasks of a single-seat personal mobility vehicle. We propose non-linear model predictive control (NMPC) based visual servoing to achieve the desired autonomous docking task. The proposed method is implemented on a four-wheel electric wheelchair platform with two independent rear driving wheels and two front castor wheels. The NMPC-based visual servoing technique leverages the information extracted from a visual sensor as real-time feedback for the NMPC to control the motion of the vehicle, achieving the desired autonomous docking task. To evaluate the performance of the proposed controller, a number of experiments are conducted both in simulation and in the actual setting. The controller performance is then evaluated against the controller design requirements. The simulation results of the autonomous docking experiments show that the proposed controller successfully meets the design requirements, generating real-time trajectories for the vehicle performing autonomous docking tasks in several different scenarios. Keywords: model predictive control, autonomous vehicle, autonomous docking, visual servoing, personal mobility vehicle

§ INTRODUCTION

The use of environmentally friendly modes of transportation, especially vehicles with electric drives, is currently being actively encouraged. One type of electric vehicle that is gaining popularity is the personal mobility vehicle (PMV). This vehicle type has one or more wheels driven by an electric motor and travels at around 2 m/s—very close to normal human walking speed. It is commonly used as a mode of transportation for short-distance trips in urban environments or within a specific, limited area. This type of vehicle provides various advantages, ranging from environmental to socio-economic benefits. Over the past few decades, technological advancements in control, sensing, and artificial intelligence, particularly those applied to transportation, have encouraged the development of autonomous vehicle technology.
This technology is expected to benefit us by improving transportation safety, increasing driving efficiency and transportation convenience, reducing congestion, and providing various other benefits. One of the important features required for an autonomous vehicle is the capability to navigate and dock itself autonomously at a charging dock station. The vehicles can then be recharged according to an effective plan and schedule to maximize their working efficiency. This paper addresses the autonomous docking problem for an autonomous personal mobility vehicle. We propose non-linear model predictive control (NMPC) based visual servoing to achieve the desired autonomous docking task. The NMPC-based visual servoing technique leverages the information extracted from a visual sensor as real-time feedback for the NMPC to control the motion of the vehicle, achieving the desired autonomous docking task. In forthcoming development, the proposed approach is expected to be fully implemented on a four-wheeled personal mobility vehicle. This vehicle consists of two independent rear drive wheels and two front castor wheels. Additionally, the vehicle is equipped with an RGB-D camera sensor, which provides the visual servoing input, as seen in Fig. 1.

§ RELATED WORK

A number of studies have been conducted in the field of optimal control methods due to their demonstrated success in various applications. These methods are commonly utilized, particularly in situations where the dynamics of the controlled systems are complex or when there are numerous operating constraints that need to be satisfied <cit.>. Model predictive control (MPC) has become increasingly popular and is now widely utilized across various applications <cit.>. MPC achieves the desired system behavior by utilizing a system model to forecast future states over a finite time horizon. It then optimizes control decisions based on a specified constrained cost function <cit.>. A key advantage of MPC is its ability to incorporate multiple constraints into the controller formulation. This enables the controller to effectively handle complex problems, including intricate desired behaviors, environmental constraints, and hardware limitations. Numerous studies have utilized different forms of MPC for controlling non-holonomic mobile robots. These studies have explored various applications, including point stabilization, path and trajectory tracking, and collision avoidance <cit.>. Neunert et al. <cit.> introduced a framework that addresses the challenge of real-time nonlinear model predictive control (MPC) for mobile robots. Their approach efficiently solves both trajectory optimization and tracking problems in a unified manner. The framework presents an iterative optimal control algorithm known as Sequential Linear Quadratic (SLQ) to address the nonlinear Model Predictive Control (MPC) problem. Li et al. suggest employing a primal-dual neural network to address the Quadratic Programming (QP) problem within a finite receding horizon. This approach aims to enable trajectory control of a mobile robot system <cit.>. The proposed approach ensures convergence of the formulated constrained quadratic programming (QP) cost function to the exact optimal values and demonstrates effective performance on a practical mobile robot system. Hirose et al.
also introduce a work that integrates neural networks into the model predictive control (MPC) framework <cit.>. The authors in <cit.> employ a deep neural network to acquire the MPC policy, which differs from the approach taken in <cit.>. With the progression of technology, there is a corresponding enhancement in the capabilities and processing speed of camera sensors and computers. This progress has resulted in the development of advanced image-processing algorithms capable of extracting and interpreting characteristics within unprocessed visual data obtained from camera sensors. One of the techniques employed is marker detection and pose estimation with the ArUco marker. The term "ArUco marker" denotes a quadrilateral marker that has a wide black border and an inner binary grid, which serves to indicate its distinct identification (id), as proposed by Garrido-Jurado et al. in <cit.>. The ArUco marker has been shown to be capable of producing accurate pose estimates <cit.> and is effective in detecting several markers simultaneously <cit.>. ArUco markers have been utilized in many studies, such as the work conducted by Miranda et al., where they employed ArUco markers during the landing phase of an autonomous navigation system for a delivery drone <cit.>. Furthermore, Volden et al. have successfully incorporated ArUco markers into the autonomous docking system of unmanned surface vehicles (USVs) <cit.>.

§ PROBLEM FORMULATION

In this section, we derive the kinematic model of the vehicle platform, which is used as the predictive model of the controller.

§.§ Vehicle Kinematic Model

SEATER is a non-holonomic wheeled vehicle with a differential drive configuration (see Fig. <ref>). To model the kinematics of SEATER, we use a simple unicycle model, in which the state vector of the vehicle is given by:

x^veh(t) = [x^veh(t); y^veh(t); ϕ^veh(t)]

The first two (spatial) components of the state x = [x, y, ϕ]^T represent the position of the vehicle in the plane, while the angle ϕ corresponds to the heading of the vehicle. The vehicle is controlled with input:

u(t) = [v(t); ω(t)]

where v and ω are the forward velocity and angular velocity of the vehicle, respectively. The vehicle dynamics are derived as:

𝐱̇^veh(t) = 𝐟(𝐱^veh(t), 𝐮(t)) = [v(t)cosϕ^veh(t); v(t)sinϕ^veh(t); ω(t)],

or, in discrete time:

𝐱^veh_|k+1 = 𝐟(𝐱^veh_|k, 𝐮_|k) = 𝐱^veh_|k + Δ t [v_|kcosϕ^veh_|k; v_|ksinϕ^veh_|k; ω_|k].

§.§ Autonomous Docking as a Nonlinear Model Predictive Control Problem

The autonomous docking problem — illustrated in Fig. <ref> — can be formulated as an optimal control problem, where the high-level objective is to minimise the state deviation from the vehicle's current pose, 𝐱^veh, to the vehicle's target docking pose, 𝐱^doc:

J = ||𝐱^veh - 𝐱^doc||^2

To ensure safety during autonomous docking, the vehicle needs to avoid colliding with all possible obstacles (i.e., the prohibited area in Fig. 3) in all directions, so that:

𝐱^veh_|k∈ X^free

where X^free is the set of all possible vehicle states in which the vehicle is free from collision.
The autonomous docking problem can therefore be written as a model predictive control problem as follows:

min_𝐮 J_N(𝐱_0,𝐮) = ∑_k=0^N-1𝐱^veh_|k-𝐱^doc_𝐐^2 + 𝐮_|k-𝐮^t_𝐑^2
s.t. 𝐱^veh_|k+1 = 𝐟(𝐱^veh_|k,𝐮_|k), ∀ k ∈ [0,N-1]
𝐱^veh_|0 = 𝐱_0
𝐮_|k ∈ U, ∀ k ∈ [0,N-1]
𝐱^veh_|k ∈ X^free, ∀ k ∈ [0,N]

where 𝐱^veh_|k-𝐱^doc_𝐐^2 and 𝐮_|k-𝐮^t_𝐑^2 are the state-deviation and control-effort terms, respectively. The expression 𝐀_𝐁^2 ≡𝐀^T𝐁𝐀. The matrices 𝐐, 𝐑, and 𝐏 are positive definite symmetric weighting matrices of the appropriate dimensions. The weighting matrices are acquired through a heuristic process of hyperparameter tuning.

§.§ Realtime Control Feedback via Position Based Visual Servoing (PBVS)

In order to provide real-time feedback to the NMPC controller — i.e., the vehicle's current pose, 𝐱^veh, relative to the vehicle's target docking pose, 𝐱^doc — position-based visual servoing (PBVS) is utilised in this work. In the PBVS method, the visual feedback, or image measurements, is used to determine the pose of the target with respect to the camera frame, which has a fixed transformation to the vehicle frame. The error between the current and the target pose is defined in the task space of the vehicle. Hence, the error is a function of the pose parameters:

𝐞(t) = 𝐱(t) - 𝐱^doc

§ PROPOSED METHOD

§.§ System Architecture

We propose marker-based visual servoing with nonlinear model predictive control. Fig. <ref> illustrates the system architecture of the proposed NMPC-based visual servoing control for the autonomous docking task. In general, the system used in this work consists of two main components: (i) the Marker Detector and Pose Estimation, and (ii) the Nonlinear Model Predictive Controller.

§.§ Marker Detector and Pose Estimation

In this study, we utilise an Intel RealSense D455 camera to detect an ArUco marker and determine its estimated position relative to the camera frame. This position provides a visual reference for the autonomous docking process. The application program discussed in this paper was developed using the Python programming language and the OpenCV library. Additionally, we conducted a camera calibration procedure to compensate for lens errors. This procedure involved using chessboard images captured from multiple angles with the Intel RealSense D455 camera. In order to validate the position estimation of the ArUco marker, we utilized a marker detection program to detect the ArUco marker, as shown in Fig. <ref>. The marker was printed beforehand, and its actual length was measured in centimeters. Subsequently, the camera's distance from the marker can be determined by calculating its xyz coordinates in centimeters. In the event that multiple markers are detected, the program will prioritize the marker with the lowest identification number. The xyz coordinates will be leveraged as a reference for guiding the vehicle toward the marker.

§.§ Nonlinear Model Predictive Algorithm

The discrete NMPC controller here is defined as an OCP with a finite control horizon N, which evaluates the vehicle state 𝐱^veh at every sampling instant k. The optimal control, 𝐮^*, for the vehicle is then produced at every time step by solving the OCP with respect to the decomposed objective function at the respective stage, J_N^s, which satisfies the optimal value V_N^s(𝐱̂) (i.e., minimising the output of the cost function J_N^s subject to the constraints). The first element of the produced optimal control trajectory, 𝐮^*_0, is then applied to the system.
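For illustration, this receding-horizon step can be prototyped directly with CasADi. The sketch below is our own minimal single-shooting formulation of the OCP with the unicycle model — not the authors' implementation — using illustrative weights, box constraints on the inputs, IPOPT as the backend (our choice; the paper does not name its solver), and omitting the collision-avoidance constraint for brevity:

```python
import casadi as ca
import numpy as np

N, dt = 20, 0.1                        # horizon length and sampling time
Q = np.diag([10.0, 10.0, 1.0])         # state-deviation weights (illustrative)
R = np.diag([0.5, 0.05])               # control-effort weights (illustrative)

U = ca.SX.sym("U", 2, N)               # decision variables: [v; omega] over horizon
p = ca.SX.sym("p", 6)                  # parameters: current pose x0, target pose xd
x0, xd = p[0:3], p[3:6]

cost, xk = 0, x0
for k in range(N):
    v, w = U[0, k], U[1, k]
    # discrete unicycle kinematics: x_{k+1} = x_k + dt * f(x_k, u_k)
    xk = xk + dt * ca.vertcat(v * ca.cos(xk[2]), v * ca.sin(xk[2]), w)
    e = xk - xd
    cost += ca.mtimes([e.T, Q, e]) + ca.mtimes([U[:, k].T, R, U[:, k]])

nlp = {"x": ca.reshape(U, 2 * N, 1), "p": p, "f": cost}
solver = ca.nlpsol("solver", "ipopt", nlp)

# One NMPC step: solve from the current pose, apply only the first control u*_0.
sol = solver(x0=np.zeros(2 * N),
             p=[0.0, 0.0, 0.0, 1.0, 0.5, 0.0],   # current pose -> docking pose
             lbx=-1.0, ubx=1.0)                  # box constraints on [v, omega]
u_star0 = np.array(sol["x"]).reshape(2, N, order="F")[:, 0]
```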
Algorithm 1 summarises the proposed approach.

§.§ Controller Design Requirements and Objective Function

There are two main design requirements that the designed controller needs to meet to achieve the autonomous docking task successfully: position accuracy and heading accuracy. We introduce two specific cost functions encoding the desired behaviours that correspond to these controller design requirements: * Distance-to-point objective: Given the current vehicle position 𝐩^veh = [x^veh, y^veh]^⊺ and the target vehicle docking position 𝐩^doc = [x^doc, y^doc]^⊺, minimising the distance from the current vehicle position to the target position will drive the vehicle to approach the final target position for the docking operation. The distance between these two points in 2D space can be defined via the Euclidean norm: F_1 = 𝐩^veh-𝐩^doc^2 * Heading objective: Given the current vehicle orientation ϕ^veh and the target docking orientation ϕ^doc, minimising the difference Δ between these orientations ensures that the vehicle can perform the docking operation with a feasible heading: F_2 = Δ(ϕ^veh,ϕ^doc) = π - | |ϕ^veh - ϕ^doc| - π |, where ϕ∈ [-π, π], so that the periodicity of angles (i.e., angle wrapping) is taken into account.

§.§ Solving the Optimal Control Problem (OCP) via Non-Linear Programming using CasADi

In this study, we use the CasADi API <cit.> to compute the real-time optimisation problem in order to solve the finite OCP in the proposed NMPC. The underlying solver is invoked through the API to compute the OCP. The OCP is formulated as a standard non-linear programming (NLP) problem. We refer the reader to <cit.> and <cit.> for a more detailed formulation.

§ EXPERIMENTAL SETUP

In order to evaluate the efficacy of the proposed method, a series of experimental scenarios were set up and executed on the SEATER platform. Initially, the performance of the proposed marker-based pose estimation is assessed. Furthermore, an assessment is conducted to evaluate the performance of the proposed docking controller in accordance with the designated controller design requirements. To evaluate the controller performance, we designed several experiment scenarios, each emphasizing one of the design requirements: * Position Deviation (Shifting): The controller is required to generate a real-time trajectory for the vehicle from its initial pose to the desired docking pose within the position deviation tolerance. * Heading Deviation (Tilting): The controller is required to generate a real-time trajectory for the vehicle from its initial pose to the desired docking heading angle within the heading angle deviation tolerance.
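Complementing this setup, the marker detection and pose estimation stage described in the Proposed Method section can be sketched as follows. This is our own minimal example (the calibration values and function name are illustrative; it assumes OpenCV ≥ 4.7 with the contrib ArUco module and uses solvePnP on the detected corners), reflecting the 100 × 100 mm marker and lowest-id priority described above:

```python
import cv2
import numpy as np

# Intrinsics from the chessboard calibration (illustrative values).
K = np.array([[615.0, 0.0, 320.0], [0.0, 615.0, 240.0], [0.0, 0.0, 1.0]])
dist = np.zeros(5)
MARKER_LEN = 0.10                      # 100 x 100 mm marker, in metres

# 3D marker corners in the marker frame, ordered like ArUco detections
# (top-left, top-right, bottom-right, bottom-left), on the z = 0 plane.
obj_pts = 0.5 * MARKER_LEN * np.array(
    [[-1, 1, 0], [1, 1, 0], [1, -1, 0], [-1, -1, 0]], dtype=np.float32)

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(aruco_dict, cv2.aruco.DetectorParameters())

def marker_pose(gray):
    """Return (marker id, tvec in metres) of the lowest-id visible marker."""
    corners, ids, _ = detector.detectMarkers(gray)
    if ids is None:
        return None
    i = int(np.argmin(ids))            # prioritise the lowest marker id
    ok, rvec, tvec = cv2.solvePnP(obj_pts, corners[i].reshape(4, 2), K, dist)
    return (int(ids[i]), tvec.ravel()) if ok else None
```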
We investigated the correlation between marker size and the Intel RealSense D455 camera's minimum and maximum detection distance.This assessment could be used to identify the optimal marker size for the autonomous docking application.Table <ref> provides the findings of the evaluation of the relationship between marker size and minimum and maximum detection distance.Additionally, the relationship between detection distance and detection inaccuracy was investigated as well.Fig <ref> shows the evaluation results of the correlation between detection distance and detection error for three different marker sizes.Based on the conducted experiments, it has been shown that using marker size dimensions of 100 x 100 mm results in a more consistent detection for the purpose of pose estimation. Specifically, the detection accuracy is found to be higher when the marker is located at distances ranging from 350 mm to 1250 mm, as this range yields a reduced margin of error in the estimated pose.Hence, the obtained results may be used as a valuable point of reference for determining the appropriate marker size and optimal operational distance in the context of the proposed autonomous docking technique.§.§ Autonomous Docking Simulation ExperimentsFig. <ref> shows the simulation experiment results of autonomous docking via non-linear model predictive control in four different scenarios. It can be found from the results that the proposed non-linear model predictive controller has been successfully achieve the desired controller design requirement to generate realtime trajectory for the vehicle from four different initial poses to achieve the desired docking position and docking heading angle within the deviation tolerances.Table <ref> shows the the position and heading angle deviations from the simulation experiment results of autonomous docking.It can be found from the results that the average position deviation is 0.002132 meter and the the average heading angle deviation is 0.000381^∘.§ CONCLUSION This paper presents a proposed method of autonomous control for docking tasks of a single-seat personal mobility vehicle.We proposed a non-linear model predictive control (NMPC) based visual servoing to achieves the desired autonomous docking task.The proposed method is designed to be implemented on a four-wheel electric wheelchair platform, with two independent rear driving wheels and two front castor wheels.A series of simulation experiments has been systematically conducted to evaluate the performance of the proposed controller method to achieved the controller design requirement.The experimental outcomes obtained from various scenarios in the simulations provide evidence that the proposed controller technique is capable of achieving the controller design requirements for accomplishing the autonomous docking task.Further study on this issue will be focused on integrating the proposed controller into the physical platform and conducting rigorous experiments and evaluations in a real-world setting. § ACKNOWLEDGMENTThis work is fully funded by the Research Organisation for Electronics and Informatics, National Research and Innovation Agency (Badan Riset dan Inovasi Nasional - BRIN).IEEEtran | http://arxiv.org/abs/2312.16629v1 | {
"authors": [
"Roni Permana Saputra",
"Midriem Mirdanies",
"Eko Joni Pristianto",
"Dayat Kurniawan"
],
"categories": [
"cs.RO",
"cs.SY",
"eess.SY"
],
"primary_category": "cs.RO",
"published": "20231227162718",
"title": "Autonomous Docking Method via Non-linear Model Predictive Control"
} |
Bela Gipp January 14, 2024 ==================== This article examines the implicit regularization effect of Stochastic Gradient Descent (SGD). We consider the case of SGD without replacement, the variant typically used to optimize large-scale neural networks. We analyze this algorithm in a more realistic regime than typically considered in theoretical works on SGD, as, e.g., we allow the product of the learning rate and Hessian to be O(1). Our core theoretical result is that optimizing with SGD without replacement is locally equivalent to making an additional step on a novel regularizer. This implies that the trajectory of SGD without replacement diverges from both noise-injected GD and SGD with replacement (in which batches are sampled i.i.d.). Indeed, the two SGDs travel flat regions of the loss landscape in distinct directions and at different speeds. In expectation, SGD without replacement may escape saddles significantly faster and present a smaller variance. Moreover, we find that SGD implicitly regularizes the trace of the noise covariance in the eigendirections of small and negative Hessian eigenvalues. This coincides with penalizing a weighted trace of the Fisher Matrix and the Hessian on several vision tasks, thus encouraging sparsity in the spectrum of the Hessian of the loss, in line with empirical observations from prior work. We also propose an explanation for why SGD does not train at the edge of stability (as opposed to GD). § INTRODUCTION §.§ The Problem and the Background This article examines the implicit regularization effect of Stochastic Gradient Descent (SGD) without replacement, also known as random reshuffling. This is the particular variant of SGD most commonly used in practice to optimize large-scale neural networks. To put this into context, recall that gradient descent (GD) and its variants are a generic set of algorithms for minimizing an empirical loss L, which depends on a set of parameters θ and a dataset D of size n. Specifically, the update to the parameters θ_i after the first i steps of optimization is as follows:
θ_i+1 = θ_i - η/b ∑_z ∈ B_i+1 ∇_θ L(θ_i, z), i = 0, 1, …,
where the i-th batch B_i is a subset of the training dataset D of size b and η is the step size (or learning rate). Full-batch gradient descent corresponds to the case when B_i = D is the entire dataset, whereas mini-batch SGD corresponds to smaller values of b. In practice, smaller values of b are both computationally faster and often lead to better performance <cit.>. The gain in speed is almost immediate, since loss gradients at each step must only be computed on the current batch (see <ref>). The nature of the improvement in model quality from smaller values of b, however, is less obvious. The differences between full-batch GD and mini-batch SGD have been extensively studied both empirically, e.g., <cit.>, and theoretically, e.g., <cit.>. However, virtually all the well-known theoretical work, except <cit.>, treats only the case of mini-batch SGD with replacement, in which the batches B_i are sampled i.i.d. The goal of this article, in contrast, is to give a theoretical analysis of a significantly more realistic setting for mini-batch SGD, in which we consider: * The algorithm practitioners use: We analyze the algorithm most commonly used in practice to optimize large-scale neural networks, called SGD without replacement or sometimes random reshuffling, see <ref>.
The key difference between SGD with and without replacement is that batches sampled without replacement are disjoint and form a random partition of the training dataset. These batches are, therefore, statistically dependent. This induces empirical differences, see, e.g., <ref>, and mathematical challenges, see <ref>. We also prove that it leads to qualitatively different behaviors than SGD with replacement, see <ref> and <ref>. * In real-world optimization regimes: Previous analyses of SGD made strong assumptions on the step size η, the number k of steps of SGD that are analyzed (usually one epoch at a time), or the size of the derivatives of the loss L. For instance, <cit.> asks that
η · k · ‖∇L‖ ≪ 1 and η · k · ‖∇²L‖ ≪ 1.
The first hypothesis roughly requires that the total movement in parameter space is small and corresponds to allowing a local analysis of SGD. We will keep this assumption in the present work. However, we drop the second, see <ref>, as it is often unrealistic, as we explain in <ref>. For instance, a well-known empirical observation about neural networks, known as progressive sharpening <cit.>, is that ‖∇²L‖ often grows throughout training until it is of order η^-1. §.§ Informal Overview of the Results We provide here an overview of our results, with an emphasis on new insights into the differences between SGD with and without replacement. We also offer several theoretical insights that can help explain a range of phenomena empirically observed in training large neural networks. Our analysis is very general and does not make any assumptions about the learning task, model, or dataset. We start with an informal statement of our main result. We show that, relative to full-batch GD, SGD without replacement implicitly adds a regularizer. This penalizes the covariance of the gradients or, analogously, the norm of the loss gradients, measured in a loss-dependent norm determined by the Hessian of the loss. In expectation over batch sampling, one epoch of SGD without replacement differs from the same number of steps of SGD with replacement or GD by an additional step on a regularizer. At a stationary point, the step on the i-th parameter is
step_i = - η/(b-1) ∑_{params j} R_{i,j} · d/dθ_i Cov_{z ∈ D}( ∇L(θ, z) )_{j,j},
where R is a function of the Hessian. Away from a stationary point, there is an additional term dependent on the trace of the empirical covariance of the Hessian. Note that, in expectation, the trajectories of GD and SGD with replacement are the same. However, Theorem <ref> indicates that SGD without replacement is different, even on average. We also discuss how SGD with and without replacement have different variances, see <ref>. Our goal in the rest of this section is to describe the nature of the regularizer from Theorem <ref> and, in particular, the role of R in driving qualitatively different behaviors between SGD with and without replacement. As a starting point, let us consider a stationary point θ for the full-batch loss L(θ, D). The matrix R depends on θ and is a function of the Hessian of the loss ∇²L and its variance. The bias of <ref> corresponds to a sum of steps with learning rate η/b, with a power of the Hessian as preconditioning matrix, on regularizers, all of the following form:
trace( S · Cov_{z ∈ D}( ∇L(θ, z) ) ) = 𝔼_{z ∈ D}[ ‖∇L(θ, z) - ∇L(θ, D)‖_S² ]
for some matrix S that is a function of the Hessian and inherits its eigenspaces.
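Before analyzing the weighting matrices, it may help to see the penalized quantity computed concretely. The following is a minimal numpy sketch (ours, for illustration only, on a toy least-squares loss with synthetic data) of trace( S · Cov_{z ∈ D}( ∇L(θ, z) ) ) for the simplest choice S = I:

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 256, 10
    X = rng.normal(size=(n, d))
    y = rng.normal(size=n)
    theta = rng.normal(size=d)

    # Per-example gradients of L(theta, (x, y)) = (x @ theta - y)**2 / 2
    G = (X @ theta - y)[:, None] * X              # row i: gradient on example z_i
    g_bar = G.mean(axis=0)                        # full-batch gradient
    reg = ((G - g_bar) ** 2).sum(axis=1).mean()   # trace(Cov) = E || grad - mean grad ||^2
    print(reg)

At a stationary point, the full-batch gradient g_bar vanishes and this quantity reduces to the mean squared per-example gradient norm, a correspondence used repeatedly below.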
Let us work in a basis of eigenvectors {θ_1, θ_2, …} of the Hessian, corresponding to the eigenvalues {λ_1, λ_2, …}, and define the effective step size c = ηk > 0. Then the entries of R are
R_{i,j} = 1/2 ∑_{h=0}^{k+2} (-ηλ_i)^h ∑_{l=h+2}^{k} C(k,l) (-ηλ_j)^{l-h-2},
where C(k,l) denotes the binomial coefficient, and they approximately correspond to the following quantities:
R_{i,j} ∼
  c/2 · 1/2 (constant), if cλ_i, cλ_j are small;
  (cλ_i)^{-1} (small), if cλ_i ≫ 0 and cλ_j is small;
  (cλ_j)^{-1} (small), if cλ_i is small and cλ_j ≫ 0;
  (1 + (-ηλ_i)^k)/(cλ_i · cλ_j) (very small or very big), if cλ_i, cλ_j ≫ 0;
  Ω((cλ_j)^{-2} exp(-cλ_j)) (very big), if cλ_j ≪ 0.
Note that when cλ ≪ 1 for all λs, we are in the setting (<ref>) of works such as <cit.>, and we indeed recover their core results, with the regularizer corresponding to (c/4)·𝔼_{z ∈ D}[‖∇L(z) - ∇L‖²], see <ref> and <ref>. However, in our analysis, not only does cλ not need to be small in absolute value, but we allow the eigenvalues λ ∈ [-O(1/c), 2/η + O(1)]. This setting includes a richer set of phenomena than found in prior work and contains most real-world training regimes. We discuss several phenomena that arise in the following section. §.§ Implications of Theorem <ref> §.§.§ Shaping the Hessian We find that <ref> is potentially enough to explain several empirically observed phenomena about implicit regularization, for instance that: * SGD converges to almost-global loss minima even though spurious minima of the loss exist <cit.>. * SGD shapes the spectrum of the Hessian of the loss, e.g., producing clusters of large outlying eigenvalues and sending small eigenvalues to zero in the course of training <cit.>. Note, indeed, that near a stationary point, the regularizer's step from <ref> is small (essentially zero) in all eigendirections of the Hessian of the loss corresponding to large eigenvalues for which cλ ≫ 1 but ηλ ≤ 1. In contrast, along the eigendirections associated with small positive eigenvalues, it is considerably larger. This implies that SGD without replacement implicitly regularizes mainly along the eigenspaces of the small and negative eigenvalues of the Hessian, which are considered to control (i) how the model overfits and (ii) the effective complexity and sparsity of the model. There, the regularizer tends to decrease the variance 𝔼_{z ∈ D}[‖∇L(θ, z) - ∇L(θ, D)‖²] of the loss gradients over the data points in two ways: * The algorithm finds better-fitting stationary points by reducing the squared residual on every data point. Such points are often global minima; or * The algorithm searches for points with smaller ‖∇_θ model(θ, x)‖², usually indicating flatter stationary points. Furthermore, in various classification tasks, the regularizer corresponds to a weighted trace of the Fisher matrix (see <ref> and <ref>). This potentially explains and formalizes why <cit.> empirically observed that a higher effective learning rate η/b (corresponding to bigger steps on the regularizer in Theorem <ref>) better regularizes the trace of the Fisher matrix in vision tasks. This generally leads to consistent generalization improvements and makes the training less prone to overfitting and memorization <cit.>. In particular, thanks to <ref>, we prove that, once most training points are correctly classified, SGD without replacement effectively minimizes a weighted trace of the Hessian:
trace( S · ∇²L(θ, D) ).
This is the first mathematical result, to the knowledge of the author, that explains and formalizes the observation of <cit.>.
In particular, this potentially explains the frequent observations that SGD tends to discover minima with a sparsified Hessian, characterized by a few outlying large eigenvalues (although smaller than η^-1) and many smaller eigenvalues near zero, see, e.g., <cit.>. §.§.§ Escaping Saddles Theoretical works indicate that neural networks' loss landscapes have plenty of strict and high-order saddle points, see, e.g., <cit.>. Yet, surprisingly, SGD does not typically get trapped at these points in real-world applications <cit.>. Suppose θ is a strict saddle point for the loss L(θ, D) with negative eigenvalue λ < 0. We know that noise-injected GD and SGD with replacement escape a neighborhood of this saddle in, respectively, O(λ^-2) and O(λ^-3.5) steps under the so-called dispersive noise assumption <cit.>. A key consequence of Theorem <ref> is that SGD without replacement escapes these saddle points much faster than these algorithms, taking only O(λ^-1) steps. The reason is that, as we see in Theorem <ref>, the correlation between batches in SGD without replacement leads to a non-zero effective drift in the dynamics. In contrast, algorithms such as noise-injected GD and SGD with replacement are equal, on average, to vanilla full-batch GD and escape saddles only due to diffusive effects coming from the noise. More precisely, suppose the Hessian eigenvector with eigenvalue λ ≤ 0 has non-zero scalar product u with 𝔼_{z ∈ D}[∇²L(z)∇L(z)] (i.e., there exists at least one data point whose regularizer's gradient has non-zero overlap with a negative eigenvector of the Hessian), and suppose the third derivative of the loss is locally bounded along the trajectory. Then the step the algorithm makes on the regularizer at every epoch makes the expected trajectory of SGD escape the saddle in approximately
#epochs ∼ (ln(η) + ln(u)) / (cλ).
In summary, we believe we made an important step in understanding why and how SGD without replacement escapes saddles so fast in practice. The reason is that it may simply not be affected by saddles: the saddles of the loss often are not saddles for the loss plus the regularizer. This is a fundamental difference from SGD with replacement, which is unbiased and thus escapes saddles only thanks to the diffusion of the noise. However, other points that were not saddles for GD may be saddles for SGD without replacement. §.§.§ Escaping the Edge of Stability Empirical work shows that the Hessian often steadily increases along the trajectories of gradient flow and gradient descent <cit.>. In the case of GD, this progressive sharpening stops only when the highest eigenvalue λ_max of the Hessian reaches 2/η, the so-called edge of stability. Then GD starts oscillating along the eigenvector of λ_max while still converging along other directions <cit.>. However, <cit.> also observes that this is not the case for SGD without replacement: it follows the trajectory of GD leading to the edge of stability only up to a point where λ_max stops increasing. Similarly, <cit.> observed that, both in regression and in vision classification tasks, the trajectory of SGD without replacement aligns with that of GD for a while, until a breaking point where it deviates and diverges from it. We make an important step towards explaining and formalizing these observations in <ref>.
We prove that if there are at least two big eigenvalues and the covariance of the noise is non-zero along their eigendirections, then there exists a value, denoted by α_EoS, such that when λ_max > η^-1 + α_EoS, the size of a step on the regularizer of <ref> becomes bigger than k steps of GD. This may effectively lead the expected trajectory away from that of GD in most regimes. Agreeing with the empirical studies <cit.>, this phenomenon consists of a quick phase transition. Moreover, this breaking point arrives earlier for bigger effective learning rates η/b and later for smaller ones; indeed,
α_EoS = (b/η) · (positive quantity).
§.§.§ With vs Without Replacement A subtle, long-standing question is whether the fact that batches are disjoint and not sampled i.i.d. practically matters; in other words, whether the induced dependency is strong enough for the trajectories of SGD without replacement to get attracted to different minima than SGD with replacement or noise-injected GD. This is crucial to understand because, although SGD without replacement is faster and widely used in practice, most theoretical developments focus on GD or on algorithms with independent steps. We believe this article makes a significant step towards understanding whether these algorithms' trajectories qualitatively differ. Our findings reveal qualitative differences in the trajectories of SGD with and without replacement through the loss landscape (<ref>). While they may not necessarily converge to different minima, how they traverse the flatter areas of the landscape is distinct. This is in line with the experiments in <ref> and with how these algorithms handle saddle points. Precisely: * Just as in the neighborhood of strict saddles (see <ref>), SGD without replacement moves faster than SGD with replacement in regions of parameter space where the loss is nearly constant (<ref>). This occurs because the implicit regularization is non-zero already at the level of the average trajectory (see Theorem <ref>). * We observe that SGD without replacement exhibits smaller oscillations (has smaller variance) both during training and near a loss minimum (see <ref>). In some settings, the variance is orders of magnitude smaller (<ref>). This observation agrees with <cit.>, who showed that for strongly convex objectives SGD without replacement can have variance O(η²) versus a variance of O(η) for SGD with replacement. * The regularizer of <ref> penalizes the variance of the gradients, i.e., the variance of the mini-batch noise, along the directions corresponding to smaller Hessian eigenvalues. SGD with replacement may exhibit similar behaviors, yet due to an effect of a different nature. In conclusion, our results show that SGD without replacement implicitly regularizes by biasing the dynamics towards areas with lower variance. This effect results from the dependency between steps, manifesting as a drift-like phenomenon, distinct from effects attributable to diffusion or, e.g., Fokker-Planck arguments. We demonstrate that this leads to a form of regularization, enabling the algorithm to navigate through flat areas more quickly and with fewer oscillations than expected. §.§ Outline of the Remainder of the Article We start with an overview of the problem in <ref>. Specifically, <ref> explores how neural networks are trained, the reasons for such approaches, and introduces SGD without replacement. In <ref>, we discuss our goals and where they stem from.
<ref> sheds light on the unusual mathematical challenges that we face while tackling the problem, while also reviewing the literature. We conclude this part with <ref>, in which we discuss in what regime neural networks are trained; a relevant theory must explain what happens in this scenario. Following this introductory section, we present our main result in <ref>, which we later discuss more deeply in <ref>, with subsequent sections delving into its implications. In particular, we highlight the way SGD without replacement regularizes the Hessian in <ref>, we investigate the way SGD escapes saddles in <ref>, the way it escapes the edge of stability in <ref>, and the differences between the two variants of SGD in <ref>. We conclude the paper with a discussion of the applicability and the limitations of our results (<ref>), the conclusions, and future directions. §.§ Acknowledgements I want to thank Prof. Boris Hanin for his crucial support and help. A special thanks to Alex Damian, Prof. Jason D. Lee, and Ahmed Khaled for invaluable discussions that were key to my understanding of the topic and the development of this project. I thank Prof. Bartolomeo Stellato, Prof. Jianqing Fan, Giulia Crippa, Arseniy Andreyev, Hezekiah Grayer II, Ivan Di Liberti, Valeria Ambrosio, and Camilla Beneventano for their valuable advice and comments at various stages of this project. Special thanks to Prof. Misha Belkin, as the inspiration for this project was sparked during one of his mini-courses. I also warmly thank the participants and organizers of the "Statistical Physics and Machine Learning Back Together Again" workshop for the meaningful dialogues that have profoundly shaped this paper. § THE PROBLEM §.§ Training Neural Networks and the SGD How we train neural networks and why. Neural networks are commonly trained by optimizing a loss function using SGD without replacement, see, e.g., <cit.>, and its more refined variations like Adam <cit.>. SGD without replacement is indeed the default in widely utilized libraries such as PyTorch[<https://pytorch.org/docs/stable/optim.html>] and TensorFlow[<https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/experimental/SGD>]. Its widespread use results from early machine learning research that highlighted its practical generalization capabilities and its competitive computational complexity <cit.>. Indeed, SGD without replacement is much faster, and usually better, than gradient descent and SGD with replacement. Precisely: * It converges in fewer steps than, e.g., SGD with replacement on many practical problems, see <ref> and <cit.>. * The step itself is much faster: it just needs direct access to the memory, not sampling as for SGD with replacement <cit.>. For instance, in strongly convex settings SGD without replacement has better convergence than SGD with replacement. In particular, if the number of epochs is bigger than O(√n), then the convergence rate of SGD without replacement is faster and, asymptotically, is O(1/time²), while that of SGD with replacement is O(1/time) <cit.>. Additionally, the oscillation at convergence is often smaller for SGD without replacement, frequently as low as O(η²) compared to O(η) for SGD with replacement, see <ref> and <cit.>. §.§ Implicit Regularization to Explain Generalization Generalization. Two primary objectives of machine learning theory are to understand why neural networks generalize and how they internally represent the data and functions they learn.
Traditional generalization theories face various challenges in deep learning, see, e.g., <cit.>. Among them are overparameterization <cit.>, benign overfitting <cit.>, and the double descent phenomenon <cit.>. Consequently, these challenges necessitate a re-conceptualization of generalization theory, probably an optimization-dependent one <cit.>. Implicit Regularization. Indeed, an intriguing observation is that, in many relevant settings, different optimization procedures all converge to global minima of the training loss, yet they consistently lead to different levels of test loss, i.e., to neural networks that represent functions of different kinds of regularity, see, e.g., <cit.>. Moreover, <cit.> and previous works note that, empirically, the features of the optimization procedure and trajectory are among those that best correlate with generalization. This motivated the machine learning and optimization communities to shift their interest: from studying convergence rates of algorithms in regular (such as convex) settings, to investigating the location of convergence in non-convex landscapes with multiple stationary points; that is, the implicit effect of the optimizer. §.§ Previous Work and Challenges For the reasons above, part of the research community has focused on understanding the role of the optimizer and its interaction with the optimization landscape. The primary objectives are to (i) gain theoretical insight into the role of noise, the interplay between algorithms and particular landscapes, and the effect of the discretization, and (ii) develop improved algorithms with better performance. The three ingredients that play a role in determining the training trajectory are: * The landscape: the properties that the trajectory of gradient flow (GF) also has. * The discretization: the phenomena that the GD dynamics shows but the GF dynamics does not. * The noise: what changes from GD to SGD, or from GF to a related SDE. There are various important mathematical challenges in performing an analysis of the trajectory of SGD. Most of them concern the fact that, in this setting, some very standard assumptions are not satisfied. We introduce and list them in what follows. The role of the landscape. Some phenomena occurring during the training of large-scale machine learning models are influenced more by the geometry of the landscape than by the action of discretizing the gradient flow or by the noise. At first sight, in modern machine learning, the loss landscapes really seem not to satisfy any standard regularity condition: (i) they are not even locally convex <cit.>; (ii) many spurious local minimizers are present <cit.>; and (iii) there are also many strict and high-order saddles <cit.>, even though the algorithms used empirically tend to skip them <cit.>, <cit.>, <ref>. This means that, generally, it is not possible to leverage a strong characterization of the geometry of the manifold on which the trajectory lies, unlike, e.g., in convex optimization. * The only feasible way to analyze the trajectory is locally. * Moreover, the local analysis may have no implications for any global convergence. That said, researchers have investigated the properties of the landscape and the behavior of GF, GD, and SGD that is particular to these landscapes <cit.>. One difficulty in dealing with the landscape is, e.g., that it is not always possible to control the effect of nonlinearity.
For instance, in ReLU networks, predicting the landscape after the activation pattern changes is challenging. Moreover, it has been proven that for common losses the gradient flow is guaranteed to converge towards infinity, but only in certain "nice" directions <cit.>. Analogously, a well-known effect due to the shape of the landscape is progressive sharpening <cit.>, where the largest eigenvalue of the Hessian steadily increases during training until reaching extremely high values, of order O(η^-1). A very interesting phenomenon known as mode connectivity has also been observed: local minimizers, and stationary points in general, of a neural network's loss function are connected by simple paths <cit.>, particularly in overparameterized models <cit.>. All this means that to optimize these functions we need algorithms that can escape saddles quickly and can travel fast through flat areas towards a better-generalizing minimum. GD does not do this. SGD, instead, empirically seems particularly suitable for these landscapes (<ref>), as our results show, e.g., in <ref>. The role of the discretization. Some of the effects of SGD are due to the discretization process. A moderately large learning rate is indeed empirically observed to yield better generalization performance <cit.>. This means that the effects of discretization and noise are generally benign. These effects include the one described in <cit.>, known as the edge of stability: often GD will start oscillating around the manifold of minima instead of following the gradient flow, while still steadily converging <cit.>. All this suggests that a relevant analysis must allow for: * A non-vanishing yet finite learning rate. * A large Hessian, or alternatively, (learning rate) · Hessian > 1. In classical numerical analysis, the effects of discretizing differential equations have been studied for a long time. A useful tool for this <cit.> is backward error analysis, introduced with the pioneering work of <cit.>. While in the 1990s it was used to study the stability of discretizations, these techniques have recently been used to tackle the implicit regularization of GD <cit.>, SGD without replacement <cit.>, SGD with momentum <cit.>, and Adam <cit.>. Unfortunately, this line of work usually applies only to small step sizes. This article attempts to get around this limitation. The role of the noise. The noise of SGD has been considered by numerous authors as a potential factor explaining generalization in neural networks. Empirically, both the size of the noise <cit.> and the shape or direction of the noise play crucial roles <cit.>. That said, despite extensive research on this phenomenon, it remains an enigmatic aspect of deep learning theory and needs further investigation. * The noise at every step is not Gaussian, does not admit a lower-bounded variance, etc. In particular, it vanishes at the global minima and it has a shape and a structure. What we do know from physics and mathematics is that having random diffusive fluctuations implicitly biases the dynamics towards flatter areas. This is a well-known phenomenon in physics, called thermophoresis, and it is mathematically explained with Fokker-Planck-like equations. There are many works in the ML community about this kind of effect, most notably <cit.>. That said, these papers are about dynamics where the noise is regular and well-behaved, e.g., every step is independent, the noise does not vanish along the trajectories, etc.
In particular, they deal with injections of Gaussian noise or with dynamics assimilable to geometric Brownian motions. While some of these results are, in some settings, claimed to apply to SGD with replacement, to the knowledge of the author this has been proven neither mathematically nor empirically. Moreover, as far as we know, there exists no diffusion-related result applicable to SGD without replacement, nor any empirical observation suggesting that that may be the case. Probably the most common approach to deriving the effect of the noise is via the limiting SDEs derived from gradient flows. This is the case of, e.g., <cit.>. Unfortunately, the SDE approximation limit is always ill-posed <cit.> and, on top of that, never behaves as SGD except in extremely particular cases, such as very small learning rates for scale-invariant neural networks in a setting in which the variance is bigger than the gradient <cit.>. Moreover, <cit.> questioned the applicability of the CLT in this context, so we are not even sure in what terms we can speak of these diffusion-powered effects. On top of all the above, <cit.> showed that differently shaped noises (Langevin dynamics, label noise, etc.) converge to different minima even when acting through the effect of diffusions. Beyond these limitations of the approach, all this work is only about SGD with replacement. Finally, <cit.> provides further empirical and theoretical evidence that this SDE approach should not work in explaining the effect of SGD without replacement. Moreover, <cit.> show that even in the case of diffusive independent noise at every step, such as label noise injection or SGD with replacement, the implicit regularization effect is due to drifts coming from the higher-order terms of the Taylor expansion, not from the diffusion-powered regularization. Thus, to understand the true impact of SGD noise on neural networks, a relevant analysis should account for the fact that: * The noises of different steps are neither independent nor centered. In particular, the randomness of SGD without replacement is smaller than what people expect. In SGD without replacement, the noise comes from reshuffling the dataset once every epoch, not from sampling at every step. The batches are disjoint, thus dependent, and given the first k-1, the k-th is deterministic. Another way to look at this is to think that our steps, or the observations of the random variable "mini-batch gradient", are exchangeable but not independent. There exists plenty of work on the (dis)similarity of a set of exchangeable vs independent random variables, starting with the De Finetti theorem. §.§ The Real-World Regime The product c := η·k between the learning rate and the number of steps in an epoch plays a crucial role in the effect of SGD. Empirically, for instance, studies found that the size of the SGD effect scales with c, or analogously with η/(b-1) ≈ c/n. This is the so-called linear scaling rule in the literature <cit.>. Predictably, c appears in our regularizer too, see <ref> or <ref>, for instance. From a theoretical point of view, the reason is that we can rewrite the Taylor expansion as a sum of products of c times the derivatives. Precisely, the factors appearing are of the form c·∇^i L, so for instance the first terms can be rearranged as
-ηk∇L + η² C(k,2) ∇²L∇L - η³ C(k,3) (∇²L)²∇L + …,
where C(k,m) denotes the binomial coefficient. In every term, η and k appear multiplied to the same order; thus this can be rewritten approximately as
-c∇L + (c²/2) ∇²L∇L - (c³/6) (∇²L)²∇L + …,
with an approximation error of O(1/k).
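On a quadratic loss, this rearranged series can be summed in closed form (for L(x) = λx²/2 it equals x·(e^{-cλ} - 1)), which gives a quick numerical sanity check of the O(1/k) claim. A sketch of ours, with illustrative values:

    import math

    lam, c, x0 = 2.0, 1.0, 1.0             # curvature, effective step size c = eta*k, start

    def gd_displacement(k):
        eta, x = c / k, x0
        for _ in range(k):                 # k GD steps on L(x) = lam * x**2 / 2
            x -= eta * lam * x
        return x - x0

    limit = x0 * (math.exp(-c * lam) - 1)  # sum of the c-only series above
    for k in (10, 100, 1000):
        d = gd_displacement(k)
        print(k, d, abs(d - limit))        # at fixed c, the gap shrinks like 1/k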
For this reason, most analyses require c = η·k to be much smaller than 1, or analogously c‖∇L‖, c‖∇²L‖ ≪ 1. This is the case, for instance, of <cit.>. In practice, however, this is usually not the case, and c or c‖∇²L‖ are not small. As an example, the MNIST and CIFAR-10 datasets have, respectively, n = 60,000 and n = 50,000 training examples. With a batch size b of 32 or 64, we would have values of k ranging between 800 and 1900, so of order of magnitude 10³. Common choices for η range from 1 or 0.1 at the beginning of training, decreasing to 0.01 or 0.001 towards the end. This results in values of c = η·k ranging from hundreds initially to values between 0.8 and 10 later on.

Dataset | Model | Training set size n | Step sizes η | Batch size b | c = η·k
MNIST [1] | MLP | 60k | 0.1 - 0.01 | 32-64 | 190 - 9.4
CIFAR-10 [2] | DenseNet-BC-190 [4] | 50k | 0.1 - 0.001 | 64 | 78 - 0.8
CIFAR-100 [2] | ResNet-BiT [5] | 50k | 0.03 - 0.0003 | 4096 | 0.36 - 0.004
ImageNet [3] | ResNet152 [6] | 1.2M | 0.1 - 0.001 | 256 | 470 - 4.7

[1] <http://yann.lecun.com/exdb/mnist/> [2] <https://www.cs.toronto.edu/~kriz/cifar.html> [3] <cit.> [4] <cit.> [5] <cit.> [6] <cit.>

Note that in most of the cases in the table above, SGD is equipped with momentum with β = 0.9. This implies that the effective step is bigger: η(1 + 0.9 + 0.81 + …) = 10·η, making the effective c ten times bigger. The only element of the form c∇^i L that we can be sure tends to become small during the middle and later stages of training is thus ηk∇L, as the gradient approaches zero and the learning rate η gets annealed. However, the same cannot be said of the second or the third derivative <cit.>. § IMPLICIT BIAS OF SGD WITHOUT REPLACEMENT Many past studies have shown how different algorithms perform in different settings, and to which minima they converge. We want to understand to what extent these results can or cannot be applied to SGD without replacement. After setting the notation in <ref>, we investigate the expectation of the deviation between trajectories of SGD without replacement and other algorithms in <ref>. Technically, our analysis works as follows, see <ref>: * Deviation: We first compute the deviation between trajectories (<ref>). * Moments: Then we determine its moments with the machinery developed in the appendix (<ref>). §.§ Notations We denote by D = {z_i}_{i = 1, 2, …, n} ⊆ V the training set of size n. We have a parametric loss function (ϑ, z) ↦ L(ϑ, z) ∈ ℝ that takes as input the parameters ϑ ∈ Θ and a data point z ∈ V and outputs a real number, admitting 3 weak derivatives in the parameters ϑ. For every set B ⊆ V we define L(ϑ, B) := (1/|B|) ∑_{z ∈ B} L(ϑ, z). The goal of the optimization procedure is to find a minimum θ* ∈ Θ of the objective function L(ϑ, D). For readability, we will often omit the inputs of the function L. Precisely, whenever we omit the parameter ϑ, we are evaluating the loss at the beginning of the epoch; we will denote the value of the parameters at the beginning of the epoch by θ. Whenever we omit the set of inputs, we are evaluating the loss over the whole training set D. Moreover, all the derivatives we take are in the parameters ϑ, and all the expectations are empirical expectations over a set B ⊆ V. As an example, L means L(θ, D) and ∇L(B) means ∇_θ L(θ, B). Moreover, we will denote by θ_i^SGD the parameters after the i-th step of SGD without replacement or, analogously, of Shuffle Once[Shuffle Once is the version of mini-batch SGD where the training set is shuffled at the beginning of training, partitioned, and at every epoch the same batches are used in the same order.]
(SO) starting from θ_0^SGD = θ, with learning rate η > 0. In this section, we also examine other algorithms, such as SGD with replacement, GD, or GD with noise injected. Analogously to SGD without replacement, we will denote by θ_i the parameters after the i-th step of the other algorithm under examination, with initialization θ_0 = θ and learning rate η > 0. Let b ∈ ℕ be the batch size and k ∈ ℕ the number of optimization steps considered at once. Generally, k ≤ n/b will hold, and k can be thought of as the number of batches, or the number of steps in an epoch. We denote by B_i ⊆ D the batch used in the i-th step, i ≤ k. Denote c := ηk and H := (c²/(2n)) ( 𝔼[(∇²L)²] - (∇²L)² ). Finally, we denote by A^† the Moore-Penrose inverse of the matrix A and, for a PSD matrix S, we denote ‖v‖_S² := v^⊤ S v. For a vector or matrix, we denote the components with indices in subscripts; e.g., for A ∈ ℝ^{d×d} and i, j ≤ d, we denote by A_{i,j} the (i,j) component of the matrix A. §.§ The Trajectories are Biased One epoch or less at a time, we analyze the optimization dynamics. We show that, relative to full-batch GD or SGD with replacement, SGD without replacement and Shuffle Once implicitly add a step on a regularizer that penalizes a function of the variance of the loss gradients. It is important to note that this is not the first such result. Indeed, <cit.> showed earlier that SGD has an implicit bias and gave some insights into its generalization benefits, while <cit.> showed that in a small learning rate regime, precisely c∇²L ≪ 1 and H ≪ 1, SGD without replacement in expectation follows a gradient flow on a modified loss. The regularizer we find is essentially the sum of the variances of the gradient noise measured in a loss-dependent norm determined by the Hessian of the loss. Precisely, <ref> analyzes the deviation of any mini-batch algorithm from GD, while <ref> and <ref> specialize to SGD without replacement and to Shuffle Once. Precisely, <ref> works in the general case, while <ref> is cleaner but specializes to the setting in which η∇²L has only small and big eigenvalues, none in between. In the notations of <ref>, let θ_i be the parameters after i steps of any algorithm with independent steps centered on the GD step, these being, e.g., SGD with replacement, GD, label-noise GD, Gaussian-noise-injected GD, etc. Let us assume that ηk‖∇L(·)‖ ≪ 1. Then, up to a multiplicative error of size η^α k‖∇L‖ + 1/n with α ≥ 1: in expectation over batch sampling, k steps of SGD without replacement differ from the same number of steps of SGD with replacement or GD, 𝔼[θ_k], by additional preconditioned steps with learning rate η/(b-1) on some regularizers:
𝔼_[expectation over batches' sampling][θ^SGD_k] = 𝔼[θ_k] - η/(b-1) ∑_{i=0}^{k-2} A_i ∇Regularizer_i,
where A_i = (-η∇²L)^i and
Regularizer_i = 𝔼_{z ∈ D}[ ‖∇L(z) - ∇L‖²_{S_i} ] + ∇L^⊤ 𝒮_i ∇L,
where the matrices S_i, i ≤ k-2, and 𝒮_i are series of powers of the Hessian ∇²L and of H. Analogously, let us fix an orthonormal basis {θ_1, θ_2, …} of eigenvectors of the Hessian ∇²L.
In the case in which H ≪ 1 and θ is a stationary point, we can rewrite this bias along the eigenvector θ_i corresponding to the eigenvalue λ_i as
- η/(b-1) ∑_j R_{i,j} d/dθ_i Cov_{z ∈ D}(∇L(z))_{j,j},
where
R_{i,j} = 1/2 ∑_{h=0}^{k+2} (-ηλ_i)^h ∑_{l=h+2}^{k} C(k,l) (-ηλ_j)^{l-h-2}.
Note that the entries R_{i,j} approximately correspond to
R_{i,j} ∼
  c/2 · 1/2 (constant), if cλ_i, cλ_j are small;
  (cλ_i)^{-1} (small), if cλ_i ≫ 0 and cλ_j is small;
  (cλ_j)^{-1} (small), if cλ_i is small and cλ_j ≫ 0;
  (1 + (-ηλ_i)^k)/(cλ_i · cλ_j) (very small or very big), if cλ_i, cλ_j ≫ 0;
  Ω((cλ_j)^{-2} exp(-cλ_j)) (exponentially big), if cλ_j ≪ 0.
We can now leverage this to state a corollary that corresponds to <ref>: In the notations of <ref>, up to a multiplicative error of size η^α k‖∇L‖ + H + 1/n with α ≥ 1: let us work in the coordinates of a basis of eigenvectors {θ_1, θ_2, …} of ∇²L. After k steps of SGD without replacement, we have k steps of GD plus an additional step, as follows. At a stationary point, the step due to the bias on θ_i, corresponding to the eigenvalue λ_i, is
- η/(b-1) ∑_j R_{i,j} d/dθ_i Cov_{z ∈ D}(∇L(z))_{j,j}.
The proofs of these results can be found in the appendix. We will give a deeper look into the nature of the regularizer in <ref> and of the matrices S_i in <ref>. We deal with the error and the regimes in which <ref> applies in <ref>. §.§ Small Learning Rate or Small Hessian Regime Assume that the full-batch Hessian c∇²L is small, that we have arbitrary "variance" H, and that θ at the beginning of the epoch is close to a stationary point. Then we can conclude that the regularizer's step coincides with
- η/(b-1) ∇ 𝔼_{z ∈ D}[ ‖∇L(z) - ∇L‖²_{S_0} ],
where S_0 is
S_0 = (c/4) H^{-1/2} √π erf(√H) ( 2 + 2H^{-1/2}(e^{-H} - 1) - c∇²L ).
When H ≪ 1, this becomes approximately S_0 = (c/4)I, and the regularizer is (c/4) 𝔼_{z ∈ D}[ ‖∇L(z) - ∇L‖² ]. This aligns with prior theoretical findings by <cit.> and empirical findings by <cit.>. More about the exact formula of the matrices S_i can be found in <ref>. § SHAPING THE HESSIAN In this section, we show that SGD without replacement implicitly regularizes the variance of the gradients. We find that <ref> is potentially enough to explain several empirically observed phenomena about implicit regularization, e.g., that: * SGD converges to almost-global loss minima even though spurious minima of the loss exist <cit.>. * SGD shapes the spectrum of the Hessian of the loss, e.g., producing clusters of large outlying eigenvalues and sending small eigenvalues to zero in the course of training <cit.>. We show indeed that it biases the dynamics towards areas where (i) there is lower variance, (ii) the landscape is flatter, (iii) the model better fits the data, and (iv) it oscillates less. Moreover, we later give heuristics about how it regularizes the spectrum of the Hessian/Fisher matrix. §.§ Empirical Observations One of the papers that started this whole line of research on the implicit bias of algorithms was <cit.>. They observed that in experiments a lower batch size generally led to a wider (or less sharp) minimum. Precisely, they observed that the minima found with lower batch sizes have a Hessian with a smaller number of large eigenvalues, or a smaller trace. Analogously, we find that the small eigenvalues that do not carry information about the problem get zeroed out. A recent paper, <cit.>, shows experimentally that large learning rate SGD has an effect similar to penalizing
trace( 𝔼[∇L∇L^⊤] ) = 𝔼[ ‖∇L‖² ],
which in the image classification tasks in which they work coincides with the trace of the Fisher matrix.
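This identity between the trace of the empirical Fisher matrix and the mean squared gradient norm, and its relation to our regularizer, can be checked in a few lines on any matrix of per-example gradients. A minimal sketch of ours, with synthetic gradients (illustrative only):

    import numpy as np

    rng = np.random.default_rng(1)
    G = rng.normal(size=(512, 20))            # rows: per-example loss gradients (synthetic)
    fisher = G.T @ G / len(G)                 # empirical Fisher matrix, E[grad grad^T]
    g_bar = G.mean(axis=0)                    # full-batch gradient
    print(np.trace(fisher))                   # equals E[||grad||^2] by linearity of the trace
    print((G ** 2).sum(axis=1).mean())
    print(np.trace(fisher) - g_bar @ g_bar)   # trace of Cov(grad): the regularizer for S = I;
                                              # near a stationary point g_bar ~ 0, so the
                                              # two traces coincide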
In related settings, the Fisher matrix has been shown to approximate the Hessian during training; in particular, there is an overlap between the top eigenspaces of the Hessian and its eigenspaces <cit.>. Furthermore, <cit.> shows that, in practice, penalizing it consistently improves generalization, reduces memorization, and regularizes the trace of the final Hessian. Moreover, the advantages of penalizing the trace of the Fisher matrix are even stronger in the presence of noisy labels. There exist multiple other studies along these lines that corroborate our findings. Some recent ones are <cit.>, who observe that a bigger learning rate "prevents the over-greedy convergence and serves as the engine that drives the learning of less-prominent data patterns"; this aligns with the regularization phenomenon that we unveil. Analogously, <cit.> showed that strongly regularized full-batch GD performs as SGD does, explicitly using a penalization very similar to the regularizer we find, in line with <cit.>. §.§ The Regularizer SGD without replacement regularizes the noise covariance in the eigendirections corresponding to smaller eigenvalues. Near a stationary point, the regularizer step is approximately
η/(b-1) · ∑_{i=0}^{k-2} (η∇²L)^i · ∇_θ( trace( S_i · Cov_{z ∈ D}( ∇L(z) ) ) ).
We can thus interpret it as a sequence of steps, one for each i, of size η/b, with preconditioning matrix (η∇²L)^i, on the regularizers
𝔼_{z ∈ D}[ ‖∇L(z) - ∇L‖²_{S_i} ] = trace( S_i · Cov_{z ∈ D}( ∇L(z) ) ).
The matrices S_i from <ref> are small (essentially zero) in all eigendirections of the Hessian of the loss corresponding to large positive eigenvalues. In contrast, in the eigendirections associated with small positive eigenvalues, the S_i are considerably larger. Precisely, if for a certain eigenvalue λ of the Hessian ∇²L we have |cλ| ≪ 1, we can essentially look only at the term with S_0, and the approximate regularizer along the eigendirection of λ coincides with a penalization of the noise covariance, or analogously of the norm of the gradients,
(c²/4) 𝔼_{z ∈ D}[ ‖∇L(z) - ∇L‖² ] = (c²/4) trace( Cov_{z ∈ D}( ∇L(z) ) ).
On the other hand, in the eigendirections of big eigenvalues, cλ ≫ 1 but ηλ ≤ 1, the regularizer still penalizes the noise covariance, but with an intensity that is continuously decreasing in λ. The size of the step shrinks as O(λ^-1), becoming essentially very small when the big eigenvalues approach values of order O(η^-1). This implies that SGD without replacement implicitly regularizes mainly along the eigenspaces of the small and negative eigenvalues of the Hessian, which are considered to control (i) how the model overfits and (ii) the effective complexity and sparsity of the model. §.§ Variance, Global Minima, and Flatter Models We showed that the regularizer acts by reducing the variance in the eigendirections of the smaller eigenvalues. We discuss here a correspondence between optimizing the regularizer and finding better minima:
smaller variance ⟺ smaller norm of the gradients ⟺ better-fitting and/or flatter minima.
In the previous subsection we showed that when θ is close to a manifold of stationary points, the regularizer pushes the trajectory towards areas with lower variance. It is important to note that the quantity we dealt with has many different interpretations:
trace( 𝔼[∇L(z)∇L(z)^⊤] ) = 𝔼_{z ∈ D}[ ‖∇L(z) - ∇L‖² ] (the noise variance) = 𝔼_{z ∈ D}[ ‖∇L(z)‖² ] (the gradients' norm),
where the last two equalities hold at stationary points, where ∇L = 0. In particular, let the loss be of the form L = ℓ∘f and denote by ∇ℓ = dℓ(z)/dz|_{z = f(θ,x)} the residuals.
E.g., L(θ, z=(x,y)) = ‖f(θ, x) - y‖², with ∇ℓ = f(θ,x) - y, for the MSE. We have that
𝔼_{z ∈ D}[ ‖∇L‖² ] = 𝔼_{z ∈ D}[ ‖∇_θf^⊤ ∇ℓ‖² ].
So at every step, we penalize what essentially is the sum over the output components i of products of [∇f]_i² and [∇ℓ]_i², i.e., of the residuals and of the norm of the gradient of the model on the components of the output. When the dimension of the output is one, this is exactly ∇ℓ² ‖∇_θf‖². This corresponds to lowering the size of the ∇ℓs and/or the size of ∇_θf. Thus, if we lower the variance we lower this product, and if we lower this product we lower the variance. This implies that, with different weights given by the PD matrices S_i, the regularizer pushes towards areas where either ‖∇ℓ‖² or ‖∇_θf‖² are smaller. The regularizer pushes towards locations in parameter space where either: * The squared residuals ‖∇ℓ‖² are smaller, i.e., it escapes towards a better-fitting stationary point, such as a global minimum, or * The gradients of the model ‖∇f(θ,x)‖² are smaller. This, in turn, may correspond to wider valleys and to a smaller norm of the gradient in the inputs x of the function f, thus potentially finding a better-generalizing minimum by making the function represented by the neural network less oscillating and less prone to overfitting and memorization. This effect was not observed in previous works, to the knowledge of the author, and is the opposite of what we expect from full-batch GD, since <cit.> observed that along the trajectory of GD ‖∇f(θ,x)‖² keeps steadily increasing. Nonetheless, it agrees with empirical observations. §.§ Regression and Oscillations Variance and Hessian in regression. Note that, in the case L = ℓ∘f, the Hessian of the loss ∇²L consists of two parts; precisely:
∇²L = ∇²ℓ ∇f∇f^⊤ + ∇ℓ ∇²f.
The first summand has linear combinations of the ∇f as eigenvectors and linear combinations of ‖∇f‖² as eigenvalues. If the ∇fs are orthogonal, those are exactly the eigenvalues and eigenvectors; this is the case of orthogonal data in shallow linear networks, for example. Analogously, the second part is of size ∇ℓ, and when the algorithm converges it is generally observed to vanish; in particular, it is 0 at a global minimum. This means that SGD without replacement implicitly regularizes the Hessian matrix. Precisely, it prefers sparse Hessians to approximately sparse Hessians, making the already small eigenvalues smaller. Moreover, this also implies a shrinking of the input gradient of the function, ∇_x f(θ,x), as the two are related (<ref>). §.§ Classification: Trace of the Hessian Towards explaining the empirical observations. <ref> potentially explains and formalizes why <cit.> empirically observed that a higher effective learning rate η/b (corresponding to bigger steps on the regularizer) better regularizes the trace of the Fisher matrix in vision classification tasks. In particular, thanks to <ref>, we prove that, once most training points are correctly classified, SGD without replacement effectively minimizes a weighted trace of the Hessian. Indeed, the regularizer corresponds to
trace( S · 𝔼[∇L(z)∇L(z)^⊤] ) = trace( S · ∇²L(θ, D) ),
where 𝔼[∇L(z)∇L(z)^⊤] is the empirical Fisher matrix. This is the first mathematical result, to the knowledge of the author, that explains and formalizes the observation of <cit.>.
Moreover, this potentially explains the frequent observations that SGD tends to discover minima with a sparsified Hessian, characterized by a few outlying large eigenvalues (although smaller than η^-1) and many smaller eigenvalues near zero, see, e.g., <cit.>. The Hessian corresponds to the Fisher matrix. <cit.> showed that the trace of the Hessian is approximately the trace of the Fisher matrix along the learning trajectory of image classification tasks. The surprising part of <cit.> is that this holds along most of the trajectory, and not only in its final part, as shown by the following theorem. This means that generally, in the first part of training, SGD already correctly learns the vast majority of the labels, and the major part of the subsequent learning is just regularization by SGD and the other elements of the training. [Theorem] Assume all the training data are correctly classified and we use the cross-entropy loss. Then
Fisher = 𝔼_{z ∈ D}[ ∇L∇L^⊤ ] = ∇²L = Hessian.
In particular, the local minima of trace(S · Fisher) coincide with the local minima of trace(S · Hessian). The regularizer penalizes the trace of the Hessian. <ref> and <ref> imply that when θ is around the manifold of minima of the loss, the regularizer of <ref> acts directly on the Hessian. Indeed, in this setting it corresponds to a number of steps of size η/b, preconditioned by (η∇²L)^i, on
trace( S_i · Cov(∇L) ) = trace( S_i · ∇²L ).
This means that SGD without replacement biases the dynamics by regularizing the trace of the Hessian, trace(∇²L), and in particular it regularizes the small eigenvalues more heavily. Concluding. SGD without replacement does not simply penalize the trace of the Fisher matrix or of the Hessian, but rather targets a weighted trace, with a focus on diminishing the smaller eigenvalues while leaving the larger ones relatively unaffected. This approach aligns with empirical findings where the Hessian's spectrum typically features a few large eigenvalues amidst many smaller ones (bulkization) <cit.>. The larger eigenvalues, carrying crucial information about the problem <cit.>, are preserved, whereas the smaller eigenvalues, often associated with overfitting and memorization, are effectively targeted by this regularization approach. Consequently, this mechanism may enhance generalization beyond a plain trace penalty on the Hessian or Fisher matrix. Additionally, this agrees with the observations in <ref> and with what is illustrated in <ref>. § SADDLES We believe we make an important step in understanding why and how SGD without replacement escapes saddles so fast in practice. The reason is that the regularizer is simply not affected by most saddles. This is a fundamental difference from SGD with replacement and noise-injected GD, which escape saddles only thanks to the diffusion of the noise. §.§ Literature Review Saddles are there. Many theoretical works deal with the presence of saddles in the loss landscapes of neural networks. For instance, <cit.> proved that in the landscapes of shallow linear networks, all the stationary points that are not global minima are saddles. Later, <cit.> proved the same for deeper networks, under more general assumptions on the data, also showing the presence of higher-order saddles. Subsequently, plenty of work, such as <cit.>, characterized large families of saddles (and local minima) in the loss landscape, for example in terms of stationary points of embedded smaller neural networks. Escaping saddles with noise.
Many influential works on the optimization side observed how SGD often empirically escapes them <cit.>, even though the time required by GD to escape may often be exponential <cit.>. An important number of influential theoretical works tried to explain why and how fast variants of SGD escape (at least the strict) saddles, and to develop new algorithms that provably escape faster. For instance, <cit.> proved that GD almost surely escapes saddles asymptotically, and <cit.> proved that injecting Gaussian noise into the gradients makes GD escape in O(λ^-2) time. A diffusion-powered escape. The conclusion of many works is that a noised version of GD can escape saddles if the noise is dispersive, a concept essentially analogous to the covariance of the noise being positive in an escaping direction. This is, for instance, the case of Gaussian noise injection: with high probability, the noise will eventually shoot the trajectory away from the saddle, far enough that the gradient descent part of the algorithm has a considerable size <cit.>. Following this idea, under the dispersive noise assumption, <cit.>, <cit.>, <cit.>, and later <cit.> proved that SGD with replacement escapes saddles in O(λ^-3.5) time, for proper choices of hyperparameters. The phenomenon described by these works is diffusive in nature: the reason why SGD with replacement escapes lies in the variance term of the i.i.d. noise of each step. We are not aware of work on SGD without replacement escaping saddles. We conjecture, however, that it is possible to obtain a result similar to the ones above. §.§ SGD Without Replacement Escapes Faster We show here that SGD without replacement escapes saddles as well. However, our result is very different in nature from those produced in the past: we discover a drift-powered escaping effect, not a diffusive one. SGD without replacement, unlike the other algorithms, escapes saddles simply because the implicit step on the regularizer does not vanish there. The regularizer biases the trajectory in escaping directions, if any are spanned by the gradients. The news of this section is thus not that SGD may escape saddles, which was already known; the novelty is the way and the speed with which SGD escapes saddles. It is the nature of the effect that makes SGD without replacement escape saddles. SGD without replacement is thus empowered with two weapons against saddles: (i) we believe that, in the case of dispersive noise, it escapes with a dispersive machinery similar to that of <cit.>, although we are not aware of works in this direction; and (ii) the regularizer induces a drift-like effect biasing the trajectory towards escaping directions. The interaction and coexistence of these two effects make SGD without replacement escape faster. Moreover, SGD without replacement escapes even when initialized exactly at saddle points. [Theorem] Let θ be a (possibly higher-order) saddle for the loss L, and let v be an eigenvector of a negative eigenvalue λ < 0 of the Hessian of the loss. Assume
u := (1/n) ∑_{z ∈ D} ⟨v, ∇²L(z)∇L(z)⟩ ≠ 0.
Let us also assume that the third derivative is bounded in a neighborhood of the trajectory. Then SGD without replacement escapes the saddle, i.e., the loss is at least O(1) smaller, after
#epochs > (2ln(η) + ln(u) + 2ln(c) - ln(b)) / (cλ).
Analogously, SGD without replacement travels flat regions, like neighborhoods of higher-order saddles, thanks to the steps on the regularizer. [Theorem] Let v be a vector in the kernel of the Hessian of the loss.
Assume ‖v‖ = 1 and
u := (1/n) ∑_{z ∈ D} ⟨v, ∇²L(z)∇L(z)⟩ ≠ 0.
Let us also assume that the third derivative is bounded in a neighborhood of the trajectory. Then SGD without replacement travels distance 1 in the direction of v, i.e., ⟨θ^SGD_* - θ, v⟩ = 1, in a number of epochs
#epochs = 2b/(ηc²u).
And if escaping directions for higher-order saddles are spanned by the updates, we escape a higher-order saddle in the same amount of time. This is very surprising, as it says that, no matter the order of the saddle, SGD without replacement travels the region at the same speed. [Theorem] Let θ be a higher-order saddle for the loss L, and let v be an escaping direction in the kernel of the Hessian of the loss. Assume
u := (1/n) ∑_{z ∈ D} ⟨v, ∇²L(z)∇L(z)⟩ ≠ 0.
Let us also assume that the third derivative is bounded in a neighborhood of the trajectory. Then SGD without replacement escapes the saddle, i.e., the loss is O(1) smaller, after
#epochs > 2b/(ηc²u),
independently of the order of the saddle. §.§ Where is the Catch? Conflicting results. The findings of this section appear to conflict with previous work demonstrating the difficulty of escaping saddles. A lot of work in the past, indeed, focused on the difficulty of escaping saddles with gradient-based algorithms, highlighting inherent challenges and inefficiencies of gradient-based methods in navigating the landscape of non-convex optimization. For instance, it has been shown that it is NP-hard to find a fourth-order local minimum <cit.>. Moreover, GD has been shown to potentially take exponential time to escape from saddle points <cit.>. This slowdown occurs even with natural random initialization schemes and non-pathological functions. Why SGD does it so fast. SGD without replacement, however, may escape saddle points very quickly, as we showed above. The reason why this makes sense is that it is biased. The bias makes it behave as if the loss were not L anymore, but L plus a penalization P. Thus, in a way, SGD without replacement implicitly sees the landscape differently, and what are saddles for GD on L may not be saddles at all for SGD without replacement. It is important to notice that <ref> could already be proved starting from the main result of <cit.>, in the setting in which that result applies. The limitation. The saddles that we skip are not saddles from the point of view of the algorithm. This, in turn, comes with new challenges. Indeed, while some saddles are not saddles for SGD, some points that were not saddles for the loss may be seen as saddles by SGD; precisely, those points at which the push of the regularizer is exactly the opposite of the push of gradient descent. For those, all the previous negative results in the theory apply. § AT THE EDGE OF STABILITY §.§ Empirical Observations The Hessian grows. Another interesting phenomenon is the edge of stability, as known from <cit.>. Theoretical work in the setting of classification showed that often gradient flow diverges towards infinity in parameter space. Precisely, in certain settings, full-batch GD converges towards infinity, but only in certain particular directions <cit.>. Empirical work similarly shows that gradient flow and gradient descent move towards areas where the Hessian of the loss gets bigger and bigger. Precisely, <cit.> showed that along the trajectories of GD and Adam the largest eigenvalue of the Hessian steadily increases. This stops only when it reaches 2/η for GD and 38/η for Adam, as those are the instability thresholds of the algorithms. SGD induces a smaller Hessian.
§ AT THE EDGE OF STABILITY

§.§ Empirical Observations

The Hessian grows. Another interesting phenomenon is the edge of stability, as known from <cit.>. Theoretical work in the setting of classification showed that gradient flow often converges towards infinity in the parameter space. Precisely, in certain settings full-batch GD diverges to infinity, but along particular directions <cit.>. Empirical work similarly shows that gradient flow and gradient descent move towards areas where the Hessian of the loss gets bigger and bigger. Precisely, <cit.> showed that along the trajectories of GD and Adam the largest eigenvalue of the Hessian steadily increases. This stops only when it reaches 2/η for GD and 38/η for Adam, as those are the instability thresholds of the algorithms.

SGD induces a smaller Hessian. However, <cit.> observed that this is not the case for SGD. <cit.> claims that the reason why SGD does not train at the edge of stability is that the implicit regularizer due to the noise explodes close to the boundary λ_max = 2/η, acting as a log-barrier. <cit.>, instead, observed that in both regression and vision classification tasks the trajectory of SGD aligns with that of GD for a while, until a breaking point where it departs and moves into areas of the parameter space where the trace of the empirical Fisher matrix is substantially lower. This breaking point arrives earlier for bigger effective learning rates η/b and later for smaller ones. In what follows, we formalize this observation.

§.§ Why SGD Does not Reach the Edge of Stability

In summary, what we find is that the divergence after the breaking point observed by <cit.> is due to the bias of SGD without replacement that we unveiled in <ref>. In most regression settings, however, we empirically observed the log-barrier-like effect, as explained by <cit.>. We cannot explain it with our theory, and we conjecture that it is due to higher-order Taylor terms or to the diffusion.

Breaking point: when and why. At the beginning of training, the size of the gradients usually decreases steadily, while the Hessian often grows. This means that, soon after the beginning, the training often enters a regime in which we can apply <ref>. In the case in which there are at least 2 eigenvalues λ_1, λ_2 > η^-1, the regularizer's step from <ref> along θ_1, the eigenvector of λ_1 (see <ref>), takes the following form up to an exponentially small term:

η/(b−1) · [ (c/2) · (1 + (−ηλ_1)^k) / ((cλ_1)(cλ_2)) ] · d/dθ_1 Cov_{z ∈ D}(∇L(z))[2,2] + other terms of the regularizer.

Or, analogously, by exchanging the indices we obtain the update on θ_2. This means the following.

Assume we are in the hypotheses of <ref>. Assume there exist eigenvalues λ_1, λ_2 of the Hessian ∇^2 L with eigenvectors θ_1, θ_2 such that λ_2 > η^-1, u := Var_{z ∈ D}(⟨θ_2, ∇L(z)⟩) > 0, and λ_1 ≥ η^-1 + α_EoS, with

α_EoS := (1/c) |ln( c^2 λ_1 λ_2 ∇L / u )| > (1/c) |ln( c^2 ∇L / (η^2 u) )|.

Then the additional step on the regularizer described by <ref> is bigger than k steps of GD.

The proof of this proposition is immediate after imposing that the quantity in <ref> is bigger than k steps of GD starting from θ. This phenomenon closely agrees with what was observed empirically by <cit.>, where the moment of the phase transition is called "the break-even point", as in the title of that paper.

Dependence on η/b. Moreover, agreeing with the empirical observations of <cit.>, we see that α_EoS(η, k) is a monotonically decreasing function of η/b; precisely, it goes like its inverse:

α_EoS = (1/c) · (positive quantity) = (b/η) · (1/n) · (positive quantity) = (b/η) · (positive quantity).

Phase transition. Moreover, this coincides with a phase transition, not with a slow continuous process, as empirically observed by <cit.>. Indeed, with ϵ := |ln(η)|/c > 0, we have

size of the step on the regularizer = Θ(η · GD step) if λ_1 = η^-1 + α_EoS − ϵ;
Θ(GD step) if λ_1 = η^-1 + α_EoS;
Θ(η^-1 · GD step) if λ_1 = η^-1 + α_EoS + ϵ.

Note that ϵ is usually O(1), hence much smaller than η^-1. For instance (see <ref>, second line of the table), for CIFAR-10 with η = 0.01 and b = 64 we have ϵ = |ln(η)|/c = 0.6, which is smaller than the gain in sharpness due to one epoch.
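As a reminder of where these thresholds come from (a sketch of ours, not an experiment from the paper): on a quadratic with curvature λ, a GD step multiplies the coordinate by 1 − ηλ, which contracts if and only if λ < 2/η. The following few lines make the instability at 2/η tangible.

```python
# Why 2/eta is the GD instability threshold: on L(x) = lam * x^2 / 2,
# one GD step maps x to (1 - eta * lam) * x, a contraction iff lam < 2/eta.
eta = 0.1
for lam in (19.0, 21.0):                  # just below and above 2/eta = 20
    x = 1.0
    for _ in range(50):
        x *= (1.0 - eta * lam)
    print(f"lam = {lam:4.1f}   |x| after 50 steps = {abs(x):.2e}")
```

Progressive sharpening then means that GD drifts towards regions where λ_max approaches this threshold, while, per the analysis above, the bias of SGD without replacement kicks in before the threshold is reached.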
Log-barrier. Regarding the log-barrier phenomenon, if the optimizer has independent steps with a certain variance, we conjecture that a version of the results of <cit.> can be applied to figure out the implicit regularization due to the higher-order Taylor terms. In their settings, the effect of the noise is to implicitly add a regularizer to the loss. These regularizers explode as λ_max approaches the edge of stability and, for label-noise injection <cit.>, act as a log-barrier. Based on our experiments, we believe that this occurs: (i) when <ref> is not applicable, (ii) in scenarios where α_EoS > η^-1, (iii) when the regularizer directs the trajectory into areas of the parameter space that continue to experience progressive sharpening, and (iv) when the remaining terms in the regularizer's step are sufficiently strong and in the opposite direction, effectively canceling it out. However, this will be the focus of further studies.

§ THE NATURE OF THE EFFECT

We discuss here the nature of the effect and how it differs from other effects discovered in the literature. In the following subsections we discuss the two ingredients of the effect: dependency and discretization. Later we discuss the fact that the effect is a drift, not a diffusion-powered effect. We conclude with remarks and observations on the regularizer.

§.§ Discretization

The regularization term that we find in <ref> is the expectation of the quantity in <ref>. The latter is the result of computing the effect of the discretization; it is unrelated to the optimization problem or to the gradient flow itself.

Assume the batches B_1, B_2, …, B_k ⊆ D are fixed. How far apart are the trajectories of mini-batch and full-batch GD after k steps of learning rate η?

We expand k steps of SGD and GD with learning rate η at the beginning of the epoch, and we keep the terms in which ∇L(·) appears to the power 1. Writing the difference between this quantity in the case of SGD and in the case of GD, we obtain the following.

Let B_1, B_2, …, B_k be fixed batches of the dataset D. The trajectory of mini-batch SGD deviates from the trajectory of the same number k of steps of full-batch GD with the same learning rate η by

− η ∑_{i=1}^k ∇L(B_i) + η^2 ∑_{1 ≤ i < j ≤ k} ∇^2 L(B_j) [ ∏_{h=i+1}^{j-1} [I − η∇^2 L(B_h)] ] ∇L(B_i) − (the same terms but full-batch: B_j ↶ D for all j ≤ k),

up to first order in η∇L(B_i), i = 1, 2, …, k.

More details about the computation are in the proof in <ref>, and about the size of the error in <ref>.

§.§ Dependency

The deviation between the trajectories is a sum of products of functions evaluated on different batches. Thus the expectation of the difference between the trajectories of mini-batch SGD and GD is zero when the steps are:

* Centered: every step is a random variable centered at the GD step, e.g., an unbiased estimator of it, as for SGD with replacement, or a gradient with centered Gaussian noise injected.

* Independent: the steps are random variables independent of each other.

Indeed, in that case the expectations of those products are products of the expectations, in particular of full-batch derivatives. This is not the case for SGD without replacement: here the batches are dependent, as they are disjoint. The regularizer we obtain by taking the expectation originates from this dependence. For this reason, <ref> applies when comparing SGD without replacement and SO (Shuffle Once) with any algorithm with centered independent steps, such as GD, SGD with replacement, their label-noise-injected versions, etc. In particular, this regularization effect of SGD without replacement arises solely from the failure of the second assumption above: the independence of the steps.
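The role of the dependence is easy to check numerically; the sketch below (ours; random scalar arrays stand in for per-sample derivatives) enumerates all ordered pairs of distinct indices and confirms that, without replacement, the expectation of a product picks up exactly a −1/(n−1) covariance correction, anticipating the computation in the example below.

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(1)
n = 7
f = rng.normal(size=n)    # stand-ins for per-sample Hessians
g = rng.normal(size=n)    # stand-ins for per-sample gradients

# Expectation of f(z_i) * g(z_j) over ordered pairs with i != j.
lhs = np.mean([f[i] * g[j] for i, j in permutations(range(n), 2)])

# Without-replacement identity: n/(n-1) * E[f] E[g] - 1/(n-1) * E[f g].
rhs = n / (n - 1) * f.mean() * g.mean() - np.mean(f * g) / (n - 1)

assert np.isclose(lhs, rhs)
print(f"E[f g] (dependent) = {lhs:.6f}  vs  E[f] E[g] = {f.mean() * g.mean():.6f}")
```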
Similarly, our analysis can be reworked to unveil the effect of any algorithm that fails either of the two assumptions above. Moreover, we conjecture that other analyses present in the literature that deal with centered independent noise can be applied on top of ours, to the de-biased trajectory. However, this will be the subject of future work.

Example: small learning rate. As an example of the effect of the dependence, let us compute the expectations of all the terms of the sum in <ref> in which just ∇^2 L(B_j) and ∇L(B_i), for some j > i, are taken:

𝔼_{j > i}[∇^2 L(B_j) ∇L(B_i)] = 1/(n(n−1)) ∑_{z_j, z_i ∈ D, z_i ≠ z_j} ∇^2 L(z_j) ∇L(z_i).

This expectation of the product is not the product of the expectations; indeed, it equals

1/(n(n−1)) [ ∑_{z ∈ D} ∇^2 L(z) ∑_{w ∈ D} ∇L(w) − ∑_{z ∈ D} ∇^2 L(z) ∇L(z) ],

and this is equal to

n/(n−1) · ∇^2 L ∇L − 1/(n−1) · 𝔼_{z ∈ D}[∇^2 L(z) ∇L(z)].

That can be rewritten as

∇^2 L ∇L − 1/(2(n−1)) · ∇ 𝔼_{z ∈ D}[ ‖∇L(z) − ∇L‖^2 ].

By observing that the terms of this form are k(k−1)/2, and that those are the only ones in which η appears to the power 2, we conclude the proof of what corresponds to the main result of <cit.>. This is enough to explain the trajectory in the setting in which we have a very small learning rate (η ≪ 1/k) and a bounded Hessian. The proof of our result will follow from a generalization of this technique to compute expectations.

§.§ Drift and Diffusion

When analyzing random dynamics we usually have two factors that contribute to the dynamics: the drift and the diffusion terms. To make this clearer, the reader can think of the example of a stochastic differential equation

dX_t = b(X_t) dt + σ(X_t) dB_t, with B_t a Brownian motion.

The evolution of X_t is governed by the functions b and σ. The effect of b(X_t) is clear: it pushes deterministically in a precise direction; essentially,

movement due to the drift ∼ ∫_0^t b(X_s) ds ∼ O(t) when b is consistently O(1).

The effect of the other summand is to add noise of size σ to the trajectory in a random direction, so that

movement due to the diffusion ∼ 𝒩(0, ∫_0^t σ^2(X_s) ds) ∼ O(√t) when σ is consistently O(1).

We can deal with discrete random dynamics in the same way, rewriting every step as

X_{i+1} = X_i + 𝔼[step] (drift) + (step − 𝔼[step]) (diffusion).

These two parts impact the trajectory in different ways, with different speeds. In particular, when we make use of tools such as concentration inequalities and the CLT, we are usually dealing with, or bounding, the effect of the diffusion of the process. When we make use of the LLN, we generally deal with the effect of drifts.

The regularizer is a drift. The effect of SGD without replacement that we unveil is a drift-like effect, not an effect of the diffusion. This is a story about the absence of stochasticity that we thought was there, not about the effect of the noise. In particular, the effect we unveiled is not a discrete version of the Fokker-Planck theory or another kind of effect of the diffusion part. What we are observing in <ref> is that there exists a bias that points in a certain direction; moreover, the size of the diffusion is reduced by a related amount. This observation is in line with several existing works, e.g., <cit.>.
Indeed, they show that even in the case of diffusive independent noise at every step, such as label-noise injection or SGD with replacement, the implicit regularization effect may be due to drifts coming from the higher-order Taylor terms, not from diffusion-powered regularization.

§.§ On the Shape of the Regularizer

Statistical intuition. Around the manifold of minima, where the full-batch gradient of the loss is nearly zero, the regularizer of <ref> can be rewritten in many forms. The first one gives a statistical intuition and consists of

[ 1/n ∑_{z ∈ D} ∇L(z)^⊤ S ∇L(z) ] − [ 1/n ∑_{z ∈ D} ∇L(z) ]^⊤ S [ 1/n ∑_{z ∈ D} ∇L(z) ],

which we can write in terms of expectations as

𝔼[ ∇L(z)^⊤ S ∇L(z) ] − 𝔼[∇L(z)]^⊤ S 𝔼[∇L(z)].

This way of presenting the regularizer unveils the fact that, essentially, it is a variance, but seen through a different lens: not all directions are equally relevant, as the matrices S_i stretch them. This stretching implies that we are making bigger steps to reduce the variance in the directions in which S_i is bigger, thus in which ∇^2 L is smaller. This alone suggests the heuristic that, by reducing the variance of the gradients, SGD strongly regularizes the model in the directions in which we have less information.

Towards commutation. The quantity in <ref> is exactly the commutator of the two operators: the expectation 𝔼 and the PSD quadratic form q_S: v ↦ v^⊤ S v. Indeed, we can rewrite it as

𝔼[ q_S(∇L) ] − q_S( 𝔼[∇L] ).

The regularizer is thus pushing towards places where this is zero, i.e., where q_S and 𝔼 commute, implying that the noise shrinks. This gives some intuition on the geometrical nature of this regularizer: S here serves as a metric to measure the intensity of the noise, and it is a landscape-dependent metric. The metric given by S depends on the curvature of the landscape at that point.

Penalizing the trace. Another way to see the regularizer is by noticing that <ref> can be rewritten as

trace( S · Cov_D(∇L) ).

This is probably more congenial to the machine learning community. Most of the empirical work noticed how SGD acts on traces of the Hessian <cit.> or of the Fisher matrix <cit.>. This way of writing the regularizer immediately shows how SGD regularizes both, and functions as a bridge to those results.

§.§ Observations on the Regularizer

Linear scaling rule. A first important observation is that the step on the regularizer is exactly of size η/b. This aligns with what has been widely observed empirically. Starting with <cit.>, indeed, and continuing with, e.g., <cit.>, the community noticed that any regularization effect attributable to SGD empirically scales with the quantity η/b.

The effective step size c = ηk. Every time a covariance of two objects appears in the formula for the regularizer, the multiplying constant is c^2/(2n). Indeed, there are k terms of size η, so there are k(k−1)/2 covariances of size η^2, each multiplied by −1/n coming from <ref>. E.g., to the zeroth order in cλ we have

1/n · 𝔼[ (1/2) (c∇L(·))^⊤ (c∇L(·)) ] = c^2/(2n) · 𝔼[‖∇L(·)‖^2] = η/(b−1) · (c/2) · 𝔼[‖∇L(·)‖^2].

This appears also when dealing with the Hessian terms: indeed, H is defined as c^2/(2n) times 𝔼[(∇^2 L(·))^2] − (∇^2 L)^2, see <ref>.

Sanity checks. The c^2/2 that multiplies everything in <ref> is in reality c^2/2 = η^2 k(k−1)/2, so it cancels out if we are doing full-batch, as then k = 1.

Dependency and covariance. The SGD trajectory is different from GD's only when the Hessians and the gradients for different batches are significantly different.
Accordingly, the elements steering the difference between the two trajectories come up in the regularizer as

𝔼[(η∇^2 L(·))^2] − (η∇^2 L)^2 and Cov_{z ∈ D}( η∇_θ L(·) ).

This is natural, since the dependence makes covariance terms appear in the expectation of the product.

§ COMPARISON BETWEEN SGDS

A subtle, long-standing question is whether the fact that batches are disjoint rather than sampled i.i.d. matters in practice; in other words, whether the induced dependency is strong enough for the trajectories of SGD without replacement and of SGD with replacement (or noised GD) to get attracted to different minima. This is important to know because, while SGD without replacement is much faster and widely used in practice (see <ref>), most of the theory developed is for GD or for algorithms whose steps are independent (see <ref>). The question here thus is:

Do the trajectories of SGD without and with replacement, or the minima they get attracted to, qualitatively differ?

<ref> implies that the trajectories do significantly differ. SGD without replacement's regularization effect, indeed, is powered by the drift we found in <ref>, while SGD with replacement's is powered by the diffusion term, or by a bias arising from different elements. As a consequence, SGD without replacement (i) escapes many loss saddles and travels flat areas much faster, as already discussed in <ref>, (ii) converges to certain minima, as already shown in <ref>, and (iii) oscillates with a smaller variance than SGD with replacement. That said, the minima to which they converge could be the same, or as good, although they get there for different reasons, along different paths, and at different speeds.

§.§ Shape and Intensity of the Noise

We already showed in <ref> that SGD without replacement has a bias that allows it to travel many flat areas quickly, unlike SGD with replacement. From a certain perspective, moreover, SGD without replacement has less noise than SGD with replacement: its randomness comes from reshuffling the whole dataset once per epoch and then partitioning it, not from sampling a new independent batch at every step as for SGD with replacement. To get a sense of this lower amount of noise, one can think that the last batch of an epoch is deterministic given the previous ones, unlike in the case of SGD with replacement. Analogously, every batch but the first one is sampled with lower variance than the previous ones. This is also clear from plotting the trajectories, e.g., <ref> and <ref>.

As a heuristic, assume for simplicity that c∇^2 L ≪ 1. Then we know that the step on the regularizer is of size η/(b−1) · c/4. The variance of one step can be approximately computed with <ref>, and it is the same as for SGD with replacement but multiplied by approximately 1 − 1/b. Thus the size of the bias is increasing and the variance is decreasing in 1/b. Precisely, sampling without replacement makes the steps biased and decreases their variance.

§.§ Towards Low (Weighted-)Variance

The regularizer we found in <ref> penalizes the noise variance, through the lens of the matrix S:

Regularizer = 𝔼[ (∇L(z) − ∇L)^⊤ S (∇L(z) − ∇L) ].

It thus explicitly biases the dynamics towards areas where the noise variance is smaller. This implies that SGD without replacement not only has a smaller variance than SGD with replacement, but it also biases the dynamics towards areas where the variance of later steps is even smaller.
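The gap in the noise intensity can be seen directly. In the sketch below (ours; a scalar linear model with arbitrary data, all constants illustrative) we freeze θ and resample one epoch many times: without replacement, the first-order part of the epoch update is the full gradient regardless of the shuffle, so the update concentrates much more than with replacement.

```python
import numpy as np

rng = np.random.default_rng(2)
n, eta, theta0, trials = 8, 0.01, 1.0, 20000
x, y = rng.normal(size=n), rng.normal(size=n)

def one_epoch(theta, order):
    for i in order:                                  # batch size 1, n steps
        theta -= eta * x[i] * (theta * x[i] - y[i])  # per-sample gradient step
    return theta

without = [one_epoch(theta0, rng.permutation(n)) for _ in range(trials)]
with_   = [one_epoch(theta0, rng.integers(0, n, n)) for _ in range(trials)]

print(f"std of epoch update, without replacement: {np.std(without):.2e}")
print(f"std of epoch update, with replacement:    {np.std(with_):.2e}")
```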
This may also be the case for SGD with replacement <cit.>, although due to a different effect, known in the statistical physics community as thermophoresis and studied in diverse communities of mathematicians; see Fokker-Planck equations.

§.§ The Bias of SGD with Replacement

<cit.> showed that SGD with replacement also has a bias, i.e., the deviation between the trajectories of SGD with replacement and GD is non-zero. However, to find a term whose expectation is different from its full-batch version, one has to look deeper into the Taylor expansion. Precisely, into terms of the kind

η∇^3 L(B_i) [η∇L(B_j)]^2, i > j.

These terms fall into the approximation error of the bias of SGD without replacement; in particular, the bias effect of SGD with replacement is realistically orders of magnitude smaller.

Let θ^* be any global minimizer of L. Let R satisfy

R = (I − η∇^2 L(θ^*)) R (I − η∇^2 L(θ^*)) + ηλ(∇L(θ^*)).

Then the vector field of the expected direction of our steps is

∇̂L ≈ ∇[ L + λ⟨∇^2 L(θ), R⟩ ],

with λ a constant depending on the learning rate. Thus SGD with replacement minimizes the quantity ⟨∇^2 L(θ), R(ϑ)⟩ (here one does not differentiate R, only ∇^2 L(θ)) within the manifold in which the loss L is minimized.

§ BROAD APPLICABILITY AND LIMITATIONS

The primary objective of this section is to understand technically how our work compares to other studies, and to what extent the following question has been answered.

Question: Can we characterize the minima to which SGD converges? How?

We argue that, from a certain point of view, we employ the minimal and most natural set of assumptions for such an analysis. We discuss the broad applicability of this approach in <ref>. We discuss the limitations of our findings in <ref>.

§.§ Generality and Optimality of our Approach

As previously discussed in <ref>, there are several reasons for conducting a local analysis. This is primarily due to the inapplicability of "global" methods such as the stochastic approximation of Robbins and Monro. Also, there is a lack of understanding of the geometry of the manifolds. Essentially, conducting a local analysis means expanding in a Taylor series or using a similar approximation technique. In this context, as explained in <ref>, a Taylor expansion over the k steps of an epoch can be represented as a sum of products of terms of the form

coefficient · η · k · function( ∇^i L(·, ·) )

for some i ∈ ℕ, some real coefficient, and a function that could be, e.g., an average over batches, and so on. For such Taylor expansions to converge and represent our trajectory, it is thus necessary that some of the terms above are small. As argued in <ref>, ηk∇L appears in every term, and it is the only such term that vanishes throughout the training. This happens because η is annealed and ∇L converges to 0 approaching a stationary point. Moreover, no matter the size of the other derivatives, if ηk∇L ≪ 1 the series generally converges. Consequently, for the Taylor series to converge, or analogously for a local analysis to be performed, requiring the assumption η∇L ≪ 1 is the most natural of the possible necessary conditions. An analysis based solely on this assumption is thus highly versatile, arguably the broadest in scope among local analyses. Whenever one can use a local argument, one can expand in Taylor; in particular, generally η^{1+} · k · ∇L ≪ 1.
Thus our analysis characterizes the dynamics. Our approach differs from past community efforts that relied on stricter assumptions such as ηk ≪ 1 <cit.>, or on conditions involving other higher-order derivatives. The only works that could relax these assumptions usually assume independence between single steps, e.g., <cit.>. We demonstrated that the inherent effect of SGD without replacement is due to the dependence. This means that such results, while still instrumental in addressing the effect of other algorithms, do not apply to our setting.

§.§ Limitations: Broader Scope Implies Weaker Results

In the context of small learning rates <cit.> or independent steps <cit.>, the results are mathematically elegant: we have a penalized loss on which GF or GD mirrors the trajectory of SGD. Those works deliver insightful heuristics and theorems about where the trajectory leads and about the generalization improvements. In our context, however, as discussed in <ref>, without η ≪ 1 and with dependent steps, we cannot explain convergence using those results and techniques. This limitation hampers our ability to establish the beautiful convergence results that mark the earlier studies. We will investigate this in future work; however, the absence of such strong convergence results would not be entirely surprising. It would align with the understanding that in many contexts there is no such thing as a potential that the algorithms are minimizing. This has been shown, for instance, for gradient flow for ReLU networks by <cit.>. It has also been discovered by <cit.>, and later made rigorous by <cit.>, for the case of SGD, for which it is possible that SGD cycles. We can thus only hope to better characterize the trajectories. This is exactly what we manage to do with our technique. Our <ref> characterizes the path-dependent regularizer of the algorithm under analysis.

In general, we were unable to describe our results, unlike previous work, as "SGD without replacement works as a specific (different) optimization algorithm on a modified loss". This, however, may not be a shortcoming of our analysis. Rather, it could underscore the nature of the effect, which proves more challenging to describe mathematically than many earlier hypotheses. That said, heuristically the meaning of <ref> is similar to working with a penalized loss: if the regularizer is not minimized, it guides the trajectory towards areas where it decreases.

Returning to the main question of this section, we could not precisely define the minima to which SGD converges, in contrast to previous studies that have achieved such a goal in different contexts. However, the task is likely unattainable in full generality. Identifying the specific cases where this is possible is beyond the scope of the current work.

§ CONCLUSIONS

Conclusion. We showed that SGD without replacement implicitly regularizes by biasing the dynamics towards areas with lower variance. This is due to the dependence of the steps and manifests as a drift-like effect, not attributable to diffusion or to a Fokker-Planck-like argument. We demonstrate that this leads to a form of regularization, enabling the algorithm to navigate through flat areas more quickly and with fewer oscillations than expected.

Future directions. Our work, among others <cit.>, highlights the critical need to focus on understanding the geometry of the optimization landscape in machine learning. The regularizer we identified is highly dependent on the geometry of the landscape.
It has become clear that the landscape's shape, particularly its curvature and higher-order derivatives, plays a more significant role than previously thought. This goes beyond just the noise within Stochastic Gradient Descent (SGD) or the effect of discretization. The way these geometrical aspects interact with algorithms like SGD leads to unique trajectories and convergence behaviors. Moving forward, we should explore how these geometrical features influence optimization, especially in non-linear models like neural networks, and induce the unexpected phenomena we see in learning. Understanding the interplay between the landscape's curvature and geometry and algorithmic behavior in non-convex settings is key. This approach promises to unveil deeper insights into the optimization process, and thus into the properties of the neural networks used in practice. Also, this article is about the point-wise regularization effect along the trajectory; the regularization effect that we unveil is path-dependent. In the future, we need to investigate what this implies for convergence.

§ THE VARIANCE OF SGD WITHOUT REPLACEMENT

§.§ What we Know

<cit.> computed the variance of one step of SGD without replacement. They showed that, starting from the minimum of a strongly convex objective function, the variance of one step of SGD is upper bounded by O(ηc∇^2 L), while for SGD with replacement it is upper bounded by O(η λ_max(∇^2 L)/λ_min(∇^2 L)). They also observed empirically that SGD without replacement in some strongly convex settings has higher variance at initialization. However, they noted that the variance for SGD without replacement decreases more rapidly: after approximately O(1) epochs, its variance becomes lower than that of SGD with replacement, leading to faster convergence. They also have a result for non-convex scenarios; yet, it applies in the regime where cλ_max(∇^2 L) < 1/√(2b), which may be too restrictive for practical applications.

Nevertheless, these findings alone help explain the smaller variance sizes observed in certain settings. In some cases, indeed, as the dynamics converges, it gets trapped in invariant sets like those identified by <cit.>. Within these invariant sets, the dynamics might exhibit local strong convexity. This is the case for diagonal networks, for example, but typically not for general neural networks <cit.>. For instance, consider the toy model (a·b·x − y)^2, where a, b, x, y ∈ ℝ^+, of <ref>. In this model, the dynamics are driven to the hyperplane where a = b, either by the regularizer in SGD without replacement or by diffusion in SGD with replacement, and get trapped there. When restricted to the subspace a = b, the objective function becomes locally strongly convex, specifically (a^2 x − y)^2. If x, y > 0, near the minimum at a = √(y)/√(x), oscillations are observed to be of size O(η^2) for SGD without replacement, compared to O(η) for SGD with replacement. Nevertheless, in practical scenarios where the Hessian has large or zero eigenvalues, these worst-case results from convex optimization may not fully explain the trajectory or the size of the variance.
§.§ When the Variance is Smaller

It is easy to see that, in the regime in which c∇^2 L ≪ 1, the update to the parameters resulting from an epoch of SGD without replacement is approximately deterministic; indeed,

θ^SGD_k − θ = −η ∑_{i=1}^k ∇L(B_i) + O(c∇^2 L) = −c∇L + O(c∇^2 L).

This already suggests that: (i) we have a big variance only in the presence of big eigenvalues of the Hessian and, in that case, only in the direction of the eigenvectors of the big eigenvalues; and (ii) we can expect the variance of SGD without replacement to be smaller than that of SGD with replacement when some of the key ingredients of the trajectory are small. Moreover, in the case in which our discretization explains the trajectory, i.e., η∇L ≪ 1, the variance of SGD without replacement gets shrunk by the addition of the regularizer through the term −𝔼[trajectory without replacement]^2.

We can thus conclude that the variance of the noise in one epoch of SGD without replacement is smaller than that with replacement at least when (i) the Hessian is small, c∇^2 L ≪ 1, or (ii) the gradients are small, c∇L ≪ 1. We can go even further: consider, at any epoch of SGD, the update in the direction of the eigenvector v_λ of the eigenvalue λ of the Hessian.

* If cλ ≪ 1 and ⟨v_λ, ∇L⟩ ≠ 0, i.e., the eigenvalue is part of the bulk of small ones, then SGD without replacement has very small variance in that direction, unlike SGD with replacement. The push in that direction, to first approximation, is due only to full-batch gradient descent and to the regularizer.

* For all λ, even the big ones, if ⟨v_λ, ∇L⟩ ≪ 1, then SGD without replacement has a small variance in that direction, in particular smaller than SGD with replacement. The push in that direction, to first approximation, is due only to the regularizer and, if ηλ > 1, to the oscillations of GD.

§ PRACTICAL EXAMPLE: THE SETTING FOR THE PLOTS

The setting of <ref> is the easiest possible neural network setting trainable with SGD. We have f(θ, x) = θ_1·θ_2·x, a shallow linear network with width, input, and output dimensions equal to 1. The dataset consists of 3 points, D = {(1,1), (1,2), (1,0)}, and the chosen loss is the MSE. Precisely, the loss is L(θ) = (θ_1θ_2 − 1)^2 + (θ_1θ_2 − 2)^2 + (θ_1θ_2)^2, with initialization at the white bullet θ = (1,6). The lowest-norm global minimum is ±(1,1).

This setting shares the peculiarities of linear diagonal network scenarios; in particular, our experiments extend to bigger linear diagonal networks. Precisely, we have that (i) the minimum with the widest valley is also the min-norm solution, (ii) the min-norm solution coincides with the global minimum that zeros out our regularizer, and (iii) the global minima have loss > 0. The data points have the same x but different y; thus the SGD noise never fades out and keeps pushing in the direction of the regularizer. Moreover, this makes SGD with replacement coincide with a version of GD with label noise. This is not a bad thing, as label noise generally comes with better and well-understood generalization capabilities <cit.>. The code that generated the figures can be found at <https://github.com/PierBeneventano/SGD_without_replacement>.
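For a quick qualitative reproduction, independent of the repository above, here is a minimal sketch of ours (learning rate and epoch count are illustrative assumptions, and the drift is slow, so the constants matter). Full-batch GD approximately preserves θ_1^2 − θ_2^2 and thus converges to an unbalanced minimum, while SGD without replacement should slowly drift along the minima towards the balanced, minimum-norm one near ±(1,1); the printed imbalance |θ_1| − |θ_2| tracks this.

```python
import numpy as np

rng = np.random.default_rng(3)
data = [(1.0, 1.0), (1.0, 2.0), (1.0, 0.0)]     # the three (x, y) points above

def grad(t, batch):
    # gradient of the mean of (t1*t2*x - y)^2 over the batch
    g = np.zeros(2)
    for x, y in batch:
        r = 2.0 * (t[0] * t[1] * x - y) * x / len(batch)
        g += np.array([r * t[1], r * t[0]])
    return g

eta, epochs = 0.02, 20000
t_gd, t_sgd = np.array([1.0, 6.0]), np.array([1.0, 6.0])
for _ in range(epochs):
    t_gd = t_gd - eta * grad(t_gd, data)                 # full batch
    for i in rng.permutation(len(data)):                 # one epoch, no replacement
        t_sgd = t_sgd - eta * grad(t_sgd, [data[i]])

print("GD :", np.round(t_gd, 3), " imbalance:", round(abs(t_gd[0]) - abs(t_gd[1]), 3))
print("SGD:", np.round(t_sgd, 3), " imbalance:", round(abs(t_sgd[0]) - abs(t_sgd[1]), 3))
```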
§ THE EFFECT OF MINI-BATCHING

The goal of this section is to understand the role of the following two properties of SGD without replacement:

* The fact that the mini-batches add up to the dataset.

* The fact that, once the mini-batches are sampled, we do not care about their order.

The first property above is shared by SGD without replacement and Shuffle Once, and the second by SGD with and without replacement; however, once we take the expectation over the possible outcomes of the initial shuffling, the second holds for Shuffle Once as well. We argue that it is the fact that we do not care about the order that allows us to rewrite our quantity as the derivative of a penalization. These properties are not shared, e.g., by Adam.

§.§ Fixed Batches SGD Trajectory

We address here the following question:

Question: Assume we have already sampled the batches B_1, B_2, …, B_k (now fixed) and we make k steps of SGD using them. How far are we from the GD trajectory?

For the reasons described in <ref> and <ref>, we expand k steps of SGD and GD with learning rate η at the beginning of the epoch, and we keep the terms in which ∇L(·) appears to the power 1. Writing the difference between this quantity in the case of SGD and in the case of GD, we obtain the following.

The deviation due to mini-batches from the trajectory of GD is

η^2 ∑_{1 ≤ i < j ≤ k} ∇^2 L(B_j) [ ∏_{h=i+1}^{j−1} [I − η∇^2 L(B_h)] ] ∇L(B_i) − (the same but full-batch: B_j ↶ D for all j ≤ k),

up to first order in ηk∇L(B_i), i = 1, 2, …, k. The error is of order O(η^{2+} k^3 ∇L^2).

More details about the computation are in the proof in <ref>; more details about the size of the error are in <ref>.

§.§ The Implicit Bias of the Order

We derived a formula for the deviation in the trajectory, which depends on the order in which we observe the batches. It thus makes sense to examine the expectation and the standard deviation of this deviation from the GD trajectory over the uniformly random order of the batches.

Assume we already sampled the batches B_1, B_2, …, B_k (fixed). How far do we go from the GD/GF trajectory, on average over the possible orders of the batches?

To do so, we consider the following quantity:

(η^2/2) ∇_θ ∑_{1 ≤ i < j ≤ k} [∇L(B_j)]^⊤ [ ∏_{h=i+1}^{j−1} [I − η∇^2 L(B_h)] ] ∇L(B_i).

The derivative of this quantity in θ is

(η^2/2) ∑_{1 ≤ i < j ≤ k} ∇^2 L(B_j) [ ∏_{h=i+1}^{j−1} [I − η∇^2 L(B_h)] ] ∇L(B_i) (this is <ref>)

+ (η^2/2) ∑_{1 ≤ i < j ≤ k} ∇^2 L(B_i) [ ∏_{h=i+1}^{j−1} [I − η∇^2 L(B_h)] ]^⊤ ∇L(B_j) (the same with i, j flipped)

+ (η^2/2) ∑_{1 ≤ i < j ≤ k} ∇L(B_j)^⊤ [ ∇_θ ∏_{h=i+1}^{j−1} [I − η∇^2 L(B_h)] ]^⊤ ∇L(B_i) (error term).

In the first line we have half of the term in <ref>, and in the second line we have the same quantity but with the indices switched. The third line is part of the error, as it constitutes a term of second order in η∇L. This is the key idea of the proof of the following theorem, as well as of the main theorem <ref>.

With an additive error smaller than the quantity below times O(ηk∇L + 1/k), we have

𝔼_{order of batches}[θ_k^SGD] = θ_k^GD − η∇_θ Regularizer,

where the regularizer is

η 𝔼_order[ ∑_{1 ≤ i < j ≤ k} ∇L(B_j)^⊤ [ ∏_{h=i+1}^{j−1} [I − η∇^2 L(B_h)] ] ∇L(B_i) ] − (the same but full-batch: B_j ↶ D for all j ≤ k).

We can thus conclude that: (i) the reason why we can rewrite this deviation as the derivative of a potential is that we do not care about the order in which we see the batches; and (ii) this happens for every SGD technique once we sample the batches, irrespective of how we sampled them. A numerical sanity check of the statement follows.
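Here is a small verification of ours (scalar data and diagonal quadratic per-sample losses, so that third derivatives vanish and the expansion above is exact): it enumerates all k! orders of k fixed singleton batches, averages the deviation of k SGD steps from one big GD step, and compares it with a finite-difference gradient of the order-averaged potential above.

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(4)
d, k, eta = 2, 4, 0.1
A = [np.diag(rng.uniform(0.5, 2.0, size=d)) for _ in range(k)]  # batch Hessians
b = [rng.normal(size=d) for _ in range(k)]
g = lambda th, i: A[i] @ th - b[i]                               # batch gradients

def deviation(th0, order):
    th = th0.copy()
    for i in order:                                    # k small SGD steps
        th = th - eta * g(th, i)
    gd = th0 - eta * sum(g(th0, i) for i in range(k))  # one big GD step
    return th - gd

def potential(th, order):
    tot = 0.0
    for a_ in range(k):
        for c_ in range(a_ + 1, k):
            P = np.eye(d)
            for h in order[a_ + 1:c_]:
                P = P @ (np.eye(d) - eta * A[h])
            tot += g(th, order[c_]) @ P @ g(th, order[a_])
    return 0.5 * eta ** 2 * tot

th0 = rng.normal(size=d)
orders = list(permutations(range(k)))
mean_dev = np.mean([deviation(th0, o) for o in orders], axis=0)

eps, grad_V = 1e-6, np.zeros(d)
for j in range(d):
    e = np.zeros(d); e[j] = eps
    grad_V[j] = (np.mean([potential(th0 + e, o) for o in orders])
                 - np.mean([potential(th0 - e, o) for o in orders])) / (2 * eps)

print(mean_dev, grad_V)        # the two agree to high precision for quadratics
assert np.allclose(mean_dev, grad_V, atol=1e-7)
```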
§ THE PHILOSOPHY OF THE ANALYSIS

We briefly present here the philosophy and tools of our analysis and proof. Our analysis consists of two main phases: (i) writing down the low-order part of the deviation and noticing that it is the derivative of a quantity, and (ii) computing its moments.

Precisely, we start by expanding in Taylor, at the beginning of the epoch, the shift in trajectory between the SGD of learning rate η with batches B_1, B_2, …, B_k and the GD of learning rate η·k. We do this by iteratively considering the shift after i steps with respect to the GD with learning rate η·i on ∪_{j=1}^i B_j. Indeed, we find that at every step i the shift gets multiplied by I − η∇^2 L(B_i), we add the terms η^2 ∇^2 L(B_i) ∑_{j<i} ∇L(B_j) (this will become the regularizer), and the higher-order terms of the Taylor expansion will become the error. The part highlighted is the leading part in the case in which η∇L(B_1), η(∇L(B_1) + ∇L(B_2)), …, η∑_{j=1}^k ∇L(B_j) are small. It is important to notice that later we will weaken these assumptions: precisely, if we first take the expectation of the Taylor expansion of the error and then highlight this part, and not vice versa, we will be able to assume only that the expectations are small. This will imply that the result works in the case in which η^{1+} k ∇L ≪ 1.

We continue by noticing that, after adding terms that are O(ηk∇L)^2 and thus fall into the error part, this deviation between the trajectories coincides with the derivative of a quantity. Now, by subtracting from what we obtained the analogous term in the setting in which B_i = D for all i, i.e., the shift between small-step and big-step GD, we obtain our regularizer (still as a function of the batches; no expectations are involved yet). We conclude by computing the moments of these quantities arising from the Taylor series in the setting without replacement. In this setting, we develop a lemma that helps us compute the moments of the quantities in play. More details about this are in <ref>.

§ DEALING WITH EXPECTATIONS

To prove our main result, we need to be able to compute all the expectations that we encounter. Those are taken over the SGD sampling procedure, i.e., sampling without replacement:

[Sampling without replacement] Assume z_1, z_2, …, z_k are sampled in one of the following two equivalent ways.

* {z_1, z_2, …, z_k} is a set of k different elements that has been uniformly sampled from the family of all the possible subsets (without multiple copies) of cardinality k of a set D with cardinality n.

* We sample z_1 uniformly from D, z_2 uniformly from D∖{z_1}, …, and z_k uniformly from D∖{z_1, z_2, …, z_{k−1}}.

All the expectations in what follows will be over this sampling procedure. Indeed, if we see a batch or a data point, we will not see it again until the end of the epoch.

We will assume we have a sequence of functions that take in input elements of D and output tensors such that the shapes of the tensors match and we can take products. Precisely, for all i ∈ {1, 2, …, k} we have f_i: D → ℝ^{n_i × n_{i+1}}, where n_1, n_2, …, n_{k+1} ∈ ℕ. In the following, these f_i will take a training data point z ∈ D in input and output a derivative of the loss in the parameters evaluated at that data point, e.g.,
f_1(z_1) = I − ∇_θ^2 L(θ, z_1).

§.§ Lemma for Expectations Without Replacement

Let D = {z_j}_{j ∈ {1, 2, …, n}} ⊆ ℝ^d, and let B_1, B_2, …, B_k be batches of size b sampled without replacement from D, see <ref>. Let A ∈ ℝ^{d×d}, denote by Z_i := (1/b) ∑_{z ∈ B_i} z the average of the vectors in the i-th batch, and by z̅ := (1/n) ∑_{z ∈ D} z. Then

𝔼_{[without replacement]}[ ∑_{i,j=1}^h Z_i^⊤ A Z_j ] − h^2 z̅^⊤ A z̅ = h(n − hb)/(b(n−1)) · ( 𝔼_{z ∈ D}[z^⊤ A z] − z̅^⊤ A z̅ ).

Denote by α := 𝔼_{z ∈ D}[z^⊤ A z] − z̅^⊤ A z̅. Note that

𝔼_{[without replacement]}[ ∑_{i,j=1}^h Z_i^⊤ A Z_j ] = 𝔼_{[without replacement]}[ ∑_{1 ≤ i ≠ j ≤ h} Z_i^⊤ A Z_j + ∑_{1 ≤ i ≤ h} Z_i^⊤ A Z_i ]
= h(h−1) ( z̅^⊤ A z̅ − α/(n−1) ) + h ( (b−1)/b · z̅^⊤ A z̅ − (b−1)/b · α/(n−1) + (1/b) 𝔼_{z ∈ D}[z^⊤ A z] )
= h^2 z̅^⊤ A z̅ − 1/(n−1) · (h^2 − h/b) α + (h/b) α
= h^2 z̅^⊤ A z̅ − (h^2 b − hn)/(b(n−1)) · α.

§.§ The Main Lemma

Assume we are in <ref> above. Then, when S is sampled as in <ref> of <ref>, the expectation 𝔼[f_1(z_1) · f_2(z_2) · … · f_k(z_k)] is equal to

[ ∏_{i=0}^{k−1} (n−i) ]^{−1} ∑ ∏_{i=1}^k f_i(z_i),

where the sum is taken over all the possible k-tuples (z_1, z_2, …, z_k) of k different elements of D.

By averaging over the sampling technique in <ref>, we obtain quantities like the following, in the case in which the f_i commute:

[ ∏_{i=0}^{k−1} n/(n−i) ] ∑_{j=1}^k ∑_{σ ∈ Π} ∑_{0 = i_0 < i_1 < … < i_j = k} ∏_{l=1}^{j} [−1/n]^{i_l − i_{l−1} − 1} 𝔼_{z ∈ D}[ ∏_{h = i_{l−1}+1}^{i_l} f_{σ(h)}(z) ],

where Π is the set of all the possible permutations of {1, 2, …, k}. That is, the sum of all the possible products of expectations of products of the functions, in which each expectation of a product of l functions is multiplied by [−1/n]^{l−1}. Another way to rewrite it is [∏_{i=0}^k n/(n−i)] multiplied by

∏_{i=1}^k 𝔼[f_i] − n^{−1} ∑_{i ≠ j} 𝔼[f_i f_j] ∏_{m ≠ i,j} 𝔼[f_m] + n^{−2} ∑_{i ≠ j ≠ h ≠ l} 𝔼[f_i f_j] 𝔼[f_h f_l] ∏_{m ≠ i,j,h,l} 𝔼[f_m] + …

Let us now prove the above. The first part, <ref>, follows directly from the definition of average. We prove the rest by induction. Note that for k = 1, 2 we have the result in <ref> already (base case). Indeed, note that

𝔼_{D∖{z}}[f] = 1/(n−1) ∑_{x ∈ D∖{z}} f(x) = n/(n−1) [ 𝔼_D[f] − f(z)/n ],

and by plugging this in,

𝔼_{z_1 ∈ D}[ f_1(z_1) · 𝔼_{z_2 ∈ D∖{z_1}}[f_2(z_2)] ] = n/(n−1) 𝔼_D[f_2] 𝔼[f_1] − 1/(n−1) 𝔼_D[f_1 f_2].

Let us spell out the step k = 3 to better explain what is happening:

𝔼_{z_1 ∈ D}[ f_1(z_1) · 𝔼_{z_2 ≠ z_3 ∈ D∖{z_1}}[f_2(z_2) f_3(z_3)] ]
= 𝔼_{z_1 ∈ D}[ f_1(z_1) · ( (n−1)/(n−2) 𝔼_{D∖{z_1}}[f_2] 𝔼_{D∖{z_1}}[f_3] − 1/(n−2) 𝔼_{D∖{z_1}}[f_2 f_3] ) ]
= 𝔼_{z_1 ∈ D}[ f_1(z_1) · ( (n−1)/(n−2) · n/(n−1) [𝔼_D[f_2] − f_2(z_1)/n] · n/(n−1) [𝔼_D[f_3] − f_3(z_1)/n] − 1/(n−2) [ n/(n−1) 𝔼_D[f_2 f_3] − f_2(z_1) f_3(z_1)/(n−1) ] ) ].

Next, by noting that

(n−1) · f_2(z_1)/(n−1) · f_3(z_1)/(n−1) = f_2(z_1) f_3(z_1)/(n−1),

we conclude that

𝔼_{z_1 ≠ z_2 ≠ z_3 ∈ D}[f_1(z_1) f_2(z_2) f_3(z_3)] = n^3/(n(n−1)(n−2)) 𝔼[f_1] 𝔼[f_2] 𝔼[f_3]
− n^2/(n(n−1)(n−2)) ( 𝔼[f_1 f_2] 𝔼[f_3] + 𝔼[f_1] 𝔼[f_2 f_3] + 𝔼[f_1 · 𝔼[f_2] · f_3] )
+ 2n/(n(n−1)(n−2)) 𝔼[f_1 f_2 f_3].

More generally, we will have all the possible products showing up with the multiplier (−n)^{−order+1}. Indeed, as regards the inductive step, note that by the tower rule

𝔼_D[ f_1(z_1) · f_2(z_2) · … · f_k(z_k) ] = 𝔼_{z_1 ∈ D}[ f_1(z_1) · 𝔼_{D∖{z_1}}[ f_2(z_2) · … · f_k(z_k) ] ],

so the expectation for z_1, z_2, …, z_k ∈ D without replacement is the same as the expectation over z_1 ∈ D of the expectation for z_2, z_3, …, z_k ∈ D∖{z_1} without replacement.
Now, applying the inductive hypothesis for k ↶ k−1 and D ↶ D∖{z_1} on 𝔼_{D∖{z_1}}[f_2(z_2) · … · f_k(z_k)], we obtain that:

* The multiplier is ∏_{i=0}^{k−1} n/(n−i).

* We have n^k ∏_{i=1}^k 𝔼[f_i] − n^{k−1} ∑_{i<j} 𝔼[f_i f_j] ∏_{h ≠ i,j} 𝔼[f_h] + …, and in general every possible combination of expectations of products of the functions appears, such that when we take the expectation of l functions together, the head coefficient is multiplied by (−1/n)^{l−1}.

From now on we perform the computations in the commutative case; they would not change at all in the non-commutative case, the bookkeeping would just be heavier. Let us assume the claim works for D∖{z_1} and k−1. Then the expectation in that setting is the following, and satisfies the points above:

[ ∏_{i=1}^{k} (n−1)/(n−i) ] [ ∏_{i=2}^k 𝔼_{D∖{z_1}}[f_i] − (n−1)^{−1} ∑_{i ≠ j} 𝔼_{D∖{z_1}}[f_i f_j] ∏_{m ≠ i,j} 𝔼_{D∖{z_1}}[f_m] + … ].

Thus, plugging this into <ref>, we obtain that 𝔼_D[f_1(z_1) · f_2(z_2) · … · f_k(z_k)] is equal to

𝔼_D[ f_1(z_1) · [ (n−1)^{k−1} ∏_{i=2}^k 𝔼_{D∖{z_1}}[f_i] − (n−1)^{k−2} ∑_{i ≠ j} 𝔼_{D∖{z_1}}[f_i f_j] ∏_{m ≠ i,j} 𝔼_{D∖{z_1}}[f_m] ] ]

multiplied by [∏_{i=1}^k 1/(n−i)]. Next, note that

𝔼_{D∖{z}}[f] = 1/(n−1) ∑_{x ∈ D∖{z}} f(x) = n/(n−1) [ 𝔼_D[f] − f(z)/n ].

Applying this above, we obtain

[ ∏_{i=1}^k 1/(n−i) ] 𝔼_D[ f_1(z_1) · [ (n−1)^{k−1} ∏_{i=2}^k n/(n−1) [𝔼[f_i] − f_i(z_1)/n] − (n−1)^{k−2} ∑_{2 ≤ i ≤ j ≤ k} n/(n−1) [ 𝔼[f_i f_j] − f_i(z_1) f_j(z_1)/n ] ∏_{h ≠ i,j,1} n/(n−1) [𝔼[f_h] − f_h(z_1)/n] ] ]
= [ ∏_{i=0}^k 1/(n−i) ] 𝔼_D[ f_1(z_1) · [ n^{k−1} ∏_{i=2}^k [𝔼[f_i] − f_i(z_1)/n] − n^{k−2} ∑_{2 ≤ i ≤ j ≤ k} [ 𝔼[f_i f_j] − f_i(z_1) f_j(z_1)/n ] ∏_{h ≠ i,j,1} [𝔼[f_h] − f_h(z_1)/n] ] ],

where the average is now taken over D. Note that for all i, j it holds that

n^{k−1} · f_i(z_1)/n · f_j(z_1)/n · ∏_{h ≠ i,j,1}^k 𝔼[f_h] = n^{k−2} · f_i(z_1) f_j(z_1)/n · ∏_{h ≠ i,j,1}^k 𝔼[f_h],

and the same holds for all the other expectations of products of more functions. In general, the sign and power of n multiplying all the expectations of products of l terms is (−n)^{−l+1}. Precisely, we obtain

𝔼[ f_1(z_1) · f_2(z_2) · … · f_k(z_k) ] = [ ∏_{i=0}^k 1/(n−i) ] 𝔼_D[ f_1(z_1) · n^k [ ∏_{j>1} 𝔼[f_j] − ∑_{i=2}^k f_i(z_1)/n ∏_{j ≠ 1,i} 𝔼[f_j] ] − n^{k−1} ∑_{2 ≤ i < j} [ 𝔼[f_i f_j] ∏_{h ≠ i,j,1} [𝔼[f_h] − f_h(z_1)/n] ] + … ].

That is equal to the usual [∏_{i=0}^k 1/(n−i)] multiplied by

n^k ∏_{i=1}^k 𝔼[f_i] − n^{k−1} ∑_{i ≠ j} 𝔼[f_i f_j] ∏_{m ≠ i,j} 𝔼[f_m] + n^{k−2} ∑_{i ≠ j ≠ h ≠ l} 𝔼[f_i f_j] 𝔼[f_h f_l] ∏_{m ≠ i,j,h,l} 𝔼[f_m] + …

This concludes the proof of <ref>.

We take z_i = z_j for each of the k(k−1)/2 possible couples (i, j), multiplying by all the functions picked alone; then there are (1/2) · (k(k−1)/2) · ((k−2)(k−3)/2) terms for the case of two couples and all the rest alone, etc. When we take m couples together, the number of such terms is not k(k−1)·…·(k−2m+1)/2^m: there is an additional m! at the denominator. Moreover, these terms are divided by n^{m−1}. So they are both much fewer and much smaller; we will thus throw them into the error later.

§.§ Corollary for big n and k

Our setting is special: we can cancel out some terms because n, k ≫ 1. Assume that we have 3 or more functions f_1, f_2, f_3, …. The terms that multiply the expectation of the product of 3 or more of them, e.g. 𝔼_{z ∈ D}[f(z) g(z) h(z)], in the big sum <ref> can be thrown into the error part; indeed,

n^{−2} 𝔼_{z ∈ D}[f_1(z) f_2(z) f_3(z)] ∏_{i=4}^k 𝔼_{z ∈ D}[f_i(z)] = O(n^{−1}) · n^{−1} 𝔼_{z ∈ D}[f_1(z) f_2(z)] ∏_{i=3}^k 𝔼_{z ∈ D}[f_i(z)]

if 𝔼_{z ∈ D}[f_1(z) f_2(z) f_3(z)] = O( 𝔼_{z ∈ D}[f_1(z) f_2(z)] 𝔼_{z ∈ D}[f_3(z)] ).
Or, analogously,

C(k,3) n^{−2} 𝔼_{z ∈ D}[f_1(z) f_2(z) f_3(z)] ∏_{i=4}^k 𝔼_{z ∈ D}[f_i(z)] = O(k^{−1}) · C(k,4) n^{−2} 𝔼_{z ∈ D}[f_1(z) f_2(z)] 𝔼_{z ∈ D}[f_3(z) f_4(z)] ∏_{i=5}^k 𝔼_{z ∈ D}[f_i(z)]

if 𝔼_{z ∈ D}[f_1(z) f_2(z) f_3(z)] = O( 𝔼_{z ∈ D}[f_1(z) f_2(z)] 𝔼_{z ∈ D}[f_3(z) f_4(z)] ), where C(·,·) denotes the binomial coefficient. Using this and induction, we can thus conclude with a corollary of <ref>.

Assume the different averages over the dataset are of the same order, n, k ≫ 1, and that we are in <ref> above. Then, when S is sampled as in <ref> of <ref>, the expectation 𝔼[f_1(z_1) · f_2(z_2) · … · f_k(z_k)], up to a multiplicative error of size O(1/n), is equal to ∏_{i=0}^k n/(n−i) multiplied by

∏_{i=1}^k 𝔼[f_i] − n^{−1} ∑_{i ≠ j} 𝔼[f_i f_j] ∏_{m ≠ i,j} 𝔼[f_m] + n^{−2} ∑_{i ≠ j ≠ h ≠ l} 𝔼[f_i f_j] 𝔼[f_h f_l] ∏_{m ≠ i,j,h,l} 𝔼[f_m] + …,

where only the terms in which expectations of products of up to 2 elements appear are taken.

§.§ Corollary about Covariances

Secondly, note that we can rewrite this in terms of covariances instead of expectations of products. Indeed, for every term we have the analogous

n^{−1} 𝔼_{z ∈ D}[f_1(z) f_2(z)] ∏_{i=3}^k 𝔼_{z ∈ D}[f_i(z)] − n^{−1} ∏_{i=1}^k 𝔼_{z ∈ D}[f_i(z)] = n^{−1} Cov_{z ∈ D}(f_1(z), f_2(z)) ∏_{i=3}^k 𝔼_{z ∈ D}[f_i(z)],

and analogously for a higher number of expectations of products of terms:

n^{−2} 𝔼_{z ∈ D}[f_1(z) f_2(z)] 𝔼_{z ∈ D}[f_3(z) f_4(z)] ∏_{i=5}^k 𝔼_{z ∈ D}[f_i(z)] − n^{−2} 𝔼_{z ∈ D}[f_1(z) f_2(z)] ∏_{i=3}^k 𝔼_{z ∈ D}[f_i(z)] = n^{−2} 𝔼_{z ∈ D}[f_1(z) f_2(z)] Cov_{z ∈ D}(f_3(z), f_4(z)) ∏_{i=5}^k 𝔼_{z ∈ D}[f_i(z)]

and

n^{−2} 𝔼_{z ∈ D}[f_1(z) f_2(z)] Cov_{z ∈ D}(f_3(z), f_4(z)) ∏_{i=5}^k 𝔼_{z ∈ D}[f_i(z)] − n^{−2} Cov_{z ∈ D}(f_3(z), f_4(z)) ∏_{i ≠ 3,4} 𝔼_{z ∈ D}[f_i(z)] = n^{−2} Cov_{z ∈ D}(f_1(z), f_2(z)) Cov_{z ∈ D}(f_3(z), f_4(z)) ∏_{i=5}^k 𝔼_{z ∈ D}[f_i(z)].

This and induction prove the following. Assume the different averages over the dataset are of the same order, n, k ≫ 1, and that we are in <ref> above. Then, when S is sampled as in <ref> of <ref>, the expectation 𝔼[f_1(z_1) · f_2(z_2) · … · f_k(z_k)], up to a multiplicative error of size O(1/n), is equal to

∏_{i=1}^k 𝔼[f_i] − (n−1)^{−1} ∑_{i ≠ j} Cov(f_i, f_j) ∏_{m ≠ i,j} 𝔼[f_m] + ((n−1)(n−2))^{−1} ∑_{i ≠ j ≠ h ≠ l} Cov(f_i, f_j) Cov(f_h, f_l) ∏_{m ≠ i,j,h,l} 𝔼[f_m] + …,

where only the terms in which expectations of products of up to 2 elements appear are taken.

To conclude the proof, just note that the constant ∏_{i=0}^k n/(n−i) coincides with

∑_{i=0}^{k/2} [ ∏_{j=0}^{i} C(k−2j, 2) ] (−n^{−1})^i

and contains the following terms:

1 + C(k,2) (−n^{−1}) (the covariances) + C(k,2) C(k−2,2) (−n^{−1})^2 (the double covariances) + …,

and for all i there are exactly ∏_{j=0}^{i} C(k−2j, 2) terms with i couples taken together in the expectation, and those terms are multiplied by (−n^{−1})^i.
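The k = 3 case of the expectation formula above is exact and easy to test numerically; the sketch below (ours; scalar functions, so everything commutes) enumerates all ordered triples of distinct indices and compares with the inclusion-exclusion expression.

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(5)
n = 6
f1, f2, f3 = rng.normal(size=(3, n))   # three scalar functions on D

# Left side: expectation over ordered triples of distinct indices.
lhs = np.mean([f1[i] * f2[j] * f3[h] for i, j, h in permutations(range(n), 3)])

E = lambda a: a.mean()
c = 1.0 / (n * (n - 1) * (n - 2))      # the common constant 1/(n(n-1)(n-2))
rhs = c * (n**3 * E(f1) * E(f2) * E(f3)
           - n**2 * (E(f1 * f2) * E(f3) + E(f1) * E(f2 * f3) + E(f1 * f3) * E(f2))
           + 2 * n * E(f1 * f2 * f3))

print(lhs, rhs)
assert np.isclose(lhs, rhs)
```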
§ EXPANDING IN TAYLOR THE TRAJECTORIES

§.§ Notations

First, we recall the notations from <ref>. We denote by D = {z_i}_{i = 1, 2, …, n} ⊆ V the training set of size n. We have a parametric loss function (ϑ, z) ↦ L(ϑ, z) ∈ ℝ that takes as input the parameters ϑ ∈ Θ and the data z ∈ V and outputs a real number, admitting 3 weak derivatives in the parameters ϑ. For every set B ⊆ V we define L(ϑ, B) := (1/|B|) ∑_{z ∈ B} L(ϑ, z). The goal of the optimization procedure is to find a minimum θ^* ∈ Θ of the objective function L(ϑ, D).

For readability, we will often omit the inputs of the function L. Precisely, whenever we omit the parameter ϑ, we are evaluating the loss at the beginning of the epoch; we will denote the value of the parameters at the beginning of the epoch by θ. Whenever we omit the set of inputs, we are evaluating the loss over the whole training set D. Moreover, all the derivatives we will take will be in the parameters ϑ, and all the expectations will be empirical expectations over a set B ⊆ V. As an example, L means L(θ, D), and ∇L(B) means ∇_θ L(θ, B).

Moreover, we will denote by θ_i^SGD the parameters after the i-th step of SGD without replacement or, analogously, of Shuffle Once, starting from θ_0^SGD = θ, with learning rate η > 0. In this section we will also examine other algorithms, such as SGD with replacement, GD, or GD with injected noise. Analogously to SGD without replacement, we will denote by θ_i the parameters after the i-th step of the other algorithm under examination, with initialization θ_0 = θ and learning rate η > 0.

Let b ∈ ℕ be the batch size and k ∈ ℕ the number of optimization steps considered at once. Generally, k ≤ n/b will be, and can be thought of as, the number of batches or the number of steps in an epoch. We denote by B_i ⊆ D the batch used in the i-th step, i ≤ k. Denote c := ηk and H := (c^2/(2n)) ( 𝔼[(∇^2 L)^2] − (∇^2 L)^2 ). Finally, we denote by A^† the Moore-Penrose inverse of the matrix A and, for a PSD matrix S, we denote ‖v‖_S^2 := v^⊤ S v. For a vector or a matrix, we denote the components with indices in subscripts; e.g., for A ∈ ℝ^{d×d} and i, j ≤ d, we denote by A_{i,j} the (i,j) component of the matrix A.

§.§ Computing the Deviation between SGD and Big-Learning-Rate GD

In the following subsections of this section <ref> we prove <ref>. This is a first step towards the proof of <ref>. Let us write down the difference between SGD and a GD with a different learning rate. Precisely, we compare here θ_k^{SGD,η}, i.e., the parameters after k steps of SGD with learning rate η and batches B_1, B_2, …, B_k, and θ_1^{GD,kη}, i.e., the parameters after one step of GD with learning rate kη on the dataset D_k := ∪_{i=1}^k B_i. Precisely, note that

θ_k^{SGD,η} − θ_1^{GD,kη} = θ_{k−1}^{SGD,η} − η∇L(θ_{k−1}^{SGD,η}, B_k) − θ_1^{GD,(k−1)η} + η∇L(θ, B_k).

So, defining this deviation Δ as below, we have that

Δ_k^SGD := θ_k^{SGD,η} − θ_1^{GD,kη} = Δ_{k−1}^SGD − η∇L(θ_{k−1}^{SGD,η}, B_k) + η∇L(θ, B_k).

Next, expanding in Taylor centered at θ, we have

Δ_k^SGD = Δ_{k−1}^SGD − η∇^2 L(B_k) [θ_{k−1}^{SGD,η} − θ] + higher-order Taylor rest
= Δ_{k−1}^SGD − η∇^2 L(B_k) [θ_1^{GD,(k−1)η} − θ + Δ_{k−1}^SGD] + higher-order Taylor rest.

That is equal to

[I − η∇^2 L(B_k)] Δ_{k−1}^SGD + η^2 ∇^2 L(B_k) [ ∑_{i=1}^{k−1} ∇L(B_i) ] + higher-order Taylor rest,

where we used that θ_1^{GD,(k−1)η} − θ = −η ∑_{i=1}^{k−1} ∇L(B_i). We now consider the lowest-order terms: let us thus define α_0^SGD = 0 and, for all k ∈ ℕ,

α_k^SGD = [I − η∇^2 L(B_k)] α_{k−1}^SGD + η^2 ∇^2 L(B_k) ∑_{i=1}^{k−1} ∇L(B_i).

Thus we obtain

α_k^SGD = η^2 ∑_{i=2}^k [ [ ∏_{j=i+1}^k [I − η∇^2 L(B_j)] ] ∇^2 L(B_i) [ ∑_{j=1}^{i−1} ∇L(B_j) ] ].

Next, note that by defining analogously α_k^GD in the case B_i = D for all i, we obtain the difference between the GD with small and big learning rates.

§.§ Reorganizing the Terms, 1

To conclude this part, the quantity in <ref> can be rewritten as

η^2 ∑_{i=2}^k [ [ ∏_{j=i+1}^k [I − η∇^2 L(B_j)] ] ∇^2 L(B_i) [ ∑_{j=1}^{i−1} ∇L(B_j) ] ]
= η^2 ∑_{1 ≤ i < j ≤ k} [ ∑_{l < k−j} ∑_{j < h_1 < h_2 < … < h_l ≤ k} ∏_{m=1}^l (−η∇^2 L(B_{h_m})) ] ∇^2 L(B_j) ∇L(B_i).

In particular, for each batch B_i, in the sum above ∇L(B_i) is multiplied by every possible product of factors that are, for each j > i, either I or −η∇^2 L(B_j); to be precise, all but the product of all identities.
This can be rewritten, by grouping starting from the biggest non-identity index j instead of the smallest one, as

η^2 ∑_{1 ≤ i < j ≤ k} ∇^2 L(B_j) [ ∑_{l < j−i} ∑_{i < h_1 < h_2 < … < h_l < j} ∏_{m=1}^l (−η∇^2 L(B_{h_m})) ] ∇L(B_i)
= η^2 ∑_{1 ≤ i < j ≤ k} ∇^2 L(B_j) [ ∏_{h=i+1}^{j−1} [I − η∇^2 L(B_h)] ] ∇L(B_i).

§.§ Reorganizing the Terms, 2

Thus the quantity above can be rewritten in 3 ways. The first, which we have seen, is <ref>:

η^2 ∑_{i=2}^k [ [ ∏_{j=i+1}^k [I − η∇^2 L(B_j)] ] ∇^2 L(B_i) [ ∑_{j=1}^{i−1} ∇L(B_j) ] ].

The second is

η^2 ∑_{1 ≤ i < j ≤ k} ∇^2 L(B_j) [ ∏_{h=i+1}^{j−1} [I − η∇^2 L(B_h)] ] ∇L(B_i).

Finally, let us expand it into products of Hessians instead of I − η∇^2 L. Note that we will work in expectation later and that no expectation we will take changes under re-ordering; thus every moment of the quantity above will correspond to moments of

− ∑_{i=2}^k C(k,i) [ ∏_{j=2}^i [−η∇^2 L(B_j)] ] η∇L(B_1).

All this concludes the proof of <ref>.

§ PROOF OF <REF> AND <REF>

The first two subsections serve as the proof of <ref>. Later, we proceed with the proof of <ref>. Note, however, that if D = {B_1, B_2, …, B_k}, by taking n = k and calling B_i every z_i in the notations of <ref>, we obtain the setting of the already-sampled batches. From this point of view, <ref> is a particular case of <ref>. We will prove just <ref>, without loss of generality.

§.§ (Step 1) The Summands and the Potential

Note that every summand of <ref> is

∇^2 L(B_j) [ ∏_{h=i+1}^{j−1} [I − η∇^2 L(B_h)] ] ∇L(B_i),

and we have one for every 1 ≤ i < j ≤ k. Note that, expanding all the sums (remembering that ∇_θ^k L(B) = (1/|B|) ∑_{z ∈ B} ∇_θ^k L(θ, z)), we have that

𝔼( ∇^2 L(B_j) [ ∏_{h=i+1}^{j−1} [I − η∇^2 L(B_h)] ] ∇L(B_i) )
= 𝔼_{[z_i, z_{i+1}, …, z_j all different]}( ∇_θ^2 L(z_j) [ ∏_{h=i+1}^{j−1} [I − η∇_θ^2 L(z_h)] ] ∇_θ L(z_i) )
= 𝔼_{[z_i, z_{i+1}, …, z_j all different]}( ∇_θ^2 L(z_i) [ ∏_{h=i+1}^{j−1} [I − η∇_θ^2 L(z_h)] ] ∇_θ L(z_j) ),

where the second equality comes from the fact that we are averaging over the orders, so the order does not matter. Note that in lines 2 and 3 above we have the expectations of the elements in lines 1 and 2 of <ref>.

The Potential. As in <ref>, let us define the function V_{i,j}: ℝ^d → ℝ as follows:

V_{i,j}(θ) := [∇_θ L(B_j)]^⊤ [ ∏_{h=i+1}^{j−1} [I − η∇_θ^2 L(B_h)] ] ∇_θ L(B_i).

Note that, analogously to the paragraph above, 𝔼[V_{i,j}(θ)] is the expectation of

[∇_θ L(z_j)]^⊤ [ ∏_{h=i+1}^{j−1} [I − η∇_θ^2 L(z_h)] ] ∇_θ L(z_i),

where z_i, z_{i+1}, …, z_j ∈ D are all different. Note that the expectation is over a uniform distribution on a finite set, so we can exchange differentiation in θ and expectation over z ∈ D. The derivative in θ of 𝔼[V_{i,j}(θ)], as already hinted, is

𝔼_{[z_i, …, z_j all different]}( ∇_θ^2 L(z_j) [ ∏_{h=i+1}^{j−1} [I − η∇_θ^2 L(z_h)] ] ∇_θ L(z_i) )
+ 𝔼_{[z_i, …, z_j all different]}( ∇_θ^2 L(z_i) [ ∏_{h=i+1}^{j−1} [I − η∇_θ^2 L(z_h)] ] ∇_θ L(z_j) )
− 𝔼_{[z_i, …, z_j all different]}( ∑_{h=i+1}^{j−1} [∇_θ L(z_j)]^⊤ [ ∏_{l=h+1}^{j−1} [I − η∇_θ^2 L(z_l)] ] η∇_θ^3 L(z_h) [ ∏_{l=i+1}^{h−1} [I − η∇_θ^2 L(z_l)] ] ∇_θ L(z_i) ).

Here the first two lines are the expectations seen above, and the third falls into the error terms. Since, for the expectation, the only thing that matters is how many elements we are multiplying, we can write the expectation of the term in <ref> as

(η^2/2) ∇_θ ( ∑_{i<j} 𝔼_{[over the sampling of the batches]} V_{i,j}(θ) ) = (η^2/2) ∇_θ ( ∑_{1 < i ≤ k} (k−i+1) 𝔼_{[over the sampling of the batches]} V_{1,i}(θ) ),

minus the term where we exchange every ∇L(B_i) with ∇L and every ∇^2 L(B_i) with ∇^2 L.
This means that, if the mini-batches add up to the dataset, every epoch of SGD corresponds to the same number of steps of GD plus an additional step on the regularizer given by this term. Precisely, by observing that

(η^2/2) ∑_{1 ≤ i < j ≤ k} ∇L^⊤ [I − η∇^2 L]^{j−i−1} ∇L = (η^2/2) ∇L^⊤ [∇^2 L]^{2†} ( exp(−ηk∇^2 L) + ηk∇^2 L − I ) ∇L,

we conclude that

Reg = (1/2) ∑_{1 ≤ i < j ≤ k} η∇L(B_j)^⊤ [ ∏_{h=i+1}^{j−1} [I − η∇^2 L(B_h)] ] η∇L(B_i) − ∇L^⊤ [∇^2 L]^{2†} ( [I − η∇^2 L]^k + ηk∇^2 L − I ) ∇L.

§.§ (Step 2) Taking the Expectation of the Regularizer

We now compute the expectations of the step on the regularizer in <ref>. Recall from <ref> and from <ref> that we can write the expectation as the non-commutative version of (we copy here the commutative version for clarity)

∏_{i=1}^k 𝔼[f_i] − (n−1)^{−1} ∑_{i ≠ j} Cov(f_i, f_j) ∏_{m ≠ i,j} 𝔼[f_m] + ((n−1)(n−2))^{−1} ∑_{i ≠ j ≠ h ≠ l} Cov(f_i, f_j) Cov(f_h, f_l) ∏_{m ≠ i,j,h,l} 𝔼[f_m] + …

First, note that the part we are removing due to GD is exactly

∑ ∏ 𝔼[f_i] = ∇L^⊤ [∇^2 L]^{2†} ( exp(−ηk∇^2 L) + ηk∇^2 L − I ) ∇L (the GD part),

because the expectation of every derivative is the full-batch derivative. Let us analyze the step on the regularizer in the form <ref>. Precisely, we have to compute the following expectation:

− 𝔼_{[B_1, B_2, …, B_k all disjoint]}[ ∑_{i=2}^k C(k,i) [ ∏_{j=2}^i [−η∇^2 L(B_j)] ] η∇L(B_1) ],

where f_1 = ∇L and f_i = −η∇^2 L for all i > 1. We split this sum into 2 parts.

* The terms in which f_1 and some f_i are taken together in the covariances. Here, every summand of the quantity above can be rewritten as

𝔼_{B_1 ⊆ D}[ 𝔼_{[B_2, …, B_{j−1}, B_{j+1}, …, B_i all disjoint and disjoint from B_1]}[ ∑_{j=2}^i [ ∏_{h=j+1}^i [−η∇^2 L(B_h)] ] η∇^2 L(B_1) · [ ∏_{h=2}^{j−1} [−η∇^2 L(B_h)] ] η∇L(B_1) ] ] − …

We will better analyze this term in what follows.

* The terms in which f_1 appears alone (so that 𝔼_{z ∈ D}[f_1(z)] appears in the formula) are terms featuring the covariance of at least 2 Hessians. These terms disappear around a stationary point, as 𝔼_{z ∈ D}[f_1(z)] = η∇L ∼ 0 there:

− 𝔼_{[z_2, …, z_i all different]}[ ∑_{i=2}^k C(k,i) [ ∏_{j=2}^i [−η∇^2 L(z_j)] ] ] · η∇L.

§.§ (Step 3) Summing up

The term in item (i) above can be rewritten as follows:

𝔼_{z_1 ∈ D}[ 𝔼_{[z_2, …, z_{j−1}, z_{j+1}, …, z_i all different in D]}[ ∑_{i=2}^k C(k,i) ∑_{j=2}^i [ ∏_{h=j+1}^i [−η∇^2 L(z_h)] ] η∇^2 L(z_1) · [ ∏_{h=2}^{j−1} [−η∇^2 L(z_h)] ] η∇L(z_1) ] ] − …

We can rewrite the quantity above as follows:

∑_{i=2}^k C(k,i) ∑_{j=2}^i 𝔼_{[z_{j+1}, …, z_i all different in D]}[ ∏_{h=j+1}^i [−η∇^2 L(z_h)] ] 𝔼_{z_1 ∈ D}[ η∇^2 L(z_1) · 𝔼_{[z_2, …, z_{j−1} all different in D]}[ ∏_{h=2}^{j−1} [−η∇^2 L(z_h)] ] η∇L(z_1) ] − …

By observing that

Cov_{z ∈ D}( ∇^2 L(z), ∇L(z) ) = (1/2) ∇ 𝔼_{z ∈ D}[ ‖∇L(z) − ∇L‖^2 ],

we can reorganize the terms as follows:

Regstep = − c/(n−k) · ∇_θ ∑_{i=0}^{k−2} 𝔼_{[z_1, …, z_i all different in D]}[ ∏_{j=1}^i [−η∇^2 L(z_j)] ] 𝔼_{z ∈ D}( ‖∇L(z) − ∇L‖^2_{S_i} ),

where S_i is defined as

S_i := (η^2/(2c)) ∑_{j=0}^{k−i−2} C(k, i+j+2) 𝔼_{[z_1, …, z_j all different in D]}[ ∏_{h=1}^j [−η∇^2 L(z_h)] ].

Precisely, the ∏_i 𝔼[f_i] part, in the notation of <ref>, of this quantity is

S_i = (η^2/(2c)) [−η∇^2 L]^{−i−2} ( [I − η∇^2 L]^k − ∑_{j=0}^{i+1} C(k,j) (−η∇^2 L)^j ).

This means that, around stationary points and in the case in which 𝔼[(η∇^2 L)^2] − 𝔼[η∇^2 L]^2 ≪ n (usually the case, as these quantities are O(1)), we have

Regstep = − η/(b−1) · ∇_θ ∑_{i=0}^{k−2} (−η∇^2 L)^i 𝔼_{z ∈ D}( ‖∇L(z) − ∇L‖^2_{S_i} ).
§.§ (Step 4) The Size of the Step Given the HessianNote that in the case in which E[ (η∇^2 L)^2] - E[ η∇^2 L]^2 ≪ n, which happens often since these quantities are usually O(1),S_i = η^2/2c [-η∇^2 L]^-i-2( [I - η∇^2 L]^k - ∑_j=0^i+1 \binom{k}{j} (-η∇^2L)^j ) = η^2/2c∑_j=i+2^k \binom{k}{j} (-η∇^2L)^j-i-2.This matrix is a function of the Hessian; on the eigenspace of an eigenvalue λ of the Hessian, S_i approximately takes the following values:S_i ≈ η^2/2c ( \binom{k}{i+2} - \binom{k}{i+3} ηλ ) when |cλ| ≪ 1; S_i ≈ η^2/2c ( \binom{k}{i+1}(ηλ)^-1 - \binom{k}{i}(ηλ)^-2 + … ) when cλ≫ 0; S_i ≈ η^2/2c (-ηλ)^-i-2 exp(-cλ) when cλ≪ 0.This concludes the proof of <ref>. § A DEEPER LOOK INTO S_0 OF <REF>Our goal in the rest of this section is to describe the nature of this regularizer and in particular the role of the matrices S_i, i ≤ k-2, in driving qualitatively different behaviors between SGD with and without replacement.Recall from <ref> that c:= η k and H := c^2/2n( E[(∇^2 L)^2] - (∇^2 L)^2 ).In the setting of <ref> we have that the S_i approximately share eigenspaces with ∇^2 L. More precisely,S_i= η^2/2c∑_j=i+2^k \binom{k}{j}[-η∇^2 L]^j-i-2 = η^2/2c[-η∇^2 L]^-i-2([I-η∇^2 L]^k - ∑_j=0^i+1 \binom{k}{j}[-η∇^2 L]^j).Thus, on the eigenspace of the eigenvalue λ of the Hessian, we have thatS_0 ∼ c/4 H^-1/2√(π) erf(√(H)) ( 2+ 2H^-1/2(e^-H-1) - cλ) if cλ is small; (2λ)^-1 if cλ ≫ 0; (1/2)λ^-2 exp(-cλ) (e^-H - 4H/(cλ)) if cλ ≪ 0.In particular, when cλ is small we have, up to O(c^2λ^2), thatS_0 ∼ c/4 · ( 1 - (1/3)cλ - (1/6)H + (1/10)cλ H + (1/30)H^2 - (1/42)cλ H^2 ) if H is small; H^-1exp(-H) ( 2H^-1 - cλ) if H is big;and for H ≪ 1 we have, up to O(H), approximatelycS_i ∼ (1/2)( \binom{k}{i+2} - \binom{k}{i+3}ηλ ) if cλ is small; λ^-1 if cλ ≫ 0; λ^-2exp(-cλ) if cλ ≪ 0.In addition, 𝒮_i in <ref> is approximately the following𝒮_i=[I-η∇^2L]^2†H S_i. Moreover, the non-diagonal part of S_i only depends on H; in particular, if H≪ 1 then S_i is essentially diagonal. Precisely, if λ_1, λ_2, …, λ_p are the eigenvalues of ∇^2 L = diag(λ_1, λ_2, …, λ_p), we have thatS_i=diag ( f_i(λ_1), f_i(λ_2), …, f_i(λ_p) )+O(H)with f_i(λ)= (1/2)λ^-2([1 - ηλ]^k - ∑_j=0^i+1 \binom{k}{j} (- ηλ)^j ) ·(1 + O(1/k) ),or analogouslyf_i(λ)= (η^2/2)∑_j=i+2^k \binom{k}{j} (- ηλ)^j-2·(1 + O(1/k) ).S_0 and the small Hessian regime. Note that S_0 of <ref> is∑_i=2^k(k-i+1) E_[z_2, …, z_i-1 all different][ ∏_h=2^i-1 [I - η∇_θ^2 L(z_h)] ].Again using <ref> we can write the expectation as the non-commutative version of (we write here the commutative version for clarity) ∏_j=1^i-1[1 + j/(n-j)] that multiplies[I-η∇^2 L]^i-2 - n^-1 \binom{i-2}{2}(E[(I - η∇^2 L)^2] - E[I - η∇^2 L]^2)[I-η∇^2 L]^i-4 + …So calling X:= E[(∇^2 L)^2] - E[∇^2 L]^2 and Y:= I-η∇^2 L, we haveY^i-2- η^2 n^-1 \binom{i-2}{2} XY^i-4 + η^4 n^-2 (1/2) \binom{i-2}{2}\binom{i-4}{2} X^2Y^i-6 + …Precisely, in the non-commutative case it isY^i-2- η^2 n^-1( X Y^i-4 + Y X Y^i-5 + Y^2 X Y^i-6 + … ) + η^4 n^-2 ( analogous terms with two insertions of X ) + …, and so on. We can anyway count how many terms of every kind we have; precisely, in the commutative caseS_0= η/2k∑_i=2^k (k-i+1) ( Y^i-2- η^2 n^-1 \binom{i-2}{2} XY^i-4 + …).When H ≪ 1, we can consider Ỹ := ∑_i=2^k (k-i+1) Y^i-2, which is∑_i=2^k (k-i+1) Y^i-2= (Y^k - k Y + (k-1)I)/(Y-I)^2 = ( [I -η∇^2 L ]^k + η k ∇^2 L - I )/( η^2 [∇^2 L]^2 ).On the eigenspace of an eigenvalue λ of the Hessian, for η k = c, this is about( exp(-cλ) + cλ - 1 )/( η^2 λ^2 ).And if we expand the exponential in Taylor series, we obtain, agreeing with <cit.>, that( 1 - cλ + c^2λ^2/2 - c^3λ^3/6 + c^4λ^4/24 + cλ - 1 )/λ^2 = c^2/2 - c^3λ/6 + c^4λ^2/24.In particular,η^2 Ỹ ∼ λ^-2 exp(-cλ) if cλ ≪ 0; c^2/2 - c^3λ/6 if cλ∼ 0; cλ^-1 if cλ ≫ 0.When cλ ≪ 1, this holds whatever the size of X/n.
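As a sanity check of the closed form for Ỹ (our own verification for the small case k = 3, not part of the original argument): the left-hand side is∑_i=2^3 (3-i+1) Y^i-2 = 2I + Y,while the right-hand side is (Y^3 - 3Y + 2I)(Y-I)^-2 = (Y-I)^2(Y+2I)(Y-I)^-2 = Y + 2I, since Y^3 - 3Y + 2I factors as (Y-I)^2(Y+2I).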
Now regarding the terms that also contain X: in the commutative case the sum isη^4 X (1/n) ∑_i=4^k (k-i+1) \binom{i-2}{2} Y^i-4.If ηλ ≪ 1 this happens to be(η^4/n) \binom{k}{4} X,and otherwise(1/n) X λ^-4( -3 + c λ + (1/2) (1 - ηλ)^k-2 [c^2 λ^2 - 5 η c λ^2 + 4 c λ + 6 η^2 λ^2 - 12 ηλ + 6] ).This, when ηλ ≪ 1, is(1/n) λ^-4 X ( -3 + cλ + (1/2)exp(-cλ) [(cλ+ 2)^2 + 2 ] ) = (1/n) X ( c^4/24 - c^5λ/40 + c^6λ^2/120 + O(c^2iλ^i-2 / i! ) ).We can compute the size of this term as for Ỹ; precisely the size is(1/n)(E[(∇^2L)^2]-E[∇^2 L]^2) × [ > exp(-cλ) if λ ≪ 0; c^4/4! - 3 c^5λ/5! + 5 c^6λ^2/6! if λ∼ 0; < cλ^-3 if λ ≫ 0 ].More generally, the terms that multiply the X^i are:η^2∑_i=2^k (k-i+1)(1-ηλ)^i-2 = c^2/2 - c^3λ/6 + O(λ^2),which is equal to λ^-2( exp(-cλ) + cλ - 1 ), for i=0. Recall H := (c^2/2n) X; then the sum above isη^4∑_i=4^k (k-i+1) ((i-2)!/((i-4)! · 2)) (1-ηλ)^i-4 X/n = (c^2/12)H - (c^3λ/20)H + O(λ^2H),which is equal toλ^-4 H ( -3 + c λ + (1/2)exp(-cλ) [c^2 λ^2 + 4 c λ + 6] ),for i = 1; thenη^6∑_i=6^k (k-i+1) ((i-2)!/((i-6)! · 8)) (1-ηλ)^i-6 X^2/n^2 = (c^2/60)H^2 - (c^3λ/84)H^2 + O(λ^2H^2),which is equal toλ^-6 H^2 (-15 + 3cλ + exp(-cλ) (15 + 12 c λ + (9/2) c^2 λ^2 + c^3 λ^3 + c^4λ^4/8) ),for i = 2; thenη^8∑_i=8^k (k-i+1) ((i-2)!/((i-8)! · 48)) (1-ηλ)^i-8 X^3/n^3 = (c^2/336)H^3 - (c^3λ/432) H^3 + O(λ^2H^3),which is equal toλ^-8 H^3 (-105 + 15cλ + exp(-cλ) (105 + 90c λ + (75/2) c^2 λ^2 + … + c^5λ^5/4 + c^6λ^6/48 ) ),for i = 3, and so on. Thus we have that for λ ≪ 1 or η ≪ 1, the zeroth order in λ of S_0 is:c^2/2( 1 - (1/6)H + (1/30)H^2 - (1/168)H^3 + …).Here, the denominators are (m+1)! · (2m+1), which is OEIS A175925. So we havec^2/2∑_m = 0^k/2 - 1 1/((m+1)! (2m +1)) (-H)^m ∼ c^2/2 H^-1( √(π H) erf (√(H)) + e^-H -1 ),which for H →∞ goes to 0 as (c^2/2)(H^-2 + O(H^-3) )e^-H. The first order in λ instead is -c λ multiplyingc^2/2( 1/3 - (1/10)H + (1/42)H^2 - (1/216)H^3 + …),precisely -c λ multiplying terms whose denominators are m! · (2m+1), which is OEIS A007680; precisely-c λ · c^2/2∑_m = 0^k/2 - 1 1/(m! (2m +1)) (-H)^m ∼ - (c^3λ/4)√(π) H^-1/2 erf( √(H)),which for H →∞ goes to 0 as -(c^3λ/4)(H^-1 + O(H^-2) )e^-H. Thus we can conclude that for cλ ≪ 1, recalling H := (c^2/2n)X, we haveS = c/4 · ( 1 - (1/3)cλ - (1/6)H + (1/10)cλ H + (1/30)H^2 - (1/42)cλ H^2 ) if H is small; H^-1exp(-H) ( 2H^-1 - cλ) if H is big;and in generalS = c/4 H^-1/2√(π) erf(√(H)) ( 2+ 2H^-1/2(e^-H-1) - cλ),with an error of O(c^2λ^2).If cλ≠ 0. We sum all the elements in <ref> multiplied by X alone, then all those multiplied by X^2 alone, then all those multiplied by all the other possible expectations of products, and so on. All these sums are sums of powers of λ; precisely, all of them, as in <ref>, are a negative power of λ multiplied by some polynomial terms in cλ, usually a term that is O(1), one that is O(cλ), and another that is O(max{c^iλ^i, 1}) exp(-cλ). Precisely, all these terms for i ≥ 1 are [-X/n]^i multiplied byλ^-2i-2(-a_i + b_i · cλ + exp(-cλ) (a_i + (a_i-b_i) c λ + O(c^2λ^2 + c^2i-2λ^2i-2) + c^2iλ^2i/(2^i i!))),where a_i = (2i+1)!! and b_i = (2i-1)!!.If λ ≪ 0. This and the equation in <ref> imply that for λ ≪ 0 the exponential in λ governs the terms. Moreover, since λ^2 ≫ 1, the biggest terms are the ones divided by the smaller powers of λ. Recall that H:= c^2X/2n, and note that generally X/n = O(1/n). So in this case 2H/(c^2λ^2) = X/(λ^2 n) = O(λ^2 n)^-1≪ 1. We obtain that all the terms summing against the exponential have size smaller than the last one.
So in particular the size of S_0 in that direction is aboutλ^-2exp(-cλ) ( 1+∑_i=1 a_i[-H/(c^2λ^2)]^i + c λ (a_i-b_i)[-H/(c^2λ^2)]^i + … + (-H)^i/i! ),that isλ^-2exp(-cλ) (e^-H - 6(cλ)^-2H - 4(cλ)^-1 H + O(H^2/λ) ),and can be rewrittenλ^-2exp(-cλ) (e^-H - 4(cλ)^-1 H + O(H^2/(cλ)) ). If λ ≫ 0. Analogously, in this case the term that governs the quantity isλ^-2i-2 (- a_i + c λ b_i) [-X/n]^i,as the exponential eats all the other terms. Thus we have the following seriesλ^-2∑_i=0^k/2-1 ( -a_i[-H/(c^2λ^2)]^i + c λ b_i[-H/(c^2λ^2)]^i ).And unlike the previous one, the biggest terms in the case in which λ ≫ 0 remain c/λ + O(λ^-2),where the error is always negative and so in particular it is < c/λ. This concludes the proof. § THE WHOLE REGULARIZER: <REF> Assume H ≪ 1. We proved that, in expectation, every epoch of SGD without replacement is GD plus one additional step on the regularizer in <ref>. Let us now rewrite everything in an orthonormal basis of eigenvectors of the Hessian. The update on θ_i, eigenvector of the i-th eigenvalue λ_i of the Hessian ∇^2 L, is- η/(b-1) ∑_j = 0^k-2 (-ηλ_i )^j ∇_θ_i E_z ∈ D‖∇ L (z) - ∇ L‖^2_S_j,or analogously, up to terms in the approximation error,- η/(b-1) ∑_j = 0^k-2 (-ηλ_i)^j trace( S_j · d/dθ_i Cov_z ∈ D (∇ L (z) ) ).Next, note that we proved S_j is approximately simultaneously diagonalizable with ∇^2 L, so in this basis it is diagonal, and the i-th element of its diagonal isη^2/2c∑_l = j+2^k \binom{k}{l} (-ηλ_i)^l-j-2,which approximately corresponds toS_j[i,i] = η^2/2c ( \binom{k}{j+2} - \binom{k}{j+3} ηλ_i ) if cλ_i ≪ 1; S_j[i,i] = η^2/2c ( [1-ηλ_i]^k/(-ηλ_i)^j+2 + \binom{k}{j+1} (ηλ_i)^-1 ) if cλ_i ≫ 1.By expanding the trace as a sum over the eigenvalues λ_1, λ_2, …, λ_l, … of ∇^2 L, along the direction θ_i we can rewrite our steps as-η/(b-1) ∑_l∑_j = 0^k-2 (-ηλ_i )^j S_j[l,l] · d/dθ_i Cov_z ∈ D (∇ L (z) )_l,l = -η/(b-1) · η^2/2c∑_l∑_j = 0^k-2 (-ηλ_i )^j∑_h = j+2^k \binom{k}{h} (-ηλ_l )^h-j-2· d/dθ_i Cov_z ∈ D (∇ L (z) )_l,l.This concludes the proof of <ref>, calling R_i,j all the terms that multiply the derivative of the covariance.
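To make the small-cλ_i approximation of S_j[i,i] above concrete (our own check of the leading terms, not part of the original argument): keeping only the summands l = j+2 and l = j+3,η^2/2c∑_l=j+2^k \binom{k}{l}(-ηλ_i)^l-j-2 = η^2/2c ( \binom{k}{j+2} - \binom{k}{j+3} ηλ_i + O(η^2λ_i^2) ),which is exactly the first case displayed above.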
Now we compute the size of the elements R_i,j. The term with l ≠ i and cλ_l ≪ 1 is:-η/(b-1) · η^2/2c∑_l∑_j = 0^k-2 \binom{k}{j+2} (-ηλ_i )^j d/dθ_i Cov_z ∈ D (∇ L (z) )_l,l.Thus the sum of these terms for l≠ i coincides with-η/(b-1) · c/2 [c∇^2L]^2†( [I-η∇^2 L]^k + c ∇^2 L - I ) d/dθ_i trace( Cov_z ∈ D (∇ L (z) ) ).When l≠ i and cλ_l ≫ 1 we approximately obtain-η/(b-1) · η^2/2c∑_l∑_j = 0^k-2 \binom{k}{j+1} (-ηλ_i )^j (ηλ_l)^-1 d/dθ_i Cov_z ∈ D (∇ L (z) )_l,l.Thus the sum of these terms for l≠ i coincides with-η/(b-1) · c[c∇^2L]^†( [I-η∇^2 L]^k - I - [-η∇^2 L]^k ) trace([2c∇^2 L]^† d/dθ_i Cov_z ∈ D (∇ L (z) )).Instead, the term l = i is the following-η/(b-1) · η^2/2c∑_j = 0^k-2 (-ηλ_i )^j∑_h=j+2^k \binom{k}{h} (-ηλ_i)^h-j-2 d/dθ_i Cov_z ∈ D (∇ L (z) )_i,i = -η/(b-1) · η^2/2c∑_j=2^k (k-j+1) \binom{k}{j} (-ηλ_i)^j-2 d/dθ_i Cov_z ∈ D (∇ L (z) )_i,i = -η/(b-1) [d/dθ_i Cov_z ∈ D (∇ L (z) )_i,i] · [ c/4 if c λ_i ≪ 1; (k/(2cλ_i^2) + 1/(2λ_i)) [1-ηλ_i]^k if c λ_i = O(η^-1) ].We can thus conclude that the step on the regularizer along the direction θ_i is -η/(b-1) times the derivative along θ_i of the quantity∑_j=0^k-2 (-ηλ_i)^j η^2/2c∑_λ_l eigenvalue of ∇^2 L∑_h=j+2^k \binom{k}{h}(-ηλ_l)^h-j-2· Cov_z ∈ D(∇ L(z))_l,l,i.e. the trace of S_j · Cov_z ∈ D(∇ L(z)). This can be rewritten asReg_i= c(cλ_i)^-2( [1-ηλ_i]^k + c λ_i - 1 ) ∑_λ_l ≠λ_i small eigenvalue of ∇^2 L Cov_z ∈ D (∇ L (z) )_l,l + c(cλ_i)^-1( [1-ηλ_i]^k - 1 - [-ηλ_i]^k ) ∑_λ_l ≠λ_i big eigenvalue of ∇^2 L (cλ_l)^-1 Cov_z ∈ D (∇ L (z) )_l,l + c(cλ_i)^-1( (k-1) [1-ηλ_i]^k - cλ_i[1-ηλ_i]^k + ckλ_i - (k-1)) Cov_z ∈ D (∇ L (z) )_i,i.Thus on the small eigenvalues, |cλ| ≪ 1, we approximately have the following regularizerReg_i= c/4∑_λ_l small eigenvalue of ∇^2 L Cov_z ∈ D (∇ L (z) )_l,l + ∑_λ_l big eigenvalue of ∇^2 L (cλ_l)^-1 Cov_z ∈ D (∇ L (z) )_l,l.On the big ones, cλ≫ 1, we have the following regularizerReg_i= λ^-1∑_λ_l small eigenvalue of ∇^2 L ∪λ_i Cov_z ∈ D (∇ L (z) )_l,l + λ^-1 (1 + (-ηλ)^k) ∑_λ_l ≠λ big eigenvalue of ∇^2 L (cλ_l)^-1 Cov_z ∈ D (∇ L (z) )_l,l.§ SIZE OF THE ERRORWe show here some heuristics on how much smaller the size of the error is compared to the size of the regularizer. We also discuss its ingredients. The size of the error. Assume that every eigenvalue λ of ∇^2 L lies in (-α/c, 2/η + α/c) for a constant α≥ 0. From the proof of <ref>, precisely <ref>, we see that the main part of the error is smaller thanc^3/12b ‖∇^3 L‖ ‖Cov_z ∈ D(∇ L(z))‖ O( 1 - η/(b-1))along the near-0 eigenvalues of the Hessian ∇^2 L, and it is upper bounded approximately byη c^2/4b ‖[∇^2 L]‖ ‖∇^3 L‖ ‖Cov_z ∈ D(∇ L(z))‖ O( 1 - η/(b-1)) + O(η/(b-1)) (GD error)on the other eigendirections. Here the constant in the big O is upper bounded by exp(α). Note that this is very small whenc ‖∇ L‖ ≪ 1, or (c/12)‖∇^3 L‖ ≪ 1, or simply when ∇ L does not align well with the third derivative. In all the other cases, we have to add these parts to the regularizer and look deeper into the Taylor expansion for a smaller error term. Also, this is a coarse analysis: in reality some of these O(η∇ L)^2 terms are part of the derivative of the regularizer and can be removed. The ingredients. The ingredients of these quantities are the following, all multiplied with each other:η∇^3 L(·), I - η∇^2 L(·), η∇ L(·),and in particular they appear in the formsV_3:=( E_z ∈ D [∇^3L(z)^2] - ∇^3L^2 ), H:= ( E_z ∈ D [∇^2L(z)^2] - ∇^2L^2 ).Third derivative: ∇^3 L = o(1/η). We start our analysis with the size of the quantity η·∇^3 L (·)that appears in the error. Generally, ∇^3 L is O(1), or anyway much smaller than the Hessian. Even in the case of the edge of stability, the third derivative gets no bigger than η^-c, c<1. It has been observed empirically by <cit.> that c may be equal to 3/4.
We however conjecture that that exponent was due to the fact that they used MSE on a neural network with 3 layers, and we conjecture that generally, for MSE, the size of the third derivative never crossesO(η^-(2 depth - 3)/(2 depth - 2)),attaining it at the edge of stability. This allows us to conclude that in the case of bigger eigenvalues of the Hessian the error is almost always smaller than the regularizer (as we have (η/b)‖∇^3 L‖ < 1), while in the case of 0 eigenvalues we have ∇^3 L = O(1), so whether the result is applicable depends only on the size of c/12 and the size of the gradients. The usual size of the Hessian. Both in the regularizer and in the error appear powers of [I - η∇^2 L]. In the regularizer the biggest exponent is k-2; in the error it is 2k-2. Most of the eigenvalues of the Hessian are usually 0 or very small positive. In many situations, a few eigenvalues are in the range 1/η or a bit more; this induces some form of oscillation around the manifold of minima. All the eigenvalues λ of the mini-batch Hessians in the range between 0 and 2/η cause no issues. Indeed, in these cases the big products above are just shrinking, as 1-ηλ∈ [-1,1] in that direction. In the case in which |1-ηλ| = 1 + α/c, with α > 1, the bounds still hold, but multiplied by a constant smaller than (1+α/k)^2k < exp(2α) for the error and smaller than (1+α/k)^k < exp(α) for the regularizer. Thus, e.g., if every eigenvalue λ of ∇^2 L is in λ∈(-α/c, 2/η+α/c) for a reasonably small but O(1) constant α, we do not change regime: if the error was smaller than the regularizer, it remains so.Where 1/n comes from. In all our errors appear quantities that are 1/n or η/b smaller; those are due to <ref>. Precisely, it says that the expectation of the product is essentially the product of the expectations plus an O(1/n) part. When we take together i < k factors of the form [I-η∇^2 L(·)], a factor of η comes out, so in those cases we have O(η k/n) = O(η/b). § THE ERROR Δ - α Let us call E_k^SGD the error E_k^SGD = Δ_k^SGD - α_k^SGD. Then, up to lower order terms in η∇ L (so up to +O([η∇ L]^3)), the error satisfiesE_k = Δ_k^SGD - α_k^SGD= Δ_k-1^SGD-η∇^2 L( B_k) [θ_1^GD, (k-1)η - θ + Δ_k-1^SGD] + higher order Taylor rest- [I - η∇^2 L( B_k) ] α_k-1^SGD+ η^2 ∇^2 L( B_k) [∑_i=1^k-1∇ L(B_i) ]=[I - η∇^2 L(B_k)] E_k-1^SGD + higher order Taylor rest.So we find the formula for it. Keeping only the terms of order 2 in the ∇ L(·)'s we obtainE_k^SGD =[I - η∇^2 L(B_k)] E_k-1^SGD + η/2∇^3 L (B_k) ⊗ [θ_k-1^SGD-θ]^2 =[I - η∇^2 L(B_k)] E_k-1^SGD + η/2∇^3 L (B_k) ⊗ [θ_1^GD, (k-1)η-θ+Δ_k-1^SGD]^2 =[I - η∇^2 L(B_k)] E_k-1^SGD + η/2∇^3 L (B_k) ⊗ [θ_1^GD, (k-1)η - θ + α_k-1^SGD + E_k-1^SGD ]^2.This, again up to second order in η∇ L, isE_k^SGD =[I - η∇^2 L(B_k)] E_k-1^SGD + η/2∇^3 L (B_k) ⊗ [θ_1^GD, (k-1)η - θ + α_k-1^SGD]^2.So, by inductive hypothesis, the size of E_k^SGD is the size of E_k-1^SGD plus terms of size O(η ‖∇^3 L‖ ‖η k ∇ L‖^2), that is O(η^3 k^3 ‖∇ L‖^2). Moreover, we obtain that up to higher order terms E_k^SGD=∑_i=2^k[ ∏_j=i+1^k [I - η∇^2 L(B_j)] ] η/2∇^3 L (B_i) ⊗ [ θ_1^GD, (i-1)η - θ + α_i-1^SGD]^2,and more precisely this is 1/2 multiplied by-η^3 ∑_i=1^k[ ∏_j=i+1^k [I - η∇^2 L(B_j)] ] ∇^3 L (B_i) ⊗[∑_j=1^i-1[ ∏_h=j+1^i-1 [I - η∇^2 L(B_h) ] ] ∇ L(B_j) ]^2- η∑_i=1^k[ ∏_j=i+1^k [I - η∇^2 L(B_j)] ] ∇^3 L (B_i) ⊗( θ^GD,(i-1)η_1 )^2 + 2η∑_i=1^k[ ∏_j=i+1^k [I - η∇^2 L(B_j)] ] ∇^3 L (B_i) ⊗[ θ^GD,(i-1)η_1 ] ⊗α_i-1^SGD.Now define, analogously to Δ and α, the error E_k^GD.
Concluding, we have thatθ_k^SGD-θ_k^GD = Δ_k^SGD-Δ_k^GD = α_k^SGD-α_k^GD + E_k^SGD-E_k^GD,and ℛ := α_k^SGD-α_k^GD is exactly the regularizer appearing in the statement, while ℰ := E_k^SGD-E_k^GD is the error. We will see that, once we compute the expectations, more terms cancel out in the error and in the regularizer, as for <ref>.Note again that this holds for every version of SGD. In particular, there is an analogous expansion for every first-order method.The second and third terms of <ref>. When computing ℰ := E_k^SGD-E_k^GD, the parts relative to the second and third terms of <ref> are easier to handle. Precisely, after applying <ref>, the subtraction of the term in the second line becomes a sum of orderη k/(n - k) · O( GD error ) = η/(b-1) · O( GD error ),so even smaller than it would otherwise be. Similarly, the third term is of the same size, by removing and adding2η∑_i=1^k [I - η∇^2 L]^k-i-1∇^3 L ⊗[ θ^GD,(i-1)η_1 ] ⊗α_i-1^SGD.Then the difference between the SGD part and this is O(1/(n-k)) times the GD error, and the same holds for this minus the GD error. So this part is about5 η k/(n-k) = 5η/(b-1)times smaller than the GD error. The main part of the error comes from the difference of the terms in the first line. The first line of <ref>. Next, note that part of this falls into the regularizer; precisely, all the summands of the formη^3 E[ ∇ L(B_l)^⊤∇^3 L(B_i) ∏_h=j^i[I- η∇^2 L (B_h)] ∇ L(B_j) ]. The remaining terms can be treated as the regularizer. In particular, we rewrite it as - η^3 ∑_i=1^k[ ∏_j=i+1^k [I - η∇^2 L(B_j)] ] ∇^3 L (B_i) ⊗[∑_j=1^i-1[ ∏_h=j+1^i-1 [I - η∇^2 L(B_h) ] ] ∇ L(B_j) ]^2 (SGD part) -η^3 ∑_i=1^k[I - η∇^2 L]^k-i-1∇^3 L ⊗[∑_j=1^i-1[ ∏_h=j+1^i-1 [I - η∇^2 L(B_h) ] ] ∇ L(B_j) ]^2+η^3 ∑_i=1^k[I - η∇^2 L]^k-i-1∇^3 L ⊗[∑_j=1^i-1[ ∏_h=j+1^i-1 [I - η∇^2 L(B_h) ] ] ∇ L(B_j) ]^2+ η^3 ∑_i=1^k[I - η∇^2 L]^k-i-1∇^3 L ⊗[∑_j=1^i-1 [I - η∇^2 L ]^i-j-2∇ L ]^2 (GD part).The expectation of the difference between the third and fourth lines, using <ref>, when ‖I - η∇^2 L(B_i)‖ ≤ 1+ϵ, is smaller in size than or equal toη^3 ∑_i=1^k[I - η∇^2 L]^k-i-1∇^3 L ⊗( ∑_j=0^i-1 O ( (n(j-1) - (j-1)^2)/((n-1)b) Cov_z ∈ D(∇ L) ) ),where the constant of the big O is O(ϵ i) to first order in ϵ and i, and it is exponentially small in the directions in which |I-η∇^2 L| < 1. This sums up to something smaller than η^3∑_i=1^k[I - η∇^2 L]^k-i-1∇^3 L O ( (n(j-1) - (j-1)^2)/((n-1)b) Cov_z ∈ D(∇ L(z))) < 1/2b O( ‖[∇^2 L]^3†max{2[I - η∇^2 L]^k - c^2 (∇^2L)^2 + 2c∇^2L -2 }‖) ‖∇^3 L‖ ‖Cov_z ∈ D(∇ L(z))‖.Analogously, this is upper-bounded approximately byc^3/6b ‖∇^3 L‖ ‖Cov_z ∈ D(∇ L(z))‖along the 0-eigenvalues of the Hessian ∇^2 L, and approximately byη c^2/2b ‖[∇^2 L]‖ ‖∇^3 L‖ ‖Cov_z ∈ D(∇ L(z))‖along the other directions. The expectation of the difference between the terms in the first and second lines is instead, by <ref>, of order approximately 1/n of it; indeed we can upper bound it with(η/(b-1)) ( GD part + the part above). Summing all up. Summing everything, we obtain that when -ϵ < η∇^2 L < 2+ϵ, the part of the error coming from the first line of <ref> is smaller thanc^3/12b ‖∇^3 L‖ ‖Cov_z ∈ D(∇ L(z))‖ O( 1 - η/(b-1)) + O(η/(b-1)) (GD error)along the 0-eigenvalues of the Hessian ∇^2 L, and is upper bounded approximately byη c^2/4b ‖[∇^2 L]‖ ‖∇^3 L‖ ‖Cov_z ∈ D(∇ L(z))‖ O( 1 - η/(b-1)) + O(η/(b-1)) (GD error)on the other eigendirections. Note that this is even smaller when the noise covariance does not properly align with the eigenvectors of the highest eigenvalues of the third derivative, when c ‖∇ L‖ ≪ 1, or when (c/12)‖∇^3 L‖ ≪ 1.
In all the other cases, we have to add these parts to the regularizer and look deeper into the Taylor expansion for a smaller error term.§ PROOF OF <REF>[Regularizing the trace of the Hessian, <ref>] Assume all the training data are correctly classified. ThenFisher= E_z ∈ D [ ∇ L ∇ L^⊤]= ∇^2 L=Hessian.In particular, the local minima of trace(S ·Fisher) coincide with the local minima of trace(S ·Hessian).Note that L(θ, (x,y))=- log( p(y | x,θ) ). Indeed, note that the approximate Fisher information matrix we work with isFisher= E_z ∈ D[ ∇ L ∇ L^⊤]= E_z ∈ D[ ∇log( p(y | x,θ) ) ∇log( p(y | x,θ))^⊤].In the final part of training, when the labels predicted by our model on the training points coincide with those in the training dataset, we have that E_(x,y) ∈ D[ ∇ L ∇ L^⊤]= E_x ∈ D[ E_y ∼ p(· | x,θ)[∇log( p(y | x,θ) ) ∇log( p(y | x,θ))^⊤] ],as y_i = y∼ p(· | x_i,θ) for all z = (x_i,y_i) ∈ D. This is equal to the Hessian; indeedHessian= E_z ∈ D[ ∇^2 L]= E_x ∈ D[ E_y ∼ p(· | x,θ)[ -∇^2 log( p(y | x,θ)) ] ].Thus, noticing that -∇^2 log p = ∇log p ∇log p^⊤ - ∇^2 p / p, and thatE_y ∼ p(· | x,θ) [∇^2 p(y | x,θ) / p(y | x,θ)] = ∑_y ∇^2_θ p(y | x,θ) = ∇^2_θ[ ∑_y p(y | x,θ) ] = d^2/d θ^2 1 = 0,we conclude thatFisher=Hessian.This proves <ref>.§ ON THE GRADIENTS OF NEURAL NETWORKS For a neural network, decreasing ∇_θ f(θ,x) essentially corresponds to decreasing ∇_x f(θ,x). Precisely, let σ be an activation function and let W_i be the weight matrix of the i-th layer; then, enlarging the weight matrices to account for the biases,f(θ, x)=W_L ·σ∘ W_L-1∘…·σ∘ W_2 ·σ∘ W_1 · x.Its derivative in the input isdf(θ, x)/d x =W_L ·σ' · W_L-1·…·σ' · W_2·σ' · W_1,while its derivative in a parameter ϑ, entry of W_i, is the product of the derivatives of the activations and the weight matrices of the following layers, multiplied by the preactivation of the i-th layer:df(θ, x)/dϑ = (W_L ·σ' · W_L-1·…·σ' · W_i+1·σ' ) ·σ∘ W_i-1·…σ∘ W_2 ·σ∘ W_1 · x.Thus making the norm of the parameters' gradient smaller corresponds to reducing the norm of the sum of all the preactivations of the neurons multiplied by the product of the subsequent weights, and hence to reducing the norm of the inputs' gradient. § SADDLES Assume θ is a stationary point for L. We analyze here the dynamics in the direction of a unitary eigenvector v of the eigenvalue λ of ∇^2 L. Define u:= 1/n∑_z ∈ D⟨ v_λ, ∇^2 L(z) ∇ L(z) ⟩≠ 0.First note that the update due to k steps of GD from θ, in our regime, is θ_new - θ = [∇^2 L]^-1 (exp(-c∇^2 L) - I)∇ L(θ) =: β∇ L.Indeed, at every step i < k we have θ_i+1-θ =-η∇ L + [I- η∇^2 L] (θ_i-θ)=- η∑_j=0^i [I-η∇^2 L]^i-j∇ L.Taking the limit, we obtain the definition of β.
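To see where the closed form for β comes from (our own unrolling of the geometric sum, writing A := I - η∇^2 L): after k steps,-η∑_j=0^k-1 A^j ∇ L = -η (I-A)^-1(I-A^k)∇ L = [∇^2 L]^-1( [I-η∇^2 L]^k - I )∇ L,since I - A = η∇^2 L; and [I-η∇^2 L]^k → exp(-η k ∇^2 L) = exp(-c∇^2 L) in our regime, which gives β = [∇^2 L]^-1(exp(-c∇^2 L) - I).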
We see here how far we go after m epochs in the direction of the eigenspace of λ, assuming c‖∇ L‖ ≪ 1. Denote by β:= λ^-1 (exp(-cλ) - 1) the coefficient of the update due to GD, which is +β∇ L, and denote the step on the regularizer by α := (η/(b-1)) S u.In general, we have that after m steps SGD in expectation takes the parameter vector away by the quantity-(λβ)^-1((1 + λβ)^m - 1) α (SGD regularizer effect) + λ^-1((1 + λβ)^m - 1)∇ L (GD effect).In the escaping directions, for cλ = η k λ < 0, the regularizer part is(η/b) (exp(-cλ)-1)^-1(exp(-c m λ) - 1) λ^-2 (exp(-cλ)+cλ-1) u.Then, even forgetting about the ∇ L part, so even if we start exactly from a flat area,θ_m - θ_m-1 =-(λβ)^-1λβ (1 + λβ)^m-1α,that is,θ_m - θ_m-1 =- (1 + λβ)^m-1α.Thus we haveθ_m - θ_m-1 = (η/b) exp(-c (m-1) λ) λ^-2 (exp(-cλ)+cλ-1) u.Thus, since λ^-2 (exp(-cλ)+cλ-1) > c/2 - c^2λ/6 for all cλ < 0, we have that‖θ_m - θ_m-1‖^2 > (η^2/b^2) exp(- 2 c (m-1) λ) ( c/2 - c^2 λ/6)^2 u^2.Thus(3λ/2) ∑_i=1^m ‖θ_i - θ_i-1‖^2 < (3 η^2 c^2 λ u^2/(8 b^2)) ( 1 - c λ/3)^2 ∑_i=0^m-1exp(- 2 c i λ).So we have that after m steps we reduced the loss by at least(3 η^2 c^2 λ u^2/(8 b^2)) ( 1 - c λ/3)^2 (exp(- 2 c m λ) - 1)/(exp(- 2 c λ) - 1).Then the loss has decreased by O(1) after a number of steps (epochs)# epochs > (1/(2|cλ|)) log( b^2/(η^2 c^4 λ u^2) ) + 2,that is, after# epochs > (log(η) + log(u) + 2 log(c) - log(b))/(cλ) + 2.So after this number of epochs, the saddle has been escaped thanks only to the effect of SGD.Even in the case in which the saddle is of higher order, or the negative eigenvalue is so small that |cλ| ≪ 1, the effect of SGD still pushes the iterate away: indeed, in this case θ_m - θ = m α = m·(η/(b-1))·(c/2)·u,so after m = b/η epochs ‖θ_m - θ‖ = O(1), and if the first non-zero derivative is the i-th, the loss decreases like(- α)^i∑_h=0^i (∇^i L/h!) (1 + ∑_l=2^m (l-1)^i-h l^h);in particular, this is about the quantity∇^i L (- α)^i [∑_h=0^i 1/h!] ∫_0^m x^i dx = ∇^i L (- α)^i e m^i+1/(i+1).Thus the loss is smaller than O(1) if m is such that (taking logarithms)(i+1) log(m) + i log(|α|) ∼ 0,so ifm ∼ 2b/(η c u),no matter the order of the saddle. | http://arxiv.org/abs/2312.16143v1 | {
"authors": [
"Pierfrancesco Beneventano"
],
"categories": [
"cs.LG",
"math.OC",
"stat.ML"
],
"primary_category": "cs.LG",
"published": "20231226180648",
"title": "On the Trajectories of SGD Without Replacement"
} |
The objective of Active Learning is to strategically label a subset of the dataset to maximize performance within a predetermined labeling budget. In this study, we harness features acquired through self-supervised learning. We introduce a straightforward yet potent metric, Cluster Distance Difference, to identify diverse data. Subsequently, we introduce a novel framework, Balancing Active Learning (BAL), which constructs adaptive sub-pools to balance diverse and uncertain data. Our approach outperforms all established active learning methods on widely recognized benchmarks by 1.20%. Moreover, we assess the efficacy of our proposed framework under extended settings, encompassing both larger and smaller labeling budgets. Experimental results demonstrate that, when labeling 80% of the samples, the performance of the current SOTA method declines by 0.74%, whereas our proposed BAL achieves performance comparable to the full dataset. Code is available at <https://github.com/JulietLJY/BAL>. Active Learning, Computer Vision, Contrastive LearningBAL: Balancing Diversity and Novelty for Active Learning Jingyao Li, Pengguang Chen, Shaozuo Yu, Shu Liu, Member, IEEE and Jiaya Jia, Fellow, IEEE Jingyao Li and Shaozuo Yu are with the Department of Computer Science and Engineering of the Chinese University of Hong Kong (CUHK) Jingyao Li's E-mail: [email protected] Pengguang Chen, Shu Liu, and Jiaya Jia are with SmartMore. Manuscript received Feb 8th, 2023. Received ...; accepted... ====================================================================================================================================================§ INTRODUCTION Reducing the time and cost associated with data labeling has posed a longstanding challenge for deploying large-scale deep models across various computer vision tasks <cit.>. Several algorithms have been developed to address labeling costs, including semi-supervised learning <cit.>, weakly supervised learning <cit.>, few-shot learning <cit.>, and active learning <cit.>. Among these, Active Learning (AL) stands out as a facilitator, enabling the selection of the most valuable data to achieve optimal performance within a fixed labeling budget. The AL process commences with an unlabeled pool of samples. In each cycle, K additional samples, equivalent to the budget, are selected for labeling. Previous studies have primarily focused on small sample budgets, inevitably resulting in diminished model performance compared to results obtained with the complete dataset. However, our proposed active learning approach reveals that judiciously selecting a subset of data can yield comparable results to using the entire dataset. Therefore, our method proves valuable for conserving labeling budgets in scenarios where performance decline is not acceptable.The conventional approach in active learning involves selecting diverse or uncertain samples. Distribution-based methods sample data from high-density regions <cit.> that effectively represent the overall feature distribution. Conversely, uncertainty-based methods <cit.> concentrate on sampling the most uncertain data, often measured by posterior probabilities <cit.>, entropy <cit.>, among other metrics.
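As a concrete illustration of such uncertainty metrics (a minimal sketch of our own, not code from any of the cited works), the three most common scores can be computed from softmax posteriors as follows:

```python
import numpy as np

def uncertainty_scores(probs: np.ndarray):
    """probs: (N, C) softmax posteriors; higher score = more uncertain."""
    confidence = 1.0 - probs.max(axis=1)             # least-confidence score
    top2 = np.sort(probs, axis=1)[:, -2:]            # two largest posteriors
    margin = -(top2[:, 1] - top2[:, 0])              # negated best-vs-second margin
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return confidence, margin, entropy

# Example: pick the 100 most uncertain samples by entropy.
probs = np.random.dirichlet(np.ones(10), size=1000)
_, _, ent = uncertainty_scores(probs)
query = np.argsort(-ent)[:100]
```

Any of the three scores can serve as the acquisition function; they only differ in how much of the posterior distribution they take into account.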
Earlier active learning methods <cit.> traditionally relied on the primary task network to select diverse or uncertain samples. Subsequent studies <cit.>, however, employ pretext tasks as scoring networks. For acquiring diverse data, pretext-based methods create multiple sub-pools from the entire dataset based on specific properties. In each cycle, they employ uncertainty-based samplers to select from the corresponding sub-pool, resulting in labeled samples within each sub-pool. Consequently, these samples collectively represent the entire data distribution, while uncertainty-based samplers ensure diversity. The design of the indicator for generating sub-pools is crucial. Existing pretext-based methods typically leverage the output of an additional self-supervised head <cit.> or the pretext task loss <cit.>. However, we observe that neither an additional head nor pretext loss achieves comparable performance with the finely learned features of the pretext task (<ref>). These features encompass information on inter-sample relationships and significantly benefit downstream tasks. In this paper, we initially cluster features derived from the self-supervised pretext task. We then introduce a straightforward yet impactful indicator known as Cluster Distance Difference (CDD) within the feature space. CDD attains its minimum point, zero, when a feature resides on the decision boundary of the two nearest cluster centers, signifying the most challenging sample to differentiate. Our approach substitutes the decision surface solution with a distance difference, achieving computational efficiency while maintaining the desired effectiveness. CDD surpasses the previous indicator, the pretext task loss, in the current state-of-the-art method, PT4AL <cit.>, by 1.36%. Additional results are detailed in <ref>. Another critical aspect is balancing uncertainty and diversity to label a more reasonable distribution of samples. Existing methods, such as PT4AL <cit.>, employ equidistant disjoint sub-pools, but determining the size of each sub-pool poses challenges. A small sub-pool has a greater impact on the order of CDDs, limiting the choice of the uncertainty-based sampler. For instance, when the sub-pool contains only K samples, the selected data comprises the entire sub-pool, rendering the uncertainty-based sampler unused. Conversely, with a large sub-pool, selected samples exhibit more uncertainty but less diversity. In extreme cases where sub-pools encompass the entire dataset in cycles, the pretext-based approach regresses into a simplistic uncertainty-based method.In our study, we introduce the Balancing Active Learning (BAL) framework, as illustrated in <ref>, to tackle this challenge. Unlike conventional pretext-based approaches, BAL facilitates overlaps between adjacent sub-pools. It dynamically modifies the size of each sub-pool based on early performance using a small labeled pool. Consequently, the number of samples in each sub-pool can dynamically align with the demands of various labeling budgets. Further insights can be found in <ref>. In our experimental evaluation, our approach demonstrates a superior performance compared to the current SOTA across multiple datasets, achieving a notable improvement of 1.20% on widely recognized benchmarks, as detailed in <ref>. Furthermore, we assess our active learning approach under large labeling budgets in <ref>. 
Notably, when labeling 80% of samples, the performance of the existing SOTA method <cit.> diminishes by 0.74%, while our BAL approach maintains its efficacy. These results suggest that our sampled data provides comparable information to represent the entire dataset. Subsequently, we investigate the performance of BAL under small labeling budgets. Notably, BAL consistently outperforms typical methods by substantial margins, even outperforming current SOTAs in common benchmarks, such as PT4AL <cit.> and ActiveFT <cit.>. Particularly, when training on only 4% of the total dataset, BAL exhibits an impressive performance advantage, surpassing the current SOTA by 11% on CIFAR-10 and by 26% on SVHN.In summary, our main contributions are threefold:* We introduce the Cluster Distance Difference (CDD) as a simplified yet effective indicator for measuring the distance to the decision surface, providing a nuanced measure of clustering difficulty.* We propose the Balancing Active Learning (BAL) framework, which adeptly balances diversity and uncertainty, enabling the adaptive selection of optimal data across various settings.* We demonstrate BAL's superior performance over existing active learning methods on diverse datasets and labeling budgets, establishing its robustness and versatility.§ RELATED WORKS §.§ Active learning The active learning strategy typically involves sampling data with substantial uncertainty or diversity.Uncertainty (e.g., novelty, confusion) in active learning refers to a sample's ability to provide new information independently of other labeled samples. Various uncertainty methods employ different metrics to measure uncertainty, including predicted class posterior probability <cit.>, the difference between the first and second class predicted posterior probabilities <cit.>, entropy of samples <cit.>, distance from the support vector machine to the decision boundary <cit.>, inconsistencies between a committee of multiple independent models <cit.>, Bayesian frameworks <cit.>, etc.Diversity (e.g., representativeness, coverage) refers to a sample's ability to represent the distribution of the unlabeled data effectively. In existing uncertainty methods, information density approaches <cit.> weigh the informativeness of a sample by its similarity to other data in the input distribution. The nearest-neighbor method <cit.> selects samples that are most unlike the labeled instances and similar to the unlabeled samples. Coreset <cit.> is a method based on identifying a core set that models the empirical loss over the set of labeled samples and the pool of query samples. VAAL <cit.> is designed to learn a good representation using a variational autoencoder.The drawback of distribution-based approaches is that data near the classification boundary may confuse the model. In contrast, uncertainty-based approaches may sample overlapping data and find it challenging to extract a representation of the entire data distribution. In comparison, our work leverages self-supervised features to balance diverse and uncertain samples, benefiting from both aspects.§.§ Representational Learning In representational learning, self-supervised models trained on unlabeled datasets acquire features transferable to downstream tasks. Common self-supervised transformations include color removal <cit.>, resolution reduction <cit.>, partial image obscuring <cit.>, spatial order confusion of sub-images <cit.>, and random geometric transformations <cit.>, among others. 
Features learned through these tasks are subsequently applied to more complex downstream tasks <cit.>. Recent work <cit.> showcases advanced results and underscores the robustness of contrastive learning in various downstream tasks.The latest active learning approaches <cit.> incorporate self-supervised learning as a pretext task, yielding impressive results. PAL <cit.> relies on the output of an additional self-supervised head trained in parallel with the task network. However, the self-supervised head is inadequately trained, limiting its performance. PT4AL <cit.> pre-trains the self-supervised task network first and employs its losses to partition unlabeled samples for cycles. Nevertheless, pretext task losses are generally designed to update parameters rather than facilitate downstream tasks.In contrast to prior pretext-based methods, BAL (i) harnesses finely-learned self-supervised features that encompass information on inter-sample relationships, benefiting downstream tasks significantly, and (ii) dynamically adjusts the length of adaptive sub-pools to meet the requirements of different labeling budgets. In <ref>, experiments validate that BAL outperforms all previous active learning methods across multiple datasets.§ METHODSIn this section, we introduce our proposed Balancing Active Learning (BAL) framework. Its structure is shown in <ref>.We first define the notations. A typical active learning scenario consists of the unlabeled data pool X_U∈ X and labeled data X_L∈ X, where X is the dataset. The goal of an active learner is to learn an effective model F_m(·) with limited labeling budgets. In the i-th cycle, samples are selected from X^i_U and labeled by the active learner, resulting in newly labeled data X^i_K. The labeled pool is updated according to X^i_L=X^i-1_L∪ X^i_K and main task model F^i_m(·) is trained on a new labeled set X^i_L. §.§ Balancing Active LearningActive learning approaches usually select uncertain <cit.> or diverse <cit.> data. Uncertain data represents the data close to decision boundaries, while diverse data defines distribution in the feature space well. Existing research <cit.> with a simple combination of the two aspects hardly reach the balance, which varies according to different proportion of labeling. To extract a more reasonable distribution of samples, we propose Balancing Active Learning (BAL). Its main process is as follows.* Train a self-supervised model F_ss on unlabeled X.* Cluster features {f_j}_j=1^N extracted from F_ss and sort X in ascending CDD order.* Automatically set balancing factor βbased on the maximal performance of the main task model F_m trained on the early labeled pool.* Generate adaptive sub-pools {P_U^i}^I_i=0 from X.* Select K samples from P_U^i for labeling based on posterior probabilities at i-th cycle.* Repeat Step 4 until meeting labeling budgets. In each cycle, we sample from the corresponding sub-pool. Thus there exist labeled samples in each sub-pool. Consequently, these samples together represent the whole data distribution. On the other hand, the uncertainty-based sampler ensures novelty. Our adaptive sub-pools balance the two aspects in <ref>. In the following sections, we introduce two stages of our BAL framework, including the self-supervised task in <ref> and the active learning module in <ref>. §.§ Self-Supervised TaskTraining Strategy. The main purpose of the self-supervised task module is to extract features that can be well passed on to the downstream active learning task. 
Past researchers <cit.> have found that, compared to the latest self-supervised learning methods such as SimCLR <cit.> and SimSiam <cit.>, rotation prediction <cit.> is better suited as the self-supervised pretext task. The loss function is defined asL_ss(x_i)=1/k∑_θ∈{0,90,180,270}L_CE(l_ss(t(x_i, θ)), y_θ),where L_CE is the cross-entropy loss, t(x_i, θ) represents the image obtained by rotating the input image x_i by θ degrees, l_ss(·) represents the output of the self-supervised model F_ss(·), and y_θ represents the label corresponding to the rotation angle θ. §.§.§ Cluster Distance Difference.After training the self-supervised model F_ss, we leverage its features {f_i}_i=1^N to sort the unlabeled dataset X, as shown in <ref>. It has been experimentally verified that the performance of the pretext task and that of the main task have a strong positive correlation <cit.>, which supports our utilization of self-supervised features. Firstly, we perform K-means clustering <cit.> on {f_i}_i=1^N to group the features into N_k clusters. Next, we design α (f_i), the difference between the distances from f_i to its two nearest cluster centers, to measure the difficulty of clustering f_i:d_1 (f_i) = min_1≤ j≤ N_k{||f_i, c_j||_2^2}, d_2 (f_i) = min_1≤ j≤ N_k, j ≠ j_1(f_i){||f_i, c_j||_2^2}, where j_1(f_i) = argmin_1≤ j≤ N_k{||f_i, c_j||_2^2},α (f_i)= d_2 (f_i) - d_1 (f_i),where ||f_i, c_j||_2^2 is the squared Euclidean distance from f_i to the cluster center c_j, d_1 (f_i) is the distance from f_i to its nearest cluster center, and d_2(f_i) is the second shortest such distance. α(f_i) is the value of our proposed indicator, the cluster distance difference (CDD), for f_i. Finally, we sort X in ascending CDD order, resulting in the sorted data X_S, which starts from the samples with the smallest CDD, i.e., those that are hardest to distinguish. §.§.§ Explanation of Cluster Distance DifferenceIn a simplified scenario, we ignore certain situations such as non-convex decision boundaries or overlap between clusters. Our goal is to find a dividing hyperplane (threshold) between two clusters, which is defined by the equality of the distances from a data point f_i to the two nearest centers c_1 and c_2:d_1 (f_i) = d_2 (f_i).In <ref>, if d_1 (f_i) is smaller, f_i is closer to Cluster 1; if d_2 (f_i) is smaller, f_i is closer to Cluster 2. When <ref> holds, f_i lies on the boundary hyperplane, indicating that it sits on the border between the two clusters. Therefore, <ref> signifies that the boundary hyperplane can be represented as:α (f_i) = 0, where f_i represents the data point in the feature space and α is its CDD. The smaller the CDD α(f_i), the harder it is to distinguish which cluster f_i belongs to. For example, applying the CDD to the scenario shown in <ref>, α (f_1) =||f_1, c_1||_2^2-||f_1, c_2||_2^2=0 <||f_2, c_1||_2^2-||f_2, c_2||_2^2=α (f_2),which represents that f_1 is more difficult to distinguish than f_2. When a feature f_i is located on the decision plane, α(f_i)=0 reaches its lowest point, representing the sample that is most difficult to distinguish.Our method replaces the explicit solution of the decision surface with a distance difference, which saves substantial computation while achieving the desired effect. In comparison, when applying a naive baseline, the distance to the nearest cluster center, in <ref>, it mistakenly regards f_2 as more difficult to cluster. When compared with another indicator, the pretext task loss used in the current SOTA <cit.>, CDD outperforms it by 1.36% (<ref>). We explain this by noting that the pretext task loss is designed to update the parameters rather than to be passed on to downstream tasks.
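A minimal sketch of the CDD computation and sorting (our own illustration using scikit-learn; the feature matrix, feature dimension, and number of clusters below are placeholders, not values from the paper):

```python
import numpy as np
from sklearn.cluster import KMeans

def cdd_sort(features: np.ndarray, n_clusters: int = 10):
    """Return indices of `features` sorted by ascending CDD (hardest first)."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(features)
    # Squared Euclidean distance from every feature to every cluster center.
    d2 = ((features[:, None, :] - km.cluster_centers_[None, :, :]) ** 2).sum(-1)
    d2.sort(axis=1)                    # per-row ascending distances
    cdd = d2[:, 1] - d2[:, 0]          # second-nearest minus nearest center
    return np.argsort(cdd)             # ascending CDD: boundary samples first

# Example: sort 1000 self-supervised features of dimension 128.
feats = np.random.randn(1000, 128).astype(np.float32)
order = cdd_sort(feats, n_clusters=10)
```

The sorted order is exactly X_S; no decision surface is ever solved explicitly, only two nearest-center distances per sample.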
In contrast to the pretext-task loss, well-learned self-supervised features encode inter-sample relationships and benefit downstream tasks substantially. §.§ Active Learning ModuleThe algorithm of our Balancing Active Learning framework is shown in <ref>. §.§.§ Adaptive Sub-pools. We generate the sub-pools {P_U^i}^I_i=1 from X_S asP_U^i(β) = {x_k}_k=1^βN/I, if i=1; {x_k}_k=(i+(1-β)/2)N/I^(i+(β+1)/2)N/I / {X_L^j}_j=1^i-1, if 1<i<I; {x_k}_k=N-βN/I^N / {X_L^j}_j=1^i-1, if i=I,where X_L^i is the labeled pool in the i-th cycle, I is the number of cycles, N is the number of total images, x_k is the k-th sample in the sorted dataset X_S, and β is the balancing factor between diversity and uncertainty. The number of samples in each P_U^i is|P_U^i| = ⌊βN/I⌋.The minimum value of β is ⌈KI/N⌉, where K is the number of samples to label in each cycle, because P_U^i requires at least K samples for labeling. §.§.§ Sampling Method.In the first cycle, the sampler ψ selects the first K data points from the sorted unlabeled pool X_S. To obtain uncertain data, in the i-th (i>1) cycle, ψ selects the K data points with the lowest maximum posterior probability <cit.>, calculated by F^i-1_m on P_U^i. Thus, the newly labeled data X_K^i isX_K^i= ψ(P^i_U) ={x_k}_k=1^K, if i=1; min_K{max{l^i-1_m(P_U^i)}}, if i>1,where l_m^i-1 is the output of the main task model F_m^i-1 trained in the (i-1)-th cycle, P_U^i is the i-th sub-pool, and K is the number of data points to label in each cycle.§.§.§ Balancing Factor.Existing approaches <cit.> only utilize equidistant disjoint sub-pools, which cannot meet varying requirements. When |P_U^i| is small, the way the sub-pools {P_U^i}_i=1^I are derived has a greater influence, while the choices of the uncertainty-based sampler ψ are constrained. For example, when β=⌈KI/N⌉, there are only K samples in P_U^i (|P_U^i|=K). Then the uncertainty-based sampler ψ has no choice but to label the entire sub-pool P_U^i, that is, X_K^i=P_U^i. This means that ψ is unutilized. On the contrary, when |P_U^i| is large, the selected samples gather along the decision boundary and become less diverse. In the extreme case where P_U^i is the complete dataset X_U, the pretext-based approach deteriorates to a simple uncertainty-based method. In our work, the balance between uncertainty and diversity is reached through a balancing factor β. Our BAL allows adjacent sub-pools P_U^i to have overlaps (β>1) or intervals (β<1), as shown in <ref>. Thus the length of each sub-pool |P_U^i| can dynamically meet the requirements of different labeling budgets. The balancing factor β is chosen based on the maximal performance of the main task model F_m trained on X_L^2, asβ = β_j^*, where j^* = argmax_1≤ j ≤ N_β{F^2_m(β_j)},where F^2_m(β_j) is trained on X_L^1∪ψ(P_U^2(β_j)), which is quite small (just 15% in common experiments). Thus, the balance between diversity and uncertainty can be reached easily, without heavy computation. In <ref>, experiments show that our adaptive sub-pools outperform uniformly-divided sub-pools by 1.0% on Caltech-101. The algorithm of Balancing Active Learning is given in <ref>.§.§.§ Cases Analysis.Next, we analyze the different cases of the balancing factor β. From <ref>, when 2<i<I, {P_U^j}_j=1^i-1 ends at the (i+(β-1)/2)-th sample while P_U^i starts at the (i+(1-β)/2)-th sample.Case 1: β > 1. In this case, we have i+(1-β)/2 < i+(β-1)/2, so there exists an overlap between P_U^i and the earlier sub-pools {P_U^j}_j=1^i-1. Since some samples may have already been labeled in previous cycles, we need to remove {X_L^j}_j=1^i-1 from P_U^i, as shown in <ref>.Case 2: β < 1.
In this case, we have i+(1-β)/2 > i+(β-1)/2, so there exists an interval between P_U^i and the earlier sub-pools {P_U^j}_j=1^i-1. Thus, <ref> can be simplified asP_U^i(β) = {x_k}_k=1^βN/I, if i=1; {x_k}_k=(i+(1-β)/2)N/I^(i+(β+1)/2)N/I, if 1<i<I; {x_k}_k=N-βN/I^N, if i=I.Case 3: β = 1. In this case, we have |P_U^i|=⌊N/I⌋, and there is no overlap or interval between neighboring sub-pools, which shows that we obtain the sub-pools {P_U^i}_i=1^I by uniformly dividing the sorted dataset X_S. Therefore, <ref> deteriorates to evenly-divided sub-pools:P_U^i(1) = {x_k}_k=iN/I^(i+1)N/I, 1≤ i≤ I,where I is the number of cycles, N is the number of images, and x_k is the k-th sample in the sorted dataset X_S.
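To make the sub-pool construction and the confidence-based sampler concrete, here is a minimal sketch of our own (the `predict_proba` model interface and the index conventions are illustrative assumptions, not the authors' released code):

```python
import numpy as np

def subpool_indices(i, I, N, beta, labeled):
    """Indices of P_U^i(beta) over data already sorted by ascending CDD."""
    if i == 1:
        lo, hi = 0, int(beta * N / I)
    elif i < I:
        lo = int((i + (1 - beta) / 2) * N / I)
        hi = int((i + (beta + 1) / 2) * N / I)
    else:
        lo, hi = N - int(beta * N / I), N
    pool = np.arange(max(lo, 0), min(hi, N))
    return pool[~np.isin(pool, labeled)]    # drop already-labeled samples

def query(model, X_sorted, pool, K):
    """Label the K pool samples with the lowest maximum posterior."""
    probs = model.predict_proba(X_sorted[pool])   # assumed model interface
    return pool[np.argsort(probs.max(axis=1))[:K]]

# Toy usage: cycle i=2 of I=10 over N=1000 CDD-sorted samples, beta=1.3.
pool = subpool_indices(2, 10, 1000, 1.3, labeled=np.arange(100))
```

With beta > 1 adjacent pools overlap and the already-labeled set is filtered out, matching Case 1 above.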
§ EXPERIMENTS In this section, we perform various experiments for different labeling budgets to demonstrate the effectiveness of our proposed Balancing Active Learning (BAL) framework. In <ref>, we introduce the included datasets, competitive approaches, and implementation details. In <ref>, we apply our approach to medium labeling budgets and compare it with the most advanced methods. In <ref>, we test our method for small labeling budgets to further demonstrate its robustness. In <ref>, experiments with large data sampling are performed, showing that our proposed method, trained on a carefully selected 80% of the dataset, achieves the result of the full dataset. In <ref>, we provide visualizations for a vivid exhibition. §.§ Configuration In this section, we introduce our experimental configuration details, including datasets, competitive approaches, configurations, and labeling budgets.§.§.§ Datasets. Our experiments are conducted on extensive datasets including: * SVHN <cit.>, from Google Street View images, 10 categories of 32×32-pixel images.* CIFAR-10 <cit.>, 10 categories, each containing 6000 32×32-pixel images.* Caltech-101 <cit.>, 101 uneven categories, each containing between 40 and 800 300×200-pixel images.* Tiny-ImageNet <cit.>, 200 classes of 256×256-pixel images. §.§.§ Competitive approaches.We compared our approach to the following active learning strategies:* Random sampling: which is the simplest baseline.* Entropy: which uses the entropy of class probabilities to predict the uncertainty of samples.* Confidence: which uses the highest probability score to predict the uncertainty of samples.* VAAL <cit.>: which utilizes a variational autoencoder (VAE) to learn the feature space, and a discriminator trained adversarially on the input data to determine whether it is labeled.* DBAL <cit.>: which uses Bayesian CNNs to estimate the uncertainty of unlabeled points.* coreset <cit.>: which is a representative of distribution-based approaches and selects a core set of highly diverse data according to the feature distribution.* PAL <cit.>: which adds a self-supervision head to the original classification network and trains the self-supervised task in parallel with the original classification task.* PT4AL <cit.>: which sorts the unlabeled data in descending order of self-supervised task losses and divides them into batches for each active learning cycle.* ActiveFT <cit.>: which selects a subset of data with a distribution similar to that of the entire unlabeled pool and sufficient diversity by optimizing a parametric model.§.§.§ Configuration.Hot-start techniques (PT4AL <cit.> and ours) label the initial samples through their respective techniques. The remaining techniques share an initial labeled sample set. We use VGG16 <cit.> as the backbone. For medium labeling budgets, to align the configuration, we follow previous research <cit.> and train the model from the weights of the latest cycle. For large labeling budgets, we train the model from scratch in each cycle to demonstrate the value of our sampled data. We report the average accuracy over three runs. More details are in <ref>.§.§.§ Proportion of labeled data.The evaluation criterion for active learning methods is usually the accuracy when a fixed proportion of labeled data λ=IK/N is used, where N is the size of the dataset, I is the number of active learning cycles, and K is the number of labeled samples in each cycle. We verify our approach threefold: * First, we verify our approach on a commonly-acknowledged benchmark - the initial pool accounts for 10% of the entire dataset, and an additional 5% of the dataset selected through various active learning techniques is added in each query until it reaches 40%.* Then, we test our method for large labeling budgets. The initial pool labels 10%, and we add 10% in each cycle until it reaches 100%, where the model on 100% is a normal classification task. * Finally, we evaluate our method for small labeling budgets. The initial pool comprises 2% of the total dataset, and with each query, an extra 2% of the dataset is incrementally incorporated using diverse active learning methods until it reaches a total of 10%.In summary, our approach has been extensively validated across a range of datasets and labeling budgets, and compared with various approaches, ensuring its robustness and versatility.§.§ Medium labeling budgetsFirstly, we compare the performance of uniformly-divided sub-pools <cit.> and adaptive sub-pools, where the balancing factor β=1.3 is chosen according to the aforementioned mechanism (<ref>). Results are illustrated in <ref>. Our adaptive sub-pools outperform the alternative consistently on three datasets. We note that on Caltech-101, where the categories are distributed unevenly, adaptive sub-pools show a clear advantage: their accuracy is 1.0% higher than that of naive sub-pools. Next, we test our approach for standard medium labeling budgets. Results are shown in <ref> and illustrated in <ref>, which compares the performance of various techniques at different labeled proportions λ. They show that BAL outperforms other methods by large margins. On CIFAR-10, the performance of BAL at λ=15% is close to that of the current SOTA, PT4AL <cit.>, at λ=20%, saving 25% of the data labeling. On SVHN, BAL at λ=15% even defeats PT4AL at λ=25%. On Caltech-101, BAL boosts the result of the current SOTA by 5.4% at λ=35%.
It shows that BAL can sample valuable data which is able to offer comparative information with the whole dataset. Thus, we only need to train on our selected partial dataset and save 20% labeling budget without performance deterioration.§.§ Small labeling budgets The prior theoretical analysis <cit.>, indicates a behavior analogous to a phase transition phenomenon: under limited budget conditions, it is more beneficial to query typical examples, while in situations with a larger budget, the most effective strategy is to query atypical or unrepresentative examples. Therefore, to further highlight the resilience of our Balancing Active Learning approach, we conduct experiments with limited labeling budgets. The outcomes, displayed in <ref> and visually depicted in <ref>, involve a performance comparison among diverse techniques across varying labeled proportions λ. Notably, BAL consistently performs outstandingly even when the current SOTAs in common benchmarks, PT4AL <cit.> and ActiveFT <cit.>, fail. When training on 4% of the total dataset, BAL outperforms the second-best method by 11% on CIFAR-10 and by 26% on SVHN. §.§ Visualization We illustrate the t-SNE visualization of features for vivid exhibition in <ref>. The samples are labeled by random sampling, PT4AL<cit.> and our proposed BAL on CIFAR-10. In cycle 0, features of different categories are almost mixed evenly when randomly sampled. PT4AL improves results of the baseline, while our approach distinguishes different classes with clearer boundaries. In the last cycle, although both PT4AL and ours can separate different categories, features from the same class are gathered more tightly when performing BAL. In <ref>, we illustrate the category distribution of the uniformly-divided sub-pools in PT4AL <cit.> and adaptive sub-pools in BAL in each active learning cycle. It has shown that our sub-pools divide classes more evenly compared with PT4AL. Experiments of <cit.> have shown that an imbalance in the distribution adversely affects the results. Therefore, the balance of uncertainty and diversity assists in extracting more even data in each cycle, which favors active learning tasks.§ ABLATION EXPERIMENTS In this section, we demonstrate our exploration of several design options for BAL. The experiments are performed on CIFAR-10 with the same configuration in <ref>. In subsection <ref>, we discuss two ways to set labels for the self-supervising task; in subsection <ref>, we discuss two criteria for dividing batches, including SoC and DtC; in subsection <ref>, we discuss the order of SoC sorting, including from largest to smallest orfrom smallest to largest; and in subsection <ref>, we discuss sampling methods, including confidence, entropy, Kmeans clustering, and random sampling. Of all the methods, we chose the one that works best as the component of the SSAL framework, i.e. the red curve in <ref>.§.§ Self-supervised Task.The results presented in <ref> shed light on the effectiveness of using a straightforward rotation prediction pretext task for self-supervised pretext task for active learning. In contrast to the latest and more complex self-supervised learning approaches like SimCLR <cit.>, the relatively simpler task of predicting image rotations yields superior outcomes when incorporated into the active learning framework. 
This finding underscores the notion that, in certain scenarios, a less intricate pretext task can outperform more advanced alternatives, highlighting the significance of task selection in the design of active learning strategies.We define two rotation losses, the first loss function of which is defined in <ref>, where y_θ represents labels corresponding to angle θ. That is, when images rotate the same angle θ, their corresponding labels are the same (y_θ). Another way to define the pretext task is:L(x_i)=1/k∑_θ∈{0,90,180,270}L_CE(l_ss(t(x_i, θ)), y_i) where L_CE is the cross-entropy loss. t(x_i, θ) represents an image that rotates the input image x_i by θ degree. l_ss(·) represents output of self-supervised model F_ss(·). y_i represents labels corresponding to the i-th image x_i. That is, the labels of an image x_i (although rotates at different angles) are the same. As seen from <ref>, labels according to rotation angles perform better. §.§ Metric.In <ref>, we propose two criteria for sorting the sub-pools {P_U^i}_i=1^I, namely our proposed CDD and a naive baseline, the distance to the nearest cluster center. Results are shown in <ref>, which proves that CDD works better. We also compare CDD with another indicator, loss of pretext task, of the current SOTA, PT4AL <cit.>. CDD outperforms it by 1.36%. §.§ Sorting.When sorting CDDs, we look at two different sorting strategies, that is, in ascending or descending order. For both strategies, in the first cycle, the K samples are at the top of the sorted samples. As shown in <ref>, the performance in ascending order performs better. Thus it is better to learn difficult samples first. What's more, the results of the two strategies tend to be consistent when λ is larger. It shows these sorting methods have a more significant impact on data initialization. §.§ Sampling.We explore four different sampling methods, including (i) a confidence-based sampler, which selects data with the lowest posterior probability, (ii) an entropy-based sampler, which selects data with the highest entropy, (iii) a clustering-based sampler, which K-means clusters features and samples the data closest to cluster centers, and (iv) a random sampler. As shown in <ref>, the confidence-based sampler performs the best.§ DISCUSSIONIn this paper, we concentrate on providing a comprehensive experimental validation in the realm of image-level classification tasks. In this scene, we have observed that simple rotation prediction methods can outperform more advanced pretext tasks such as SimCLR <cit.>. On the other hand, for pixel-level tasks like segmentation, we consider pixel-level reconstruction tasks like MAE <cit.> and BEiT <cit.> more suitable. Nevertheless, these tasks typically are performed on Transformer <cit.> frameworks, which differ from the typical benchmarks and previous approaches, and may lead to unfair comparison. Due to these constraints, we do not further explore them in this paper. However, we are confident that the concepts of Cluster Distance Difference and balancing novelty and diversity proposed in this work hold potential for further extension to various downstream tasks and real-world applications. Based on the aforementioned ideas, we will continuously dedicate ourselves to further expanding its applicability to a broader spectrum of downstream tasks and real-world applications. We hope that these insights will also inspire future researchers to make further advancements in active learning. 
§ CONCLUSIONIn this paper, we utilize K-means clustering on features obtained by self-supervised learning. We then design a diversity-based indicator, the cluster distance difference (CDD), which leverages information about inter-sample relationships to benefit downstream tasks. Furthermore, we propose the Balancing Active Learning (BAL) framework to balance diverse data with uncertain data via the proposed adaptive sub-pools. Our approach surpasses all previous active learning methods on commonly-acknowledged benchmarks by significant margins. Finally, we experimentally verified that when labeling 80% of the samples, the performance of the current SOTA deteriorates while that of our proposed BAL remains stable. This shows that BAL has the potential to save labeling budget while reaching comparable performance. [ < g r a p h i c s > ]Jingyao Li received the B.Eng. degree from Xi'an Jiaotong University. She is currently a Ph.D. student at the Department of Computer Science and Engineering of the Chinese University of Hong Kong (CUHK), under the supervision of Prof. Jiaya Jia. Her research interests include self-supervised learning, knowledge distillation, and out-of-distribution detection. [ < g r a p h i c s > ] Pengguang Chen received the B.Eng. degree in Computer Science from Nanjing University and the Ph.D. degree from the Chinese University of Hong Kong (CUHK), under the supervision of Prof. Jiaya Jia. He is currently a researcher at SmartMore. He serves as a reviewer for CVPR, ICCV, ECCV, and TPAMI. His research interests include neural architecture search, self-supervised learning, knowledge distillation, and semantic segmentation. [ < g r a p h i c s > ]Shaozuo Yu is a Ph.D. student at the Department of Computer Science and Engineering of the Chinese University of Hong Kong. He served as a program chair of the workshop and challenge on “Out-of-Distribution Generalization in Computer Vision” at ECCV'22. He has served as a reviewer for CVPR, NeurIPS, and ICML. His research interests include multimodality, generative models, and robust vision.[ < g r a p h i c s > ]Shu Liu now serves as Co-Founder and Technical Head at SmartMore. He received the BS degree from Huazhong University of Science and Technology and the PhD degree from the Chinese University of Hong Kong. He was the winner of the 2017 COCO Instance Segmentation Competition and was recognized as an Outstanding Reviewer of ICCV in 2019. He has continuously served as a reviewer for TPAMI, CVPR, ICCV, NeurIPS, ICLR, etc. His research interests lie in deep learning and computer vision. He is a member of IEEE.[ < g r a p h i c s > ]Jiaya Jia received the Ph.D. degree in Computer Science from Hong Kong University of Science and Technology in 2004 and is currently a full professor in the Department of Computer Science and Engineering at the Chinese University of Hong Kong (CUHK). He assumes the position of Associate Editor-in-Chief of IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) and is on the editorial board of the International Journal of Computer Vision (IJCV). He has continuously served as an area chair for ICCV, CVPR, AAAI, ECCV, and several other conferences. He was on the program committees of major conferences in graphics and computational imaging, including ICCP, SIGGRAPH, and SIGGRAPH Asia. He is a Fellow of the IEEE. | http://arxiv.org/abs/2312.15944v1 | {
"authors": [
"Jingyao Li",
"Pengguang Chen",
"Shaozuo Yu",
"Shu Liu",
"Jiaya Jia"
],
"categories": [
"cs.LG",
"cs.CV"
],
"primary_category": "cs.LG",
"published": "20231226081446",
"title": "BAL: Balancing Diversity and Novelty for Active Learning"
} |
=24pt Four dimensional topological supergravities from transgression field theory Patrick Concha,^1,2,[[email protected]] Fernando Izaurieta,^3,[[email protected]] Evelyn Rodríguez,^1,2,[[email protected]] and Sebastián Salgado^4,[[email protected]] ^1Departamento de Matemática y Física Aplicadas, Universidad Católica de la Santísima Concepción, Alonso de Ribera 2850, Concepción, Chile ^2Grupo de Investigación en Física Teórica, GIFT,Concepción, Chile. ^3Facultad de Ingeniería, Arquitectura y Diseño, Universidad San Sebastián,Lientur 1457, Concepción 4080871, Chile ^4Sede Esmeralda, Universidad de Tarapacá,Av. Luis Emilio Recabarren 2477, Iquique, Chile 20pt In this work, we propose a four-dimensional gauged Wess-Zumino-Witten model, obtained as a dimensional reduction from a transgression field theory invariant under the 𝒩=1 Poincaré supergroup. For this purpose, we consider that the two gauge connections on which the transgression action principle depends are given by linear and non-linear realizations of the gauge group respectively. The field content of the resulting four-dimensional theory is given by the gauge fields of the linear connection, in addition to a set of scalar and spinor multiplets in the same representation of the gauge supergroup, which in turn, correspond to the coordinates of the coset space between the gauge group and the five-dimensional Lorentz group. We then decompose the action in terms of four-dimensional quantities and derive the corresponding equations of motion. We extend our analysis to the non- and ultra- relativistic regime. 162mm0.4pt 162mm0.4pt § INTRODUCTION In the last decades, several gravitational theories have been introduced as alternatives to General Relativity. The most general theory that can be formulated in arbitrary dimensions and fulfill the fundamental requirements of invariance under diffeomorphisms, lead to equations of motion of second degree in the metric tensor and preserve the conservation law of the energy-momentum tensor is known as the Lovelock theory <cit.>. The Lovelock Lagrangian density is a sum over all the possible combinations between the Lorentz curvature depending on the spin connection, and the vielbein that codifies the metricity. In three and four dimensions, the Lovelock theory reproduces General Relativity with positive, negative, or zero cosmological constant depending on the values of its arbitrary constants <cit.>. A case of special interest is obtained when the constants of the sum in the Lovelock Lagrangian are fixed such that the theory presents the maximum number of degrees of freedom <cit.>. In that case, the three-dimensional Lovelock Lagrangian becomes proportional to the Chern–Simons (CS) three-form of the AdS group <cit.>, which has a topological origin and is, up to boundary terms, invariant under the gauge transformations induced by the AdS group.Such a feature allows us to formulate General Relativity as a topological gauge theory in three dimensions. However, the same does not occur in four dimensions, since CS forms exist only in odd dimensions. Moreover, the CS form of the AdS group does not lead to General Relativity in dimensions higher than three in any regime. With the purpose of formulating even-dimensional gravity theories, especially in four dimensions, A. H. 
Chamseddine proposed a topological gravity depending on the same variables as the Lovelock theory, in addition to a scalar multiplet in the same representation of the gauge group <cit.>.From a mathematical point of view, CS forms appear in the study of topological invariant densities that only exist in even dimensions and, as a consequence of the Poincaré lemma, allow the existence of odd-dimensional secondary forms that inherit their gauge invariance properties. These forms, known as transgression forms, become CS forms as locally defined particular cases <cit.>. Thus, both transgression and CS forms are often used as Lagrangian densities for odd-dimensional gauge theories. In contrast to CS theories, transgression field theories depend on two gauge connections, and their Lagrangian densities are globally defined differential forms that, like the topological densities in which they originate, are fully gauge invariant. In addition to being useful in the construction of Lagrangian densities, transgression forms can naturally induce a dimensional reduction that allows one to formulate even-dimensional gauge invariant theories without breaking gauge covariance. These are known as gauged Wess–Zumino–Witten (gWZW) models <cit.> and, like the even-dimensional topological gravity proposed by Chamseddine, include a scalar multiplet in the gauge group representation as part of the fundamental field content. Indeed, it was shown in <cit.> that Chamseddine's topological gravities can be obtained as gWZW models of the Poincaré group in arbitrary even dimensions. Furthermore, in ref. <cit.>, these results were generalized to the Maxwell algebra and the Poincaré superalgebra in three dimensions, obtaining fully gauge invariant (1+1)-dimensional theories for gravity and supergravity, respectively. Moreover, refs. <cit.> studied the relation between four-dimensional gWZW models and general relativity.The existence of abstract even-dimensional gWZW models motivates us to study a four-dimensional gauge invariant supergravity theory that emerges from considering a supersymmetric extension of the five-dimensional Poincaré supergroup as gauge group. Moreover, due to the well-known relation between the Poincaré group and the Galilei and Carroll groups <cit.>, we also aim to study the non- and ultra-relativistic regimes of the resulting theory and thus to obtain the corresponding superalgebras and gWZW action principles.This paper is organized as follows: In section <ref>, we give a brief introduction to the SW-GN formalism and the gWZW model. In section <ref>, we study the non-linear realization of the five-dimensional Poincaré superalgebra. In sections <ref> and <ref>, we derive the four-dimensional gWZW action invariant under the aforementioned Poincaré supergroup and study the dynamics of the resulting theory. In section <ref>, we consider the non-relativistic limit of the Poincaré superalgebra and derive the corresponding non-relativistic gWZW action principle. In section <ref>, we perform the same analysis for the ultra-relativistic limit of the theory. Section <ref> contains our final conclusions.§ NON-LINEAR REALIZATIONS AND GWZW ACTIONS In this section we briefly review the Stelle-West-Grignani-Nardelli (SW-GN) formalism and the gauged Wess-Zumino-Witten (gWZW) models. §.§ The SW-GN formalism The Stelle-West-Grignani-Nardelli formalism makes use of non-linear realizations of Lie groups in the construction of gauge invariant action principles <cit.>.
Thus, the gauge symmetry of a physical theory can be extended from a stability group, to a higher-dimensional group that contains it as a subgroup. Let us consider a Lie group G with Lie algebra 𝒢, and a subgroup H⊂ G with Lie algebra ℋ as stability subgroup. We denote as { V_i} _i=1^ H to the basis of ℋ and as { T_l} _l=1^ G- H the to the set of generators of the remaining subspace. We assume that the basis can be chosen such that the generators T_l form a representation of the stability subgroup. Therefore, the Lie products between the vectors of the introduced basis satisfy [ V,T] ∽ T, i.e. these products are linear combinations of T_l. An arbitrary group element g can be decomposed in terms the generators of the subgroup and the remaining subspace asg=e^ξ ^lT_lh ,where h∈ H is a group element defined by the action of the group on the zero-forms ξ ^l which, in turn, play the role of coordinates that parametrize the coset space G/H. From eq. (<ref>), it follows that the action of an arbitrary element g_0∈ G on e^ξ ^lT_l can be also split as g_0e^ξ ^lT_l=e^ξ ^' lT_lh_1 .Eq. (<ref>) allows to obtain the non-linear functions ξ ^'=ξ ^'( g,ξ) and h_1=h_1( g,ξ) . By considering that the transformation law ξ→ξ ^' is described by the variation δ, and by choosing the group element g_0 such that ( g_0-1) is infinitesimal, eq. (<ref>) leads to <cit.> e^-ξ ^lT_l( g_0-1) e^ξ ^lT_l-e^-ξ ^lT_lδ e^ξ ^lT_l=h_1-1 .Since g-1 is infinitesimal, h_1-1 is a vector of H.Let us now consider the case in which g_0=h_0 belongs to the stability subgroup. In this case, eq. (<ref>) becomes e^ξ ^' lT_l=( h_0e^ξ ^lT_lh_0^-1) h_0h_1^-1 ,and since the Lie product [ V_i,T_l] is proportional to T_l, one gets h_0=h_1 and the transformation law becomes linear: e^ξ ^' lT_l=h_0e^ξ ^lT_lh_0^-1 .On the other hand, if we consider g_0=e^ξ _0^lT_l, eq. (<ref>) becomes e^ξ ^' lT_l=e^ξ _0^lT_le^ξ ^lT_lh^-1 ,which is a non-linear transformation law for ξ.Let us now consider a one-form gauge connection A taking values on 𝒢 and an action principle S=S[ A] with gauge invariance under the transformations of the stability subgroup ℋ but not under those along the generators of the coset space. Under the action of an arbitrary group element g, the gauge connection transforms as<cit.> A⟶ A^'=g^-1dg+g^-1Ag .We split μ into its contributions belonging in h and the coset space as A=a+ρ, with a=a^lT_l and ρ =ρ _iV_i. Moreover, we introduce a group element z=exp( ξ ^lA_l) and define the non-linear gauge connectionA^z=z^-1dz+z^-1Az .The functional form of A^z is given by a large gauge transformation of A that non-linearly depends on the zero-forms ξ ^l and their derivatives. However, in the SW-GN formalism A^z is interpreted as the fundamental field of a gauge theory and therefore, both A and ξ will change under the action of the gauge group. As before, we split the contributions to the non-linear connection as A^z=v+p withp= p^l( ξ ,dξ) T_l , v= v^i( ξ ,dξ) V_i .It is possible to prove that, under the transformation δ generated by the action of the group, the transformation laws for p and v are given byp ⟶ p^'=h_1^-1ph_1 ,v ⟶ v^'=h_1^-1vh_1+h_1^-1dh_1 ,i.e., when acting with a group element belonging to the coset space, the non-linear one-forms p and v transform as a tensor and as a connection respectively. These transformations are linear but the group element is now a function of the parameters h_1=h_1( ξ _0,ξ). From the transformations laws in eq. 
(<ref>), it follows that the non-linear gauge connection transforms in the same way that under the action of the stability subgroup and the coset space. Therefore, an action principle defined as a functional of A whose gauge symmetry is described by the stability subgroup, becomes invariant under the entire group G whenA is replaced by A^z. The original non-invariance of the action principle is thus compensated by the transformation law of the gauge parameters ξ. §.§ gWZW models Let us consider two independent gauge connections A_1 and A_2 evaluated in the same gauge algebra. The transgression ( 2n+1) -form corresponding to both gauge connections is defined asQ_A_2← A_1^( 2n+1) =( n+1) ∫_0^1⟨( A_2-A_1) F_t^n⟩ ,where ⟨⟩ denotes the symmetrized trace along the generators of the Lie algebra, and F_t is the gauge curvature associated to the homotopic gauge connection A_t=A_1+t( A_2-A_1). Transgression forms are globally defined and fully invariant under the transformations of the gauge group. CS forms emerge as particular cases of transgression forms, by locally setting one of the gauge connections as vanishing. Thus, the CS form corresponding to a gauge connection A is locally defined asQ_A← 0^( 2n+1) =( n+1) ∫_0^1⟨ AF_t^n⟩ ,where the homotopic gauge connection takes form A_t=tA. Furthermore, by applying the Cartan homotopy formula, it is possible to prove that a general transgression form can be written in terms of two CS forms and a total derivative, as follows <cit.>Q_A_2← A_1^( 2n+1) =Q_A_2← A_0^( 2n+1) -Q_A_1← A_0^( 2n+1) -Q_A_2← A_1← A_0^( 2n).The 2n-form inside the exterior derivative is explicitly given as the following integral: Q_A_2← A_1← A_0^( 2n) =n( n+1) ∫_0^1dt∫_0^tds⟨( A_2-A_1) ( A_1-A_0) F_st^n-1⟩ .where F_st is the gauge curvature associated to the homotopic gauge fieldA_st=A_0+t( A_1-A_0) +s( A_2-A_1),which depends on two parameters t and s taking values between 0 and 1 . For details on the use of the Cartan homotopy formula and the homotopy operator in this context, see refs. <cit.>.Let us now consider two gauge connections A and A^z, related by the gauge transformation A^z=z^1( d+A) z, where z=exp( ξ) is an element of the gauge group and ξ a zero-form mutiplet in the same representation of the Lie algebra. From eq. (<ref> ), it follows that the transgression form associated to A and A^z can be written in terms of the difference between their corresponding CS forms. Let us now introduce a homotopic gauge field A_t=tA which takes values between 0 and A, as the parameter t takes values between 0 and 1. The transformed connection, obtained from A_t, denoted by ( A_t) ^z, and its corresponding gauge curvature( F_t) ^z are given by[ Note that in general ( A_t) ^z≠( A^z) _t]( A_t) ^z = z^-1tAz+z^-1dz ,( F_t) ^z = z^-1F_tz=z^-1( tF+( t^2-t) A^2) z .These homotopic quantities verify( A_0) ^z = z^-1ν z=z^-1dz, ( A_1) ^z=A^z ,( F_0) ^z = 0,( F_1) ^z=F^z .By applying the Cartan homotopy formula once again, it is possible to prove that the CS forms corresponding to the pure gauge connection z^-1dz and the transformed gauge connections A^z are related by the following equationQ_A^z← 0^( 2n+1) -Q_z^-1dz← 0^( 2n+1) =( k_01d+dk_01) Q_( A_t) ^z← 0^( 2n+1),with k_01=∫_0^1ℓ _t, where ℓ _t is the homotopy operator defined by the following action on A_t and F_t:ℓ _tA_t=0, ℓ _tF_t=dt∂/ ∂ tA_t .By directly applying eq. (<ref>), one finds that the first term in the right side of (<ref>) is given byk_01dQ_( A_t) ^z← 0^( 2n+1) =Q_A← 0^( 2n+1),so that, eq. 
(<ref>) allows to write the difference between two CS forms related by means of a gauge connection asQ_A^z← 0^( 2n+1) -Q_A← 0^( 2n+1) =Q_z^-1dz← 0^( 2n+1) +d α _2n( A,z) ,where we introduceα _2n( A,z) =k_01Q_( A_t) ^z← 0^( 2n+1).Notice that the first term in the r.h.s. of eq. (<ref>) is the CS form corresponding to the pure gauge connection z^-1dz, which is explicitly given by Q_z^-1dz← 0^( 2n+1) =( -1) ^n n!( n+1) !/( 2n+1) !⟨( z^-1 dz) ^2n+1⟩ .Thus, by virtue of eq. (<ref>), it is possible to write down the transgression form corresponding to A and A^z in terms of the pure gauge connection and a total derivativeQ_A^z← A^( 2n+1) =Q_z^-1dz← 0^( 2n+1) +d( α _2n( A,z) -Q_A^z← A← 0^( 2n) ) .Given a gauge group, the so-called gWZW action is defined as the boundary action that appears from the transgression action in accordance with the Stoke's theorem S_gWZW[ A] = κ∫_MQ_A^z← A^( 2n+1) = κ∫_∂ Mα _2n( A,z) -Q_A^z← A← 0^( 2n).Since the transgression Lagrangian is odd-dimensional, the gWZW action principle is always even-dimensional. As it happens in the SW-GN formalism, the zero-forms ξ are not longer interpreted as the parameters of a symmetry transformation but as physical fields with a topological origin. However, in contrast with the gauge invariant action principles that are obtained in the SW-NG formalism, gWZW action principles are not exclusively functionals of the non-linear gauge fields, but of the linear ones and the zero-form multiplets.§ 𝒩=1 POINCARÉ SUPERGRAVITY§.§ Chern–Simons supergravity The construction of a four-dimensional gWZW model requires the five-dimensional CS action as starting point. We first consider the 𝒩=1 supersymmetric extension of the five-dimensional Poincar é algebra 𝔲( 4|1), which is spanned by the set of generators { J_AB,P_A,K,Q^α,Q̅_α} where Q̅_α and Q^α are independent Dirac spinors. Capital latin letters denote five-dimensional Lorentz indices taking values as A=0,… 4, while Greek letters denote spinor indices taking values as α =1,… 4. In the chosen basis, the (anti)conmutation relations between the introduced generators are given by <cit.>[ J_AB,P_C]= η _BCP_A-η _ACP_B ,[ J_AB,J_CD]= η _BCJ_AD+η _ADJ_BC-η _ACJ_BD-η _BDJ_AC ,[ J_AB,Q^α]= -1/2( Γ _AB) _ β^αQ^β ,[ J_AB,Q̅_α]= 1/2( Γ _AB) _ α^βQ̅_β ,{ Q^α,Q̅_β} = 2( Γ ^A) _ β^αP_A-4iδ _β^αK ,where the metric signature is chosen as η ^AB=diag( -,+,+,+,+). This superalgebra allows an inner invariant rank-3 product with the following components:⟨ KJ_ABJ_CD⟩ = -i/4( η _ACη _BD-η _BCη _AD) , ⟨ J_ABJ_CDP_E⟩ = 1/2ϵ _ABCDE ,⟨ Q^αJ_ABQ̅_β⟩ = -( Γ _AB) _ β^α .We gauge the algebra by considering a one-form gauge connection A with non-vanishing gauge curvature F=dA+1/2[A,A], to whose components we denote A= h^AP_A+1/2ω ^ABJ_AB+bK+ψ̅_αQ^α-Q̅_αψ ^α, F= 𝒯^AP_A+1/2ℛ^ABJ_AB+F_bK+ ℱ̅_αQ^α-Q̅_αℱ^α,The components of the gauge curvature are given explicitly by 𝒯^A = dh^A+ω _ C^Ah^C-2ψ̅ _α( Γ ^A) _ β^αψ ^α ,ℛ^AB = dω ^AB+ω _ C^Aω ^CB , F_b = db+4iδ _β^αψ̅_αψ ^α ,ℱ̅_α = 𝒟ψ̅_α≡d ψ̅_α-1/4ω ^ABψ̅_β( Γ _AB) _ α^β ,ℱ^α = 𝒟ψ≡dψ ^α+ 1/4ω ^AB( Γ _AB) _ β^αψ ^β ,where 𝒟 denotes the covariant derivative defined with respect to the five-dimensional spin connection ω ^AB. By using the subspace separation procedure (see refs. <cit.>), it is possible to write down the five-dimensional CS Lagrangian in a convenient way:ℒ_CS( A) =κ( 1/4ϵ _ABCDEℛ^ABℛ^CDh^E+i/4ℛ^AB ℛ_ABb-( ψ̅ℛ^ABΓ _AB𝒟 ψ +𝒟ψ̅ℛ^ABΓ _ABψ) ) ,where κ is a constant. 
§.§ Non-linear realization In order to perform the dimensional reduction, it is necessary to introduce a non-linear realization of the gauge supergroup. We therefore consider a second gauge connection A^z related with A by means of the following large gauge transformationA^z=z^-1( d+A) z.Here, z is an element of the gauge group, specifically in the coset space 𝒢_5/SO( 4,1). Taking in account the following decomposition of the Poincaré superalgebraL_0 = { J_AB} ,L_1={ J_AB,P_A} ,L_3={ J_AB,P_A,K} , L_4 = { J_AB,P_A,K,Q̅_α} , L_5={ J_AB,P_A,K,Q̅_α} ,we can express A^zby using the following gauge group elementz=z_χ̅z_χz_φz_ϕ=e^-χ̅_αQ^αe^Q̅_βχ ^βe^-φ Ke^-ϕ ^AP_A ,where φ and ϕ ^A are zero-forms, and where χ and χ̅_α are Dirac spinors zero-forms. In order to explicitly write down for A^z in terms of the components of A and the parameters of the gauge transformation, we use the following identities <cit.>e^Xδ e^-X = ( 1-e^x) /X∧δ X , e^XYe^-X = e^X∧ Y ,with the notation X∧ Y=[ X,Y] and where δ is any variation. By directly and successively applying in the (anti)commutation relations of the gauge algebra into eq. (<ref>), we obtain the following transformed gauge fieldA^z=V+W+B̃+Ψ̅-Ψ ,where each component is given byV= V^AP_A=( h^A-𝒟ϕ ^A+2ψ̅Γ ^Aχ -2𝒟χ̅Γ ^Aχ -2χ̅Γ ^Aψ) P_A ,W= 1/2W^ABJ_AB=1/2ω ^ABJ_AB, B̃ = BK={ b-dφ +4i( ( 𝒟χ̅) χ -ψ̅χ +χ̅ψ) } K ,Ψ̅ = Ψ̅Q=( ψ̅-𝒟χ̅) Q ,Ψ = Q̅Ψ =Q̅( ψ -𝒟χ) .The non-linear realization of the gauge algebra allows the construction of invariant action principles. In fact, the five-dimensional standard supergravity theory, whose Lagrangian functional includes the Einstein–Hilbert and Rarita–Schwinger terms becomes invariant under the Poincaré superalgebra when one identifies the non-linear field V^A as the fünfbein field associated to the metric tensor of the supergravity theory, and Ψ ^α and Ψ̅_α as the spin 3/2 fields.§ FOUR-DIMENSIONAL POINCARÉ SUPERGRAVITY In order to write down the transgression form depending on both gauge connections A and A^z, let us recall (<ref>). By inspection of the (anti)commutation relations and invariant tensors of the gauge algebra, it follows that, in this case, the pure gauge connection z^-1dz has no components along the Lorentz rotation generators. Therefore, the pure gauge contribution to the transgression form, vanishes for any gauge parameter lying in 𝔲( 4|1) /𝔰𝔬( 4,1)Q_z^-1dz← 0^( 5) =0 ,As a consequence, the transgression form Q_A^z← A^( 5) is always exact and, according eq. (<ref>), can be written as Q_A^z← A^( 5) =d( α _4( A,z) -Q_A^z← A← 0^( 4) ) .To find an explicit expression of this transgression, let us first consider eq. (<ref>) with the following choice of gauge connections:Q_A^z← A^( 5) =Q_A^z←ω^( 5) -Q_A←ω^( 5) -d Q_A^z← A←ω^( 4),Notice that we now choose the intermediate connection as A̅=ω. A direct calculation shows that the difference between both transgression forms in the r.h.s. of eq. (<ref>) is given byQ_A^z←A̅^( 5)- Q_A←A̅ ^( 5)=d( 3/8ϵ _ABCDEℛ ^ABℛ^CDϕ ^E+3i/8ℛ^ABℛ _ABφ.. +3i⟨𝒟χ̅ℛψ -ψ̅ ℛ𝒟χ +𝒟χ̅ℛ𝒟χ⟩) -3/4ϵ _ABCDEℛ^ABℛ^CD(ψ̅Γ ^Eχ -𝒟χ̅Γ ^Eχ -χ̅ Γ ^Eψ)+3/2ℓℛ^ABℛ_AB( (𝒟χ̅) χ -ψ̅χ +χ̅ψ)-3( 𝒟^2χ̅ℛ^ABΓ _ABψ +ψ̅ℛ^ABΓ _AB𝒟^2χ -𝒟χ̅ ℛ^ABΓ _AB𝒟^2χ) ,where letters carrying no index inside the trace are vectors of the Lie superalgebra, i.e., ℛ=1/2ℛ^ABJ_AB, ψ = Q̅_αψ ^α, ψ̅=ψ̅_αQ^α, χ =Q̅_αχ ^α and χ̅=χ̅ _αQ^α. 
By using the Bianchi identities, we have that the second derivatives of the fermion zero-forms can be written in terms of the Lorentz curvature as 𝒟^2χ̅_α =-1/4χ̅_βℛ ^AB( Γ _AB) _ α^β ,𝒟^2χ ^α =1/4ℛ^AB( Γ _AB) _ β^αχ ^β .Then, by integrating by parts and using the properties of the gamma matrices in five dimensions, we find that the difference between both transgression forms is given by the following total derivativeQ_A^z←A̅^( 5) -Q_A←A̅ ^( 5) =d[ 3/8ϵ _ABCDEℛ ^ABℛ^CDϕ ^E+3i/8ℛ^ABℛ _ABφ +3i⟨𝒟χ̅ℛψ -ψ̅ ℛ𝒟χ +𝒟χ̅ℛ𝒟χ⟩] .On the other hand, the the boundary term in the r.h.s. of eq. (<ref>) can be obtained from eq. (<ref>) by setting n=2, A_2=A^z, A_1=A and A̅=ω. Consequently, the homotopic gauge field becomes A_st=ω +s( A^z-A) +t( A-ω). A direct integration leads toQ_A^z← A←A̅^( 4) =-3i⟨𝒟χ̅ℛψ -ψ̅ℛ𝒟χ⟩ .Then, by plugging in eqs. (<ref>) and (<ref>) into (<ref>), we finally obtain an explicit expression for the transgression form in terms of a total derivativeQ_A^z← A^( 5)= d{3/8 ϵ _ABCDEℛ^ABℛ^CDϕ ^E+3i/8 ℛ^ABℛ_ABφ.. +3[ 𝒟χ̅ℛ^ABΓ _AB( ψ +1/4𝒟χ) -( ψ̅-1/4 𝒟χ̅) ℛ^ABΓ _AB𝒟χ ] } .Therefore, the four-dimensional induced action is given byS= κ∫[ ϵ _ABCDEℛ^ABℛ ^CDϕ ^E+iℛ^ABℛ_ABφ.. -8( χ̅ℛ^ABΓ _AB𝒟ψ + 𝒟ψ̅ℛ^ABΓ _ABχ -1/2𝒟 χ̅ℛ^ABΓ _AB𝒟χ) ] .This action in analogue to the one found in ref. <cit.> for ( 1+1)-dimensional supergravity. It is invariant under the transformations of the five-dimensional Poincaré supergroup and it can be interpreted as a supersymmetric extension of the topological gravity introduced in ref. <cit.>, and alternatively found as a gWZW action in ref. <cit.> for the bosonic case. §.§ Decomposition of the action The obtained gWZW action from eq. (<ref>) is four-dimensional. However, it is a functional of the original five-dimensional field content of the transgression field theory. When gauging Poincaré or AdS supergroups, the gauge field associated to the translation operator is usually identified as the vielbein of the corresponding supergravity theory, and therefore it is considered that it carries the information about the metric in the resulting field equations. Notice that, at this point, we have not yet introduced a notion of metricity in the supergravity theory. Moreover, the field h^A has been removed from the functional in the dimensional reduction process and it is not longer present in the action principle in eq. (<ref>). This is a common feature of gWZW models originated in the gauging of space-time symmetries, and allows us to identify the some components of the five-dimensional spin connection as vierbein in a four-dimensional supergravity theory. Thus, the original gauge invariance under the five-dimensional Lorentz group is now interpreted as invariance under the four-dimensional de Sitter group. We therefore decompose the index A=( a,4) with a=0,1,2,3, and rename ω ^a4=-ω ^4a=e^a as vierbein one-form. The five-dimensional Lorentz curvature is also decomposed, as follows: ℛ^ab = dω ^ab+ω _ c^aω ^cb+ω _ 4^aω ^4b=R^ab-e^ae^b ,ℛ^a4 = De^a=T^a .Consequently, we split the action principle into its bosonic and fermionic sectors as S=S_B+S_F, being S_F the contribution depending on spinor fields in the r.h.s. of eq. (<ref> ). In terms of the previous decomposition, the bosonic sector of the action is given byS_B = κ∫[ -4( ϵ _abcdR^abe^c- 1/3ϵ _abcde^ae^be^c) Dϕ ^d. . 
+ϵ _abcd[ R^abR^cd-2R^abe^ce^d+e^ae^be^ce^d] ϕ ^4+iR^abR_abφ +2iT^ae_adφ] .In the same way, the fermionic sector of the action is given in terms of the four-dimensional quantities as followsS_F = κ∫ -8[ χ̅( R^ab-e^ae^b) Γ _ab𝒟ψ +2χ̅T^aΓ _aΓ𝒟ψ +𝒟ψ̅( R^ab-e^ae^b) Γ _abχ. . -1/2𝒟χ̅( R^ab-e^ae^b) Γ _ab𝒟χ -𝒟χ̅T^aΓ _aΓ𝒟χ] ,where the five-dimensional Lorentz covariant derivatives are given in terms of the four-dimensional ones according to[ From now on, we denote Γ ^5≡Γ.]𝒟ψ̅=Dψ̅-1/2e^aψ̅Γ _aΓ ,𝒟ψ =Dψ +1/2 e^aΓ _aΓψ . § DYNAMICS From now on, we will denote as ℒ_G to the bosonic Lagrangian four-form, and identify to the fermionic contribution as a matter Lagrangian, i.e.,ℒ_G = κ[ ϵ _ABCDEℛ^AB ℛ^CDϕ ^E+iℛ^ABℛ_ABφ] ,ℒ_M = -8κ[ χ̅ℛ^ABΓ _AB𝒟ψ +𝒟ψ̅ℛ^ABΓ _ABχ - 1/2𝒟χ̅ℛ^ABΓ _AB𝒟χ ] .Although we hold the writing in terms of the five-dimensional indices for convenience, it is important to recall that they describe a four-dimensional theory with ω ^AB packing the spin connection and vielbein forms, while ℛ^AB contains the Lorentz curvature and torsion. In these terms, we introduce a generalized spin form Σ _AB, such that the variation of ℒ_M with respect to ω ^AB is given byδ _ωℒ_M=-kδω ^AB∗Σ _AB ,with ∗ the Hodge dual operator and k a dimensional constant. The components of the generalized spin form are split into the four dimensional spin form and energy-momentum forms, as followsΣ _a4=𝒯_a,Σ _AB=σ _ab .Therefore, the field equations can be written asδ _ωℒ_G-kδω ^AB∗Σ _AB=0 .The variation of ℒ_G with respect to ω ^AB is given byδ _ωℒ_G=2κδω ^AB[ ϵ _ABCDEℛ^CD𝒟ϕ ^E+iℛ_AB dφ] .On the other hand, the field variation of the matter Lagrangian is given byδ _ωℒ_M = -8κ[ -𝒟χ̅δω ^ABΓ _AB𝒟ψ +χ̅δω ^ABΓ _AB𝒟^2ψ +1/4χ̅ℛ ^ABδω ^CDΓ _ABΓ _CDψ.-1/4δω ^CDψ̅ℛ^ABΓ _CDΓ _ABχ -𝒟^2ψ̅δω ^ABΓ _ABχ +𝒟ψ̅δω ^ABΓ _AB𝒟 χ +1/8δω ^CDχ̅ℛ^ABΓ _CDΓ _AB𝒟χ-1/8𝒟χ̅ℛ^ABδω ^CDΓ _ABΓ _CDχ -1/2𝒟^2χ̅δω ^ABΓ _AB𝒟χ -1/2𝒟χ̅δω ^ABΓ _AB𝒟^2χ ,where we have used the identitiesδ𝒟ψ̅ = -1/4δω ^ABψ̅ Γ _AB, δ𝒟ψ =1/4δω ^ABΓ _ABψ ,δ𝒟χ̅ = -1/4δω ^ABχ̅ Γ _AB, δ𝒟χ =1/4δω ^ABΓ _ABχ .By integrating by parts and plugging in the Bianchi identities𝒟^2χ̅ = -1/4ℛ^ABχ̅Γ _AB, 𝒟^2χ =1/4ℛ^ABΓ _ABχ ,𝒟^2ψ̅ = -1/4ℛ^ABψ̅Γ _AB, 𝒟^2ψ =1/4ℛ^ABΓ _ABψ ,we obtainδ _ωℒ_M = -8κδω ^AB[ 𝒟χ̅Γ _AB𝒟ψ +𝒟ψ̅ Γ _AB𝒟χ +ℛ_AB( ψ̅χ -χ̅ψ -1/2d( χ̅χ) ) .. +1/2ϵ _ABCDEℛ^CD( ψ̅ Γ ^Eχ -χ̅Γ ^Eψ -1/2𝒟(χ̅Γ ^Eχ) ) ] .Finally we have an expression for the dual spin form∗Σ _AB = 8κ/k[ 𝒟χ̅Γ _AB𝒟ψ +𝒟ψ̅Γ _AB𝒟χ + ℛ_AB( ψ̅χ -χ̅ψ -1/2d ( χ̅χ) ) .. +1/2ϵ _ABCDEℛ^CD( ψ̅ Γ ^Eχ -χ̅Γ ^Eψ -1/2𝒟(χ̅Γ ^Eχ) ) ] .The field equations coming from the variation of the action with respect to ω ^AB are therefore given by ε _AB=0 withε _AB = ϵ _ABCDEℛ^CD𝒟ϕ ^E+i ℛ_ABdφ -4[ 𝒟χ̅Γ _AB𝒟 ψ +𝒟ψ̅Γ _AB𝒟χ.. +ℛ_AB( ψ̅χ -χ̅ψ -1/2 d( χ̅χ) ) +1/2ϵ _ABCDE ℛ^CD( ψ̅Γ ^Eχ -χ̅Γ ^Eψ - 1/2𝒟( χ̅Γ ^Eχ) )] .or, equivalently in components, the field equations related with the independent variations δ e^a and δω ^ab are given by ε _a4 = -ϵ _abcdℛ^bc( Dϕ ^d+e^dϕ ^4) +iT_adφ -4[ 𝒟χ̅Γ _aΓ𝒟ψ +𝒟 ψ̅Γ _aΓ𝒟χ +T_a( ψ̅χ - χ̅ψ -1/2d( χ̅χ) ) .. -1/2ϵ _abcdℛ^bc( ψ̅Γ ^dχ -χ̅Γ ^dψ -1/2D( χ̅ Γ ^dχ) -1/2e^dχ̅Γ ^4χ)], ε _ab = ϵ _abcd4ℛ^cd( dϕ ^4-e^bϕ _b) -2ϵ _abcdT^c( Dϕ ^d+e^dϕ ^4) +iℛ_abdφ -4[ 𝒟χ̅Γ _ab𝒟ψ +𝒟ψ̅Γ _ab𝒟χ +ℛ_ab( ψ̅χ - χ̅ψ -1/2d( χ̅χ) ) .+1/2ϵ _abcdℛ^cd( ψ̅Γχ - χ̅Γψ -1/2d( χ̅Γχ) +1/2e_bχ̅Γ ^bχ). 
-ϵ _abcdT^c( ψ̅Γ ^dχ -χ̅ Γ ^dψ -1/2D( χ̅Γ ^dχ) -1/2e^d( χ̅Γχ) ) ] ,where D denotes the covariant derivative defined with respect to the four dimensional spin connection ω ^ab.§ NON-RELATIVISTIC LIMIT§.§ Chern–Simons theory Let us now consider a non-relativistic contraction of the gauge algebra (<ref>). We split Lorentz index A in the space-time components as A=( 0,I) with I=1,… ,4. Moreover, we rename and perform the following rescaling on the Poincaré superalgebra generators as in <cit.>P_0 ⟶ H P_I ⟶λ P_I , J_0I ⟶λ G_I , Q^α ⟶√(λ)Q^α ,Q̅ _α ⟶√(λ)Q̅_α , K ⟶λ K .When taking the limit λ→∞, the superalgebra (anti)commutation relations become[ J_IJ,P_K] = η _JKP_I-η _IKP_J ,[ G_I,H] = P_I ,[ J_IJ,J_KL] = η _JKJ_IL+η _ILJ_JK-η _IKJ_JL-η _JLJ_IK ,[ J_IJ,G_K] = η _JKG_I-η _IKG_J ,[ J_IJ,Q^α] = -1/2( Γ _IJ) _ β^αQ^β , [ J_IJ,Q̅_α] = 1/2( Γ _IJ) _ α^βQ̅_β ,{ Q^α,Q̅_β} = 2( Γ ^I) _ β^αP_I-4iδ _β^αK .The non-relativistic limit of the Poincaré superalgebra (<ref>) reproduces a supersymmetric extension of the Galilei algebra <cit.>. We now introduce a one-form gauge connection A and the corresponding gauge curvature F, to whose components we denoteA= τ H+h^IP_I+ω ^IG_I+1/2ω ^IJJ_IJ+bK+ ψ̅_αQ^α-Q̅_αψ ^α , F= T̂H+T̂^IP_I+R^IG_I+1/2ℛ ^IJJ_IJ+F_bK+ℱ̅_αQ^α-Q̅_α ℱ^α .The components of the new gauge curvature are explicitly given byT̂ = dτ ,T̂^I = D_ωh^I+ω ^Iτ -2ψ̅_α( Γ ^I) _ β^αψ ^α ,ℛ^I = D_ωω ^I ,ℛ^IJ = dω ^IJ+ω _ K^Iω ^KJ , F_b = db+4iδ _β^αψ̅_αψ ^α ,ℱ̅_α = D_ωψ̅_α,ℱ^α = D_ωψ ^α ,where D_ω is the covariant derivative with respect to the spatial spin connection ω ^IJ. At the level of the invariant tensor, one can check that the non-relativistic limit of the non-vanishing components (<ref>) reproduces⟨ J_IJJ_KLH⟩ = 1/2ϵ _IJKL ,with the convention ϵ _0IJKL=ϵ _IJKL. Then, the five-dimensional CS Lagrangian takes the formℒ_CS^NR=κ/4ϵ _IJKL ℛ^IJℛ^KLτ ,Note that, although the non-relativistic limit of the Poincaré superalgebra is a supersymmetric extension of the Galilei algebra, the CS Lagrangian does not lead to supergravity. This is a consequence of the fact that, in this limit, the invariant tensor of the algebra only carries non-zero components in the bosonic sector. As we shall see in the next section, the Carrollian limit preserves supergravity. In a future work, it would be interesting to explore the existence of a non-relativistic counterpart of Poincaré supergravity that preserves supersymmetry. §.§ gWZW model Let us now consider the non-relativistic limit of the gWZW action principle obtained in section <ref>. As we did in the relativistic case, in order to construct the transgression field theory, we introduce a secondary gauge connection A^z related with A by means of a gauge transformation. We denote the components of A^z and the supergroup parameters as follows:A^z = τ H+H^IP_I+W^IG_I+1/2W^IJJ_IJ+BK+Ψ̅ _αQ^α-Q̅_αΨ ^α ,z= e^-χ̅_αQ^αe^Q̅_βχ ^βe^-φ Ke^-ϕ ^IP_Ie^-ϕ H .The four dimensional action is reduced toℒ_gWZW^NR=κϵ _IJKLℛ^IJ ℛ^KLϕ ,where we rename ϕ ^0=ϕ. We now perform a second index decomposition; the spatial index of the five-dimensional theory is split as I=( i,4) where i=1,2,3 is the spatial index of the non-relativistic four-dimensional theory. We also decompose the non-relativistic spin connection and curvature as follows:ω ^I = ( ω ^i,τ) , ω ^IJ = ( ω ^ij,ω ^i4) ≡( ω ^ij,e^i) , ℛ^I = ( ℛ^i,T) ,ℛ^IJ = ( ℛ^ij,ℛ^i4) ≡( ℛ^ij,T^i) ,withℛ^ij = R^ij-e^ie^j ,T^i = de^i+ω _ k^ie^k,where R^ij=dω ^ij+ω _ k^iω ^kj is the𝔰𝔬( 3) curvature. 
Since h^A is not longer present in the theory, we interpret e^i as spatial vielbein. In this way, the Lagrangian density is written asℒ_gWZW^NR=4κϵ _ijkℛ^ij ℛ^k4ϕ =4κϵ _ijk( R^ij-e^ie^j) T^kϕ .As it happens in the non-relativistic five-dimensional CS theory, the resulting Lagrangian density is purely bosonic.§ ULTRA-RELATIVISTIC LIMIT§.§ Chern–Simons theory Let us now consider the ultra relativistic limit of the five-dimensional Poincaré superalgebra <cit.> . With this purpose, we consider again the space-time splitting of the Lorentz index A in the Poincaré superalgebra. We rename and rescale the generators asP_0 ⟶λ H ,J_0I ⟶λ G_I , K⟶λ K , Q^α ⟶√(λ)Q^α ,Q̅ _α ⟶√(λ)Q̅_α .The resulting ultra-relativistic superalgebra is obtained by taking the limit λ→∞, and is given by the following supersymmetric extension of the Carroll algebra <cit.> in five dimensions[ J_IJ,P_K] = η _JKP_I-η _IKP_J ,[ G_I,P_J] = η _IJH ,[ J_IJ,J_KL] = η _JKJ_IL+η _ILJ_JK-η _IKJ_JL-η _JLJ_IK ,[ J_IJ,G_K] = η _JKG_I-η _IKG_J ,[ J_IJ,Q^α] = -1/2( Γ _IJ) _ β^αQ^β ,[ J_IJ,Q̅_α] = 1/2( Γ _IJ) _ α^βQ̅_β ,{ Q^α,Q̅_β} = 2( Γ ^0) _ β^αH-4iδ _β^αK .Following the procedure used in the non-relativistic case, we now introduce a one-form gauge connection A and the corresponding gauge curvature F=dA+A^2. Since the ultra-relativistic algebra has the same number of generators as its non-relativistic analogue, we denote the components of A and F as they are given in eqs. (<ref>) and (<ref>) respectively. In this case, the components of the gauge curvature are given byT̂ = dτ +ω _Jh^J-2ψ̅_α( Γ ^0) _ β^αψ ^α ,T̂^I = D_ωh^I ,ℛ^IJ = dω ^IJ+ω _ K^Iω ^KJ ,ℛ^I = D_ωω ^I , F_b = db+4iδ _β^αψ̅_αψ ^α ,ℱ̅_α = D_ωψ̅_α ,ℱ^α = D_ωψ ^α .The CS Lagrangian invariant under the transformation of this algebra is given byℒ_CS^UR = κϵ _IJKL( 1 /4ℛ^IJℛ^KLτ +ℛ^IJℛ ^Kh^L)+iκ/4ℛ^IJℛ_IJb-κ( ψ̅ℛ^IJΓ _IJD_ωψ +D_ω ψ̅ℛ^IJΓ _IJψ) .In contrast with the non-relativistic Lagrangian, the invariant tensor still carry non-vanishing components in the fermionic sector after taking the limit and, as a consequence, the supergravity is preserved in the ultra-relativistic regime. §.§ gWZW model Let us now consider the ultra-relativistic limit of the four dimensional gWZW Lagrangian. As before, we introduce a group element z and a non-linear gauge field A^z, obtained from A through a large gauge transformation. We denote to the components of z and A^z as in eq. (<ref>) respectively. In this case, the four-dimensional Lagrangian is reduced toℒ_gWZW^UR = κ[ ϵ _IJKL ℛ^IJℛ^KLϕ +4ϵ _IJKLℛ^IJ ℛ^Kϕ ^L+iℛ^IJℛ_IJφ. . -8( χ̅ℛ^IJΓ _IJD_ωψ +D_ωψ̅ℛ^IJΓ _IJχ -1 /2D_ωχ̅ℛ^IJΓ _IJD_ωχ) ] .As before, we shall decompose the five-dimensional spatial index, following the structure and change of notation of eqs. (<ref>) and (<ref>). In these terms, the four-dimensional ultra-relativistic gWZW Lagrangian is given byℒ_gWZW^UR = κ[ 4ϵ _ijk ℛ^ijT^kϕ +8ϵ _ijkT^iℛ^jϕ ^k+4ϵ _ijkℛ^ijℛ^kρ -4ϵ _ijk ℛ^ijTϕ ^k+iℛ^ijℛ_ijφ +2iT^iT_iφ. -8( χ̅ℛ^ijΓ _ijD_ωψ +2 χ̅T^iΓ _iΓD_ωψ +D_ω ψ̅ℛ^ijΓ _ijχ +2D_ωψ̅ T^iΓ _iΓχ. . . -1/2D_ωχ̅ℛ ^ijΓ _ijD_ωχ -D_ωχ̅ T^iΓ _iΓD_ωχ) ] ,with the convention ϵ _1234=1 and ϕ ^4=ρ.§ CONCLUDING REMARKS In this article, we have obtained a four-dimensional theory for supergravity. 
The construction of the action makes use of the five-dimensional 𝒩=1 Poincaré supergroup as gauge group, a one-form gauge connection valued in it, and a transformed gauge connection in which the gauge parameter is valued in the coset space between the five-dimensional Poincaré superalgebra and its Lorentz algebra. The existence of such a transformed connection is enough to formulate five-dimensional standard supergravity as a gauge invariant theory of the Poincaré supergroup by means of the SW-GN formalism. This is carried out by interpreting the new connection as the fundamental field of a supergravity theory (instead of an equivalent gauge connection associated to the original one by means of a symmetry transformation), which in this case means considering the transformed one-form V^A associated to the translation generators as the fünfbein. Furthermore, we have also introduced a transgression field theory that leads to a gWZW model. As in the SW-GN formalism, the field content of such a model is given by the original one-form gauge connection in addition to the parameter zero-forms. The resulting action principle is fully gauge-invariant under the Poincaré supergroup and corresponds to a supersymmetric extension of the even-dimensional topological gravity introduced by A. H. Chamseddine in ref. <cit.>. Taking the above-mentioned gWZW supergravity theory as a starting point, we have studied two specific regimes, namely, the non- and ultra-relativistic limits. For the first one, we have found an 𝒩=1 supersymmetric extension of the Galilei algebra in five dimensions. Moreover, we derived a non-linear connection that allows the formulation of five-dimensional gauge invariant theories by means of the SW-GN formalism, and also obtained the corresponding four-dimensional gWZW model. We have found that, in this regime, the resulting gWZW model is not supersymmetric. Thus, we have obtained a non-relativistic gravity theory in four dimensions, which is invariant under the bosonic sector of the mentioned non-relativistic algebra. On the other hand, for the second case, we have found a five-dimensional supersymmetric extension of the Carroll algebra, obtained a non-linear realization of it, and finally derived a four-dimensional gWZW action principle that, in contrast with the non-relativistic case, preserves supergravity.It would be interesting to consider the semigroup expansion method <cit.> in order to derive a non-relativistic five-dimensional supergravity action. As was shown in <cit.>, some semigroups are useful for deriving non-relativistic algebras with a non-degenerate invariant tensor. It would be worth exploring whether this particularity can be extended in the presence of supersymmetry. In particular, following the examples obtained in three spacetime dimensions <cit.>, one could explore the construction of a truly supersymmetric gravity action in five spacetime dimensions along with its dimensional reduction. It would also be interesting to extend the analysis done in this work to the case of a non-vanishing cosmological constant in the starting five-dimensional supergravity theory <cit.>. Another aspect that deserves to be studied is the generalization of our results to 𝒩-extended supergravities, with and without a cosmological constant. Finally, it would be worth extending the analysis done in this work to the case of hypergravity, both in three and five spacetime dimensions <cit.>.§ ACKNOWLEDGEMENTS The authors would like to thank P.
Salgado for enlightening discussions. P.C. acknowledges financial support from the National Agency for Research and Development (ANID) through Fondecyt grants No. 1211077 and 11220328. F.I. acknowledges financial support from ANID through Fondecyt grant 1211219. E.R. acknowledges financial support from ANID through SIA grant No. SA77210097 and Fondecyt grants No. 11220486 and 1231133. P.C. and E.R. would like to thank the Dirección de Investigación and Vicerectoría de Investigación of the Universidad Católica de la Santísima Concepción, Chile, for their constant support. S.S. acknowledges financial support from Universidad de Tarapacá, Chile. | http://arxiv.org/abs/2312.16347v1 | {
"authors": [
"Patrick Concha",
"Fernando Izaurieta",
"Evelyn Rodríguez",
"Sebastián Salgado"
],
"categories": [
"hep-th"
],
"primary_category": "hep-th",
"published": "20231226223615",
"title": "Four dimensional topological supergravities from transgression field theory"
} |
Denotational semantics for languages for inference: semirings, monads, and tensors Cristina Matache University of Edinburgh [email protected] Sean Moss University of Oxford [email protected] Sam Staton University of Oxford [email protected] Ariadne Si Suo University of Oxford [email protected] ========== The Vera C. Rubin Observatory is slated to observe nearly 20 billion galaxies during its decade-long Legacy Survey of Space and Time. The rich imaging data it collects will be an invaluable resource for probing galaxy evolution across cosmic time, characterizing the host galaxies of transient phenomena, and identifying novel populations of anomalous systems. To facilitate these studies, we introduce a convolutional variational autoencoder trained to estimate the redshift, stellar mass, and star-formation rates of galaxies from multi-band imaging data. We train and test our physics-informed CVAE on a spectroscopic sample of ∼26,000 galaxies within z<1 imaged through the Dark Energy Camera Legacy Survey. We show that our model can infer redshift and stellar mass more accurately than the latest image-based self-supervised learning approaches, and is >100x faster than more computationally-intensive SED-fitting techniques. Using a small sample of Green Pea and Red Spiral galaxies reported in the literature, we further demonstrate how this CVAE can be used to rapidly identify rare galaxy populations and interpret what makes them unique.§ INTRODUCTION At the cornerstone of a range of diverse astrophysical domains sit the chemical and dynamical histories of galaxies. Aspects of these histories can be inferred piecewise from observations. From broad-band photometry, one can infer the underlying spectral energy distribution (SED); using morphological features, one can reconstruct a galaxy's merger/interaction history. Novel galaxy sub-populations have been discovered through detailed manual inspection along these axes; some prominent examples include `Green Pea galaxies', defined by their compact shapes and high star-formation rates <cit.>, and `passive Red Spirals' – spiral galaxies with minimal star formation, whose colors seem to contradict a morphology that is commonly tied to much more active galaxies <cit.>. Several of these sub-populations have only been identified through manual vetting by scores of human volunteers (e.g., <cit.>). The rise of deep convolutional neural networks, which allow for morphological classification of galaxies from image data <cit.>, and of simulation-based inference to estimate their underlying SED parameters from photometry <cit.>, has paved the way for low-latency galaxy analysis in the era of synoptic surveys. However, anomaly detection within these frameworks remains an open question: out-of-distribution events may be identified by flagging high reconstruction errors at test time <cit.>, but this approach can produce samples contaminated by non-astrophysical anomalies such as poorly-calibrated images and chip gaps. Here, we introduce a technique to guide the latent features of a convolutional variational auto-encoder (CVAE) toward the physical parameters of a galaxy. CVAEs are a class of neural network designed for compression.
Variational AEs, in particular, typically employ the Kullback–Leibler (KL) divergence to enforce a continuous and probabilistic latent distribution characterized by a multivariate standard Normal. CVAEs have been extensively used in the literature for galaxy image reconstruction <cit.>. Here, we extend this methodology to disentangle the observational properties of a galaxy (e.g., its orientation) from its intrinsic properties in the latent space, and discover more meaningful anomalies. § DATA AND METHODOLOGYIn this section, we describe our methodology for designing and training a physics-informed CVAE. Our CVAE encodes a galaxy image into 5 latent dimensions, the first four corresponding to the orientation, redshift, stellar mass, and star-formation rate of the galaxy. The fifth dimension encodes any remaining structure not captured by the first four features. We select a sample of z<1 spectroscopic galaxies from both the LSST deep drilling fields catalogue from <cit.> (hereafter, the Z22 sample) and from GAMA DR4 <cit.>. These catalogues report the average star-formation rate and stellar mass of each galaxy obtained with the SED-fitting codes and , respectively, along with their associated uncertainties. We include the Z22 sample to provide training support at high redshift, and the GAMA catalogue to include low-redshift objects for which rich morphological information is available. For galaxies that overlap with the Dark Energy Camera Legacy Survey (DECaLS) footprint, we then download 128x128 px DECaLS co-added images centered at the location of each galaxy in grz[<https://www.legacysurvey.org/decamls/>]. DECaLS data reaches a 5-σ depth of g∼24, making it a sufficient analog for upcoming Rubin data (g∼24.5[<https://www.lsst.org/scientists/keynumbers>]). We remove five galaxies that are missing images in at least one filter, leaving us with a combined sample of 26,126 galaxies. As in <cit.>, we pre-process our images in each filter b by applying the following normalization: x_b = tanh(sinh^-1(βx_raw,b/⟨ max(x_raw, b)⟩_b)) where β = 10 was chosen empirically to allow our image maxima to be well-distributed within [0, 1]. We then interpolate each image to 69x69 px to reduce training times. Next, we use the package to estimate the orientation of each galaxy. We stack the grz images and subtract the median background level from the stack, assuming a 10x10 px box size and a 3x3 px median filter. We then convolve each background-subtracted image with a 2D Gaussian filter and construct a catalog of all identified sources. The morphological features of each source are calculated by default, and we extract the orientation of the source whose centroid is closest to the center of the image. We then split our data into an 80% training sample and a 20% validation sample. To assess the utility of our CVAE for anomaly detection, we augment our validation set with a sample of 294 Red Spiral galaxies from <cit.> and the KISS sample of 13 Green Pea galaxies <cit.>. The Red Spiral galaxies were consolidated by the 160,000 volunteers of the Galaxy Zoo project. We construct our CVAE using the same architecture as in <cit.>[<https://github.com/jwuphysics/galaxy-autoencoders.>]. The network consists of three convolution encoding layers, three deconvolution decoding layers, and a latent dimension of 5. We train the network using a loss function consisting of three terms.
The first is the reconstruction loss for an image of n pixels, the second is the KL divergence between the fifth latent feature and a standard univariate Normal, and the third is the mean squared error between the means of the first four latent distributions μ and the galaxy's physical properties: L = β_0 1/n∑_i=1^n (x_i - x̂_i)^2 + (σ_5^2 + μ_5^2 - log(σ_5) - 1) + β_1 ∑_k=1^4 1/σ_p_k^2 (p_k - μ_k)^2 where p is a vector containing the physical properties of the galaxy (orientation, spectroscopic redshift, and survey-reported stellar mass and star-formation rate) and σ_p are their reported 1-σ uncertainties (we assume an uncertainty of 0.1 rad for all calculated orientations). We leave the fifth latent dimension of our CVAE free to capture the remaining morphological information in each image. The coefficients β_0 and β_1 are hyperparameters; we have found through moderate tuning that setting β_0 = 10 and β_1 = 50 results in reasonable parameter estimates without significantly degrading the quality of reconstructed images. As a baseline comparison, we train a vanilla CVAE with the same architecture but excluding the physical term from the loss function (the KL-divergence is now calculated across all 5 latent features, which are unphysical). We train each network for 1000 epochs using the standard optimizer <cit.> with a learning rate of 5×10^-6 and a batch size of 128 on two Nvidia A100 80GB GPUs, confirming convergence of the loss and saving the model weights at the epoch where the validation loss is minimized. Our code and trained models can be found at the associated GitHub repository for this work[<https://github.com/alexandergagliano/physics_CVAE>].§ RESULTS AND DISCUSSION The catalog-level and CVAE-estimated physical properties for the validation sample are compared in Fig. <ref>. The CVAE appears to successfully recover the physical parameters of galaxies in the validation set. The predictions are unbiased, with mean absolute deviations of at most 0.1 (for star-formation rate). In addition, few catastrophic outliers are observed for most galaxy properties (the scatter is large for star-formation rate, but comparable to the error bars on publicly reported SED estimates). An exception is the orientation ϕ: because the parameter is periodic in nature (galaxies with orientations π/2 and -π/2 are indistinguishable), this feature is poorly reconstructed at high inclinations. To quantitatively evaluate the CVAE's performance on the validation set, we fit a linear model between each CVAE-inferred feature and its spectroscopically-derived counterpart. We report the R^2 value of these regression models for spectroscopic redshift and stellar mass in Table 1, alongside those reported for other image-based parameter inference methods from <cit.>. The values from <cit.> were calculated on a sample of ∼200k z<0.6 DECaLS galaxies, roughly an order of magnitude greater than the sample considered in this work. Despite the smaller sample available for training, our technique far outperforms the reported techniques (particularly those trained using self-supervised learning) for estimating these galaxy properties, and to higher redshifts (z<1). Our R^2 value for star-formation rate is 0.26; though unavoidable degeneracies exist at the photometric level, tighter constraints from SED fits would likely improve the CVAE's ability to infer this property in the future.
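To make the three-term objective concrete, the following is a minimal PyTorch sketch of the training loss described above; this is our own illustration rather than the authors' released code, tensor names are hypothetical, and we assume the encoder returns per-feature means and standard deviations for the five latent dimensions.

```python
import torch

def cvae_loss(x, x_hat, mu, sigma, p, sigma_p, beta0=10.0, beta1=50.0):
    """Three-term loss from the text: reconstruction + KL-style penalty on the
    5th latent feature + physics anchoring of the first four latent means.
    x, x_hat : (B, C, H, W) input and reconstructed images
    mu, sigma: (B, 5) latent means and standard deviations from the encoder
    p        : (B, 4) targets (orientation, redshift, stellar mass, SFR)
    sigma_p  : (B, 4) reported 1-sigma uncertainties on those targets
    """
    # Term 1: mean squared reconstruction error over the n pixels of each image
    recon = ((x - x_hat) ** 2).flatten(start_dim=1).mean(dim=1)

    # Term 2: penalty tying the 5th latent feature to a standard Normal
    kl = sigma[:, 4] ** 2 + mu[:, 4] ** 2 - torch.log(sigma[:, 4]) - 1.0

    # Term 3: uncertainty-weighted MSE between latent means 1-4 and physics
    phys = (((p - mu[:, :4]) ** 2) / sigma_p ** 2).sum(dim=1)

    return (beta0 * recon + kl + beta1 * phys).mean()
```

Note that the KL-style term is applied only to the fifth latent feature, exactly as in the equation above, so the first four features are anchored to the physical parameters rather than to a standard Normal.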
Surprisingly, our network is able to predict reasonable estimates for the spectroscopic redshifts and stellar masses of the Red Spiral and Green Pea galaxies from images alone, despite their anomalous colors. The redshifts of the Green Peas are still systematically underestimated by the model, which skews the predicted star-formation rates downward. We next construct a support vector machine (SVM) classifier to recover our sample of anomalous galaxies using only encoded features. We use the implementation from , with a radial basis function (RBF) kernel and scaled kernel coefficients. In each of five trials, we randomly split the combined sample into 80% train and 20% test sets. We then balance our three classes across the training set with and evaluate performance on the test set. We calculate the fraction of each class recovered by the classifier in each case, and average the model's performance across all five trials. We find that our CVAE recovers 94±2% of Red Spiral galaxies with a 5% contamination rate from non-anomalous galaxies, and 67±0% of Green Pea galaxies with a 2% contamination rate from non-anomalous galaxies (though we caution that each test sample only contains 2-3 Green Peas). The same classifier trained on the latent features of the vanilla CVAE recovers 94±2% of Red Spiral galaxies with a 6% contamination rate and 47±27% of Green Pea galaxies with a 4% contamination rate. We conclude that the physics-informed CVAE achieves marginally superior, and significantly more consistent, performance over the vanilla CVAE in recovering Green Peas. We next plot 2D projections of the latent space for the physics-informed and vanilla CVAE models in Fig. <ref>. For the physics-informed plots, we include the decision boundaries from a 2D SVM with a linear kernel. Though the Red Spirals fall toward the edges of the latent parameter distributions in the baseline, the plots lack physical intuition. The latent space of the physics-informed CVAE, however, reveals the properties that make these sub-populations anomalous. Green Pea galaxies push toward the highest star-formation rates, lowest stellar masses, and lowest "morphology" values of the sample (this can be considered a proxy for compactness). Red Spirals, in contrast, are the most extended objects in the sample and have low star-formation rates for their stellar masses, with most falling in or below the `green valley' that distinguishes active from quenched galaxies <cit.>. Finally, we identify galaxies in our sample most similar to the labeled Green Peas and Red Spirals. We use the method to identify ten galaxies in the non-anomalous sample with latent features most similar to each galaxy in the anomalous sample. We present a sample of galaxies with the lowest latent-space Minkowski distance to multiple Red Spirals, and those with the lowest distance to multiple Green Peas, in Fig. <ref>. These galaxies are prime targets for upcoming follow-up studies, to confirm their associations and further characterize their evolutionary histories. The ability to rapidly infer physical parameters competitive with state-of-the-art SED models using only three images of a galaxy is encouraging. Once trained, evaluation of our CVAE takes 7 ms per galaxy on an Apple M2 Max chip. This is substantially less than the ∼10 hrs required for rigorous SED fitting using modern approaches <cit.>, and >100x faster than existing simulation-based inference (SBI) models optimized for this task <cit.>.
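The anomaly-recovery experiment above can be sketched with scikit-learn as follows; this is our own illustration, the text's class-balancing package is elided so we substitute SVC's class_weight="balanced" option as a stand-in, and variable names are hypothetical.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def recovery_fractions(Z, y, n_trials=5, seed=0):
    """Z: (N, 5) latent features; y: labels {0: normal, 1: red spiral, 2: green pea}.
    Returns per-class recovered fractions averaged over random 80/20 splits.
    (With only 13 Green Peas, each test split holds just a few of them.)"""
    rng = np.random.RandomState(seed)
    per_class = []
    for _ in range(n_trials):
        Z_tr, Z_te, y_tr, y_te = train_test_split(
            Z, y, test_size=0.2, random_state=rng)
        # RBF kernel with scaled gamma, as in the text; class_weight stands in
        # for the (unnamed) training-set balancing step.
        clf = SVC(kernel="rbf", gamma="scale", class_weight="balanced")
        clf.fit(Z_tr, y_tr)
        pred = clf.predict(Z_te)
        per_class.append([np.mean(pred[y_te == c] == c) for c in np.unique(y)])
    return np.mean(per_class, axis=0), np.std(per_class, axis=0)
```

Here class_weight reweights the SVM's per-class penalty, which approximates, but is not identical to, explicitly resampling the training set as described in the text.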
With our approach, estimating the redshift, stellar mass, and star-formation rate of all 20B galaxies observed by the Vera C. Rubin Observatory would require only ∼1500 CPU-hours (albeit without the added benefit of full posterior distributions, as are provided by SBI). New, neural network-based inference approaches like the one presented here will help bring scalability and interpretability to petabyte-scale data exploration among astronomical surveys, and facilitate the search for new galaxy sub-types. We thank the anonymous reviewers for helpful feedback which improved this manuscript. This work is supported by the National Science Foundation under Cooperative Agreement PHY-2019786 (The NSF AI Institute for Artificial Intelligence and Fundamental Interactions, http://iaifi.org/). VAV acknowledges support by the NSF through grant AST-2108676. The Legacy Surveys consist of three individual and complementary projects: the Dark Energy Camera Legacy Survey (DECaLS; Proposal ID #2014B-0404; PIs: David Schlegel and Arjun Dey), the Beijing-Arizona Sky Survey (BASS; NOAO Prop. ID #2015A-0801; PIs: Zhou Xu and Xiaohui Fan), and the Mayall z-band Legacy Survey (MzLS; Prop. ID #2016A-0453; PI: Arjun Dey). DECaLS, BASS and MzLS together include data obtained, respectively, at the Blanco telescope, Cerro Tololo Inter-American Observatory, NSF’s NOIRLab; the Bok telescope, Steward Observatory, University of Arizona; and the Mayall telescope, Kitt Peak National Observatory, NOIRLab. Pipeline processing and analyses of the data were supported by NOIRLab and the Lawrence Berkeley National Laboratory (LBNL). The Legacy Surveys project is honored to be permitted to conduct astronomical research on Iolkam Du’ag (Kitt Peak), a mountain with particular significance to the Tohono O’odham Nation.NOIRLab is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. LBNL is managed by the Regents of the University of California under contract to the U.S. Department of Energy.This project used data obtained with the Dark Energy Camera (DECam), which was constructed by the Dark Energy Survey (DES) collaboration. Funding for the DES Projects has been provided by the U.S. Department of Energy, the U.S. National Science Foundation, the Ministry of Science and Education of Spain, the Science and Technology Facilities Council of the United Kingdom, the Higher Education Funding Council for England, the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, the Kavli Institute of Cosmological Physics at the University of Chicago, Center for Cosmology and Astro-Particle Physics at the Ohio State University, the Mitchell Institute for Fundamental Physics and Astronomy at Texas A&M University, Financiadora de Estudos e Projetos, Fundacao Carlos Chagas Filho de Amparo, Financiadora de Estudos e Projetos, Fundacao Carlos Chagas Filho de Amparo a Pesquisa do Estado do Rio de Janeiro, Conselho Nacional de Desenvolvimento Cientifico e Tecnologico and the Ministerio da Ciencia, Tecnologia e Inovacao, the Deutsche Forschungsgemeinschaft and the Collaborating Institutions in the Dark Energy Survey. 
The Collaborating Institutions are Argonne National Laboratory, the University of California at Santa Cruz, the University of Cambridge, Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas-Madrid, the University of Chicago, University College London, the DES-Brazil Consortium, the University of Edinburgh, the Eidgenossische Technische Hochschule (ETH) Zurich, Fermi National Accelerator Laboratory, the University of Illinois at Urbana-Champaign, the Institut de Ciencies de l’Espai (IEEC/CSIC), the Institut de Fisica d’Altes Energies, Lawrence Berkeley National Laboratory, the Ludwig Maximilians Universitat Munchen and the associated Excellence Cluster Universe, the University of Michigan, NSF’s NOIRLab, the University of Nottingham, the Ohio State University, the University of Pennsylvania, the University of Portsmouth, SLAC National Accelerator Laboratory, Stanford University, the University of Sussex, and Texas A&M University.BASS is a key project of the Telescope Access Program (TAP), which has been funded by the National Astronomical Observatories of China, the Chinese Academy of Sciences (the Strategic Priority Research Program “The Emergence of Cosmological Structures” Grant #XDB09000000), and the Special Fund for Astronomy from the Ministry of Finance. The BASS is also supported by the External Cooperation Program of Chinese Academy of Sciences (Grant #114A11KYSB20160057), and Chinese National Natural Science Foundation (Grant #12120101003, #11433005).The Legacy Survey team makes use of data products from the Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE), which is a project of the Jet Propulsion Laboratory/California Institute of Technology. NEOWISE is funded by the National Aeronautics and Space Administration.The Legacy Surveys imaging of the DESI footprint is supported by the Director, Office of Science, Office of High Energy Physics of the U.S. Department of Energy under Contract No. DE-AC02-05CH1123, by the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility under the same contract; and by the U.S. National Science Foundation, Division of Astronomical Sciences under Contract No. AST-0950945 to NOAO.GAMA is a joint European-Australasian project based around a spectroscopic campaign using the Anglo-Australian Telescope. The GAMA input catalogue is based on data taken from the Sloan Digital Sky Survey and the UKIRT Infrared Deep Sky Survey. Complementary imaging of the GAMA regions is being obtained by a number of independent survey programmes including GALEX MIS, VST KiDS, VISTA VIKING, WISE, Herschel-ATLAS, GMRT and ASKAP providing UV to radio coverage. GAMA is funded by the STFC (UK), the ARC (Australia), the AAO, and the participating institutions. The GAMA website is http://www.gama-survey.org/. | http://arxiv.org/abs/2312.16687v1 | {
"authors": [
"Alexander Gagliano",
"V. Ashley Villar"
],
"categories": [
"astro-ph.IM",
"astro-ph.GA"
],
"primary_category": "astro-ph.IM",
"published": "20231227185959",
"title": "A Physics-Informed Variational Autoencoder for Rapid Galaxy Inference and Anomaly Detection"
} |
Supplying data augmentation to conversational question answering (CQA) can effectively improve model performance. However, there is less improvement from single-turn datasets in CQA due to the distribution gap between single-turn and multi-turn datasets. On the other hand, while numerous single-turn datasets are available, we have not utilized them effectively. To solve this problem, we propose a novel method to convert single-turn datasets to multi-turn datasets. The proposed method consists of three parts, namely, a QA pair Generator, a QA pair Reassembler, and a question Rewriter. Given a sample consisting of context and single-turn QA pairs, the Generator obtains candidate QA pairs and a knowledge graph based on the context. The Reassembler utilizes the knowledge graph to get sequential QA pairs, and the Rewriter rewrites questions from a conversational perspective to obtain a multi-turn dataset S2M. Our experiments show that our method can synthesize effective training resources for CQA. Notably, S2M ranks 1st place on the QuAC leaderboard[1] at the time of submission (Aug 24th, 2022).[2]Equal contribution. [1]Corresponding Author. Email: [email protected]. [1]https://quac.ai/ § INTRODUCTION The task of conversational question answering, which requires machines to answer questions through reading and understanding a given context and historical question-answer pairs, has been a rapidly growing area in natural language understanding <cit.>. With the development of pre-trained language models <cit.>, the upper limit of CQA is constantly broken. However, these models are still limited by the scale of real-world datasets. More annotated datasets are needed to promote the development of conversational question answering. To alleviate the limitation of data scale, mainstream research has explored two methods. The first leverages existing datasets: for example, there are a large number of single-turn datasets <cit.> in reading comprehension. While numerous single-turn datasets are available, there are some aspects that have not been fully exploited. Recent studies have shown that the distribution shift between datasets severely limits approaches that rely only on single-turn datasets. The second method is to automatically generate labeled datasets <cit.>, which can be produced according to the distribution of the target domain. However, generating CQA datasets is challenging and requires interaction between question-answer pairs. Therefore, the existing literature mainly discusses a part of the whole process. On the one hand, research on conversational question generation is dedicated to generating follow-up questions <cit.>; on the other hand, research on conversational question answering aims to improve the accuracy of answers <cit.>. As far as we know, only SIMSEEK <cit.> combines the two methods. In addition to the above research directions, some researchers have studied the generation of single-turn question-answer pairs. Different from these studies, we propose a method to better use single-turn datasets, which alleviates the distribution shift between single-turn and multi-turn datasets and generates more complete conversations than SIMSEEK, as shown in Figure <ref>. In this paper, we propose a method to convert single-turn datasets to multi-turn datasets. It consists of three modules, the candidate single-turn QA pair Generator, the QA pairs Reassembler, and the conversation question Rewriter. Firstly, the Generator generates a large number of single-turn candidate QA pairs.
Then, the Reassembler forms sequential QA pairs from the generated and existing QA pairs. Finally, the Rewriter converts these self-contained questions to questions related to the specific conversation. A conversational dataset called S2M has been generated by the proposed method and evaluated on the CQA task. We conducted unsupervised and supervised experiments on the challenging CQA benchmark QuAC <cit.>. In the unsupervised case, our experimental results demonstrate the effectiveness of the synthetic conversations from S2M, which improve the performance of baselines and reduce the performance gap between the unsupervised setting and the supervised setting. In the supervised case, with the help of S2M, our model has achieved state-of-the-art performance on the validation set and ranked 1st on the leaderboard test set. To further verify the quality of S2M, we conducted a human evaluation to compare S2M with the original dataset QuAC and other generated datasets <cit.>. The results have shown that the quality of the conversations in S2M is higher than others in terms of answer accuracy, context relevance, and overall adequacy. Our contributions are summarized as follows: * We verify defects in directly using existing datasets for data augmentation. Our results also indicate the possibility of converting single-turn to multi-turn. As far as we know, this work is the first attempt to successfully convert single-turn datasets to multi-turn datasets. * We propose a new method to build a new dataset called S2M. Extensive numerical experiments have been conducted on the QuAC benchmark. It is worth mentioning that the proposed method obtains state-of-the-art performance on the validation set and ranks 1st on the benchmark test set. * A human evaluation of S2M has also been conducted to show that the conversations in S2M are preferred by annotators over other synthetic datasets. § APPROACH In this section, we first formalize the CQA task and introduce two types of mainstream methods to generate datasets. Then, we introduce our method with three components. In our method, we first introduce how to generate high-quality single-turn question-answer pairs. Next, we introduce how to construct and use a knowledge graph to obtain sequential question-answer pairs. Finally, we introduce the question Rewriter to obtain question-answer pairs that depend on the conversation. §.§ Background §.§.§ Task Formulation Let C=[s_1, s_2, ..., s_N] denote the context, where the sentence s_i = {w_1, w_2, ..., w_{M_i}} contains M_i tokens. Given a question q_t, with t being the turn of the given question, CQA is asked to answer it correctly from the context C based on its history H_<t=[(q_1, a_1), ..., (q_{t-1}, a_{t-1})]. §.§.§ Data Generation Methods of CQA Although generating CQA datasets is a challenging task and there are a limited number of previously available works, the existing methods can be summarized into the following two types, as shown in Figure <ref>. In both methods, a context encoder encodes C into a series of contextualized feature representations {h_i}^L_i=1, as shown below, where L is the length of the input context. {h_i}^L_i=1 = ContextEncoder(C, H), where H represents the historical conversation information, which is optional and only effective in model-a of Figure <ref>. The context encoder ContextEncoder could be a Long Short-Term Memory network (LSTM) <cit.> or a pre-trained language model, e.g., BERT <cit.>.
Second, to obtain answers a_t for generating a question q_t, {h_i}^L_i=1 are projected onto start and end logits through separate multi-layer perceptrons. Both logits are then sent to a softmax function to compute the start and end probability distributions over all tokens in the context, as shown in: r^s_i = W^s_2 tanh(W^s_1 h_i), r^e_i = W^e_2 tanh(W^e_1 [h_i; h_s]), p^s = softmax(r^s), p^e = softmax(r^e), where W^s_1, W^s_2, W^e_1, W^e_2 are trainable parameters of the projection functions, h_s is the token representation of the start label, and p^s ∈ ℝ^L, p^e ∈ ℝ^L are the start and end probability distributions over all tokens, respectively. Based on the start and end logits, we obtain a set of candidate answers Â_t = {â_t^1, â_t^2, ..., â_t^k}. We choose the result a_t with the highest score as the answer to question q_t. Third, after extracting the answer a_t, there are subtle differences between the two types of question-generation methods. On the one hand, the question q_t is generated based on the extracted answer a_t and context C, e.g. p(q_t | C, a_t); on the other hand, the conversational history H_<t is additionally considered when generating a question q_t, e.g. p(q_t | C, a_t, H_<t). These methods have been widely proven to be effective in the CQA task. It is worth mentioning that both methods generate data based on contexts. If the context itself already carries some single-turn question-answer pairs, these methods cannot exploit them. Therefore, we propose a third type of method that converts single-turn datasets to multi-turn datasets. Next, we will introduce the three modules of the proposed method, as shown in Figure <ref>. §.§ Single-Turn Candidate Question-Answer Pair Generator Given a context C in the single-turn dataset, we utilize the RGX <cit.> framework to generate a large number of question-answer pairs and their corresponding credibility scores S_l. The scores S_l are defined as the QAE loss: S_l = -(log P(I_st) + log P(I_ed)), where P(I) is the probability of position I in C, and I_st and I_ed denote the start and end positions of every answer in the context C, respectively. To get high-quality generated QA pairs, we evaluate them in terms of challenge and noise. Different from the RGX framework's processing method, with the help of the EM algorithm <cit.>, we divide the QA pairs into four categories based on the credibility scores of their questions: low, relatively low, medium, and high. The higher the level of a QA pair, the more challenging its question and the noisier its answer. The low category has little noise but is also unchallenging; the high category is challenging but also noisy. We filter out the two extreme categories (low and high) and keep the remaining QA pairs. Since there is a lot of redundancy in the question-answer pairs generated by this framework, we propose the Improved Union Search algorithm, which merges redundant QA pairs and selects the QA pair with a medium score as their representative. We consider question-answer pairs with more than half of the total words repeated as redundant and add them to the redundant set. In the redundant set, the question-answer pair with the intermediate credibility score is regarded as the representative. §.§ Knowledge Graph for the Sequential Question-Answer Pairs In addition to the existing QA pairs in the single-turn dataset, we have obtained a large number of generated QA pairs.
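For concreteness, the following is a minimal PyTorch sketch of the start/end span-prediction head described by the equations above; layer names, hidden sizes, and the use of the argmax start token for h_s are illustrative assumptions, not the paper's exact implementation.

import torch
import torch.nn as nn

class SpanHead(nn.Module):
    # Start/end span-prediction head mirroring the equations above.
    def __init__(self, hidden):
        super().__init__()
        self.w1_s = nn.Linear(hidden, hidden)      # W^s_1
        self.w2_s = nn.Linear(hidden, 1)           # W^s_2
        self.w1_e = nn.Linear(2 * hidden, hidden)  # W^e_1 acts on [h_i; h_s]
        self.w2_e = nn.Linear(hidden, 1)           # W^e_2

    def forward(self, h):
        # h: (L, hidden) contextualized token representations
        r_s = self.w2_s(torch.tanh(self.w1_s(h))).squeeze(-1)  # (L,)
        p_s = torch.softmax(r_s, dim=-1)
        h_s = h[p_s.argmax()]  # representation of the predicted start token
        r_e = self.w2_e(torch.tanh(self.w1_e(
            torch.cat([h, h_s.expand_as(h)], dim=-1)))).squeeze(-1)
        p_e = torch.softmax(r_e, dim=-1)
        return p_s, p_e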
In this subsection, we will introduce how to construct a knowledge graph and how to use it to obtain sequential QA pairs. §.§.§ Constructing the Knowledge Graph from the Context The knowledge graph is constructed from structured information. To obtain the knowledge graph of a given context, we need to extract its triples. The current mainstream information extraction models <cit.> have shown that open information extraction models are the most suitable for contexts from arbitrary domains. Inspired by them, we choose the current state-of-the-art model OpenIE6 <cit.> to extract triples from the context. After extracting the triples of each sentence in the context, we find that they are too complex to assemble directly into a knowledge graph. As shown in Figure <ref>, the triples of adjacent sentences are not simply connected by the same head and tail. Therefore, we propose a Triples Join Algorithm based on the assumption that the triples of adjacent sentences are more semantically related. As shown in Figure <ref> and Algorithm <ref>, we connect triples based on the following three principles: * Principle 1 The subject or object of two triples is the same or one contains the other. For example, the subject or object of triple-A is equal to or contains the subject of triple-B. * Principle 2 If there is an unconnected triple in a sentence, we connect it to an adjacent triple. * Principle 3 If there is no connected triple between adjacent sentences, we connect the last triple of sentence s_i with the first triple of sentence s_{i+1}. §.§.§ Obtaining Sequential QA Pairs from the Knowledge Graph After obtaining the knowledge graph representing the information flow of the context, we use OpenIE6 <cit.> to extract triples from QA pairs as the primary information. The process of connecting sequential QA pairs in a coherent conversation can be thought of as performing a systematic walk over the knowledge graph <cit.>. Based on the existing knowledge graph and triples, we obtain sequential QA pairs in two steps. First, we match QA pairs to the knowledge graph. Each node in the knowledge graph corresponds to a triple. As shown by the Reassembler in Figure <ref>, we mark a node when it matches a triple extracted from a QA pair. We match both the generated and the existing QA pairs against the knowledge graph. Second, we traverse the knowledge graph to obtain sequential QA pairs. We traverse all nodes from the root node of the knowledge graph and obtain all continuous marked nodes in order. We replace the marked nodes with their matching QA pairs to obtain all sequential QA pairs of the context. §.§ Question Rewriting for CQA Although the sequential QA pairs have strong coherence between them, there is a lack of dialogue style between questions. Therefore, we convert self-contained questions to questions that depend on the conversation. We introduce how to rewrite the questions with the rewriting model in the following two steps. §.§.§ Obtaining the Question Rewriting Dataset Specifically, we build our rewriting dataset R-CANARD based on CANARD <cit.>. Given an instance (C, H^c_<t, q^c_t, q_t) in CANARD, q^c_t and q_t represent the follow-up question in a conversation and the self-contained question, respectively. H^c_<t is the list of previous pairs (q^c_i, a_i) with i<t, representing the historical conversation of q^c_t, and q_t is the rewriting target. According to our task, we construct an instance in R-CANARD as (C, H_<t, q_t, q^c_t), where H_<t is the historical conversation of q_t and q^c_t is our rewriting target.
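Before returning to the rewriting task, the following is a minimal sketch of the three Triples Join principles listed above; the actual Algorithm <ref> operates on OpenIE6 output, and the use of string containment to test Principle 1 is an illustrative assumption.

def join_triples(sent_triples):
    # sent_triples: one list of (subject, relation, object) triples per sentence.
    # Returns undirected edges between triples, each identified by a
    # (sentence index, triple index) pair.
    nodes = [(s, t) for s, ts in enumerate(sent_triples) for t in range(len(ts))]
    edges = set()

    def related(a, b):  # Principle 1: equal or contained subject/object
        (s1, _, o1), (s2, _, o2) = a, b
        return any(u and v and (u in v or v in u)
                   for u in (s1, o1) for v in (s2, o2))

    # Principle 1 over triples in the same or adjacent sentences
    for i, (si, ti) in enumerate(nodes):
        for sj, tj in nodes[i + 1:]:
            if sj - si > 1:
                break  # nodes are ordered by sentence index
            if related(sent_triples[si][ti], sent_triples[sj][tj]):
                edges.add(((si, ti), (sj, tj)))

    # Principle 2: connect any still-isolated triple to an adjacent one
    connected = {n for e in edges for n in e}
    for k, node in enumerate(nodes):
        if node not in connected and k > 0:
            edges.add((nodes[k - 1], node))

    # Principle 3: guarantee adjacent sentences are connected at all
    for s in range(len(sent_triples) - 1):
        if sent_triples[s] and sent_triples[s + 1] and not any(
                {a[0], b[0]} == {s, s + 1} for a, b in edges):
            edges.add(((s, len(sent_triples[s]) - 1), (s + 1, 0)))
    return edges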
In traditional question rewriting methods, researchers modify follow-up questions so that they can be correctly interpreted outside of the conversational context <cit.>. Different from them, we reverse the question rewriting process by converting self-contained questions to questions that depend on the conversation. We build a new dataset and train a reverse question rewriting model for this task. §.§.§ Training the Conversational Question Rewriting Model Given each self-contained question, the question rewriting model generates a follow-up question based on the context. Thus, it should satisfy multiple objectives simultaneously: generating proper questions and aligning with historical conversations. Formally, the Conversational Question Rewriting (CQR) model rewrites the self-contained question based on the context and its history, e.g. p^CQR_q(q^c_t | C, H_<t, q_t). Now, we use the CQR model trained on the R-CANARD dataset to rewrite the questions in the sequential question-answer pairs. As shown in Figure <ref>, we give two representative examples. First, we use pronouns to replace nouns that appear repeatedly between questions. Second, we omit the unimportant part of the question. Note that the rewritten questions are more realistic. § EXPERIMENTAL SETUP In this section, we first introduce the datasets and baselines. Then, we define the evaluation metrics. Finally, we show the details of the hyperparameters of our model. §.§ Datasets To obtain synthetic conversations, we conduct experiments on one real-world dataset (i.e., SQuAD 2.0) <cit.>. In the following, we use SQuAD to represent SQuAD 2.0. Note that each instance in SQuAD contains one context and numerous single-turn QA pairs. To evaluate the effect of synthetic conversations, we train baselines on the recent CQA benchmark, QuAC <cit.>, which consists of 100k multi-turn QA pairs. CANARD converts questions in QuAC to self-contained ones that can be understood without the conversation. Unlike CANARD, we construct the R-CANARD dataset, which converts self-contained questions in CANARD back to follow-up questions. §.§ Baselines for Synthetic CQA Generation We introduce solid baselines for synthesizing CQA datasets and compare them with our method. Due to the scarcity of previous work, we choose the representative generation methods RGX and SIMSEEK as our baselines. * RGX This model, proposed by Luo <cit.>, is one of the dominant frameworks in single-turn QA tasks. Compared with traditional QA generation models, it leverages a self-training technique to improve the performance of both question generation and answer extraction. * SIMSEEK The model proposed by Kim <cit.> is the previous state-of-the-art method for synthesizing CQA datasets. Compared with RGX, it considers the conversation history when generating QA pairs. After building these synthetic conversations, we use three different backbone architectures, ROBERTA-large, ELECTRA-large, and DEBERTA-large, to verify the effectiveness of our method comprehensively. §.§ Evaluation Metrics Two metrics are used to evaluate answer span prediction in the QuAC task: word-level F1 and human equivalence score (HEQ). The former measures the overlap between the predicted and actual spans while ignoring stopwords. The latter measures the percentage of examples for which the model's F1 score is higher than the average human F1 score. HEQ has two versions, HEQ-Q and HEQ-D.
HEQ-Q calculates the percentage of questions for which the model's predictions exceed the human assessment, while HEQ-D calculates the percentage of conversations in which all questions exceed the human assessment. §.§ Implementation Details We train Pre-trained Language Models (PLMs) in the following two stages. During training, we found that the order of training on the synthetic conversations and the QuAC dataset affects the final results: the later QuAC appears in the training order, the better the experimental results. Therefore, we first train PLMs on the synthetic conversations, followed by the QuAC dataset. We report both the intermediate and final results, which correspond to the performance in the unsupervised and supervised settings, respectively. §.§.§ Hyperparameters In the experiments, the max sequence length of questions is set to 128 and the answer length is set to 64. The stride of the sliding window for splitting documents is set to 128. The batch size is set to 16. The model is optimized using Adam, and we set the learning rates to 1e-6, 1e-5, and 1e-5 for ROBERTA, ELECTRA, and DEBERTA, respectively. The random seed is always set to 42. In the inference process, we use beam search to predict the end position based on the start position, with a beam size of 5. All the other hyper-parameters are the same as reported in the corresponding papers. We run our experiments on 4×A100 GPUs. Training our models on the synthetic conversations and the QuAC dataset takes about 12 hours. To compare fairly with the previous best-performing model SIMSEEK, we also employ another CQA dataset, CoQA, as an additional training resource. Unless otherwise specified, we add the CoQA dataset during training by default. § MAIN RESULTS In this section, we introduce the construction process of the synthetic conversations S2M and compare the results with other datasets. Then, we conduct experiments on two splits of the QuAC benchmark. First, we test the S2M dataset on three backbones and compare the results with other generated datasets on the QuAC validation set. Second, we test the S2M dataset on the QuAC benchmark test set. §.§ Dataset Construction Given the dataset SQuAD, we use our method to generate conversations based on the context and single-turn QA pairs. Specifically, we generate conversations until the twelfth turn or until a termination condition is met. The termination conditions include the discontinuity of QA pairs and more than three unanswerable questions in one conversation. Table <ref> shows the overall statistics of S2M and its comparison with other datasets. It can be seen that the dataset we generated is only one-seventh the size of the SIMSEEK dataset. §.§ Results on QuAC We train PLMs on the S2M dataset and the QuAC dataset. Table <ref> shows the results of the two-stage experiments on the QuAC development set. In the first stage, we trained on synthetic conversations, and the results showed that the performance of PLMs in the unsupervised setting was significantly improved. Specifically, through S2M, DEBERTA's performance gap between the unsupervised setting and the fully supervised setting is 15.5, which shows the authenticity of the synthetic conversations. In the second stage, we further fine-tuned models on the QuAC dataset, and the results showed that all PLMs improved by 1.3 on average, demonstrating the generalizability of our method. Among all PLMs, DEBERTA performs the best, closely followed by ELECTRA. Under the DEBERTA backbone, we further compare our generation method with other generation methods.
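For reference in the comparisons that follow, here is a simplified sketch of the two evaluation metrics defined above; the official QuAC scorer additionally handles multiple reference answers and a fuller stopword list, both of which are elided assumptions here.

def word_f1(pred, gold, stopwords=frozenset({"a", "an", "the", "of"})):
    # Word-level F1 between predicted and gold spans, ignoring stopwords.
    p = [w for w in pred.lower().split() if w not in stopwords]
    g = [w for w in gold.lower().split() if w not in stopwords]
    if not p or not g:
        return float(p == g)
    common = sum(min(p.count(w), g.count(w)) for w in set(p))
    if common == 0:
        return 0.0
    prec, rec = common / len(p), common / len(g)
    return 2 * prec * rec / (prec + rec)

def heq_q(model_f1, human_f1):
    # Fraction of questions where the model matches or exceeds human F1.
    return sum(m >= h for m, h in zip(model_f1, human_f1)) / len(model_f1)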
Table <ref> shows the performance of DEBERTA trained on the resulting datasets. While other methods can improve performance, they have some limitations compared to our method. DEBERTA-RGX shows the lowest performance, implying the difficulty of directly extending single-turn QA generation methods to CQA. Adopting CQG modules that consider historical conversations (DEBERTA-SIMSEEK) improves CQA performance by more than 1.5 F1 points. This indicates that understanding conversational questions is crucial for improving CQA performance. Although S2M is only one-seventh the size of the SIMSEEK dataset, DEBERTA-S2M achieves the highest score by considering both single-turn QA and conversational context. Specifically, DEBERTA-S2M outperforms DEBERTA-SIMSEEK in terms of F1/HEQ-Q/HEQ-D, indicating that it is particularly effective for CQA despite the smaller dataset size. §.§ Official leaderboard results on QuAC The QuAC challenge provides a hidden test set. Table <ref> displays the span prediction results of all baselines and our model. From this, we can see that our model DEBERTA-S2M outperforms the previous best-performing model CDQ-DEBERTA and achieves new state-of-the-art performance on all three metrics. From the leaderboard results, we observe that the top-ranking models mainly use advanced pre-trained models and consider historical information. For example, CDQ-DEBERTA boosts the F1 score of ROR from 74.9 to 75.8 with the help of DEBERTA. Compared to BiDAF++, BiDAF++ w/ 2-Context incorporates two turns of previous dialog history and significantly improves the performance of BiDAF++. We are the first to successfully use generated datasets to improve the CQA task. § ANALYSIS §.§ Qualitative Analysis §.§.§ S2M is essential for data augmentation in the CQA task. Table <ref> records the data augmentation results on the single-turn SQuAD, multi-turn CoQA, and S2M datasets. Here, S2M does not include the CoQA dataset and is used only as generated data. Although their contexts are all sourced from Wikipedia, the results are quite different. In contrast to SQuAD, CoQA gives a better boost, because both CoQA and QuAC are multi-turn conversation datasets. This situation illustrates the advantages of converting a single-turn dataset to a multi-turn dataset. On the other hand, CoQA is not as effective as S2M. This might be caused by the distribution shift between CoQA and the target dataset, such as the shorter answer length of CoQA. Our method can be one of the solutions to mitigate this shift and provide further improvements. Furthermore, our method is a model-agnostic framework in which any QG model can be adopted as a generator for obtaining conversation datasets. §.§.§ Conversations in S2M are more relevant and coherent. In Table <ref>, we compare the generated dialogues obtained by various methods with QuAC. The datasets show similar token lengths of questions and answers. In contrast, S2M achieves higher word-level F1 scores, which measure how many words from the current and historical responses, respectively, are reused in the current question. From the results, on the one hand, the question and answer in a QA pair from S2M are more strongly correlated; on the other hand, the dialogue in S2M has better coherence. §.§ Human Evaluation In this section, we conduct a human evaluation to assess the quality of the synthetic conversations. Specifically, we employ in-house annotators to assess the quality of follow-up QA pairs.
We hired five annotators to compare the QA pairs of S2M against SIMSEEK and against QuAC according to the four metrics inspired by SIMSEEK <cit.>. We let the annotators select 800 contexts from each dataset, obtaining about 5000 QA pairs. Each sample was scored by two annotators on the four metrics. The four metrics are detailed below: * Overall adequacy represents how adequate the QA pair is for continuing the conversation. * Informativeness represents the amount of new information the question is trying to gather. * Context relevance represents how relevant the question is to the given context. * Answer accuracy represents whether the answer is accurate for the question. Figure <ref> compares S2M with the other datasets using these metrics. We found that they are similar in informativeness; however, our method achieves better results on the remaining three metrics. §.§.§ S2M benefits from the knowledge graph structure; its conversations are more adequate. Higher overall adequacy and context relevance have been observed for S2M. This is due to our context-based knowledge graph structure, which ensures that the conversation advances step by step with the context. In contrast, SIMSEEK relies entirely on pre-trained models and lacks guidance information, resulting in a break from context; QuAC requires concise responses, which appear comparatively less engaging. Therefore, our synthetic conversations are selected more frequently in terms of overall adequacy and context relevance. §.§.§ Conversations from S2M are reviewed as more accurate even than humans. Furthermore, annotators conclude that S2M has higher accuracy. As shown in Figure <ref>, the answers and current questions in S2M are highly interactive, leading annotators to judge it more helpful and communicative. In summary, our method successfully generates accurate QA pairs in each turn, which are more adequate. § RELATED WORK Conversational Question Answering. With the release of large-scale CQA benchmarks <cit.>, more and more researchers are studying this challenging task. Initially, researchers made structural improvements <cit.>, such as adding historical dialogues and cutting the context to obtain answers <cit.>. With the popularity of large-scale pre-trained language models <cit.>, more researchers used pre-trained language models and data augmentation methods, such as ROR <cit.>. Later, due to the limited data scale, researchers automatically generated data to help the model learn external knowledge. Our method starts from this third paradigm and automatically generates data <cit.>, improving model performance and reducing labor costs. Data Augmentation. In the CQA task, there are two types of data augmentation. The first method uses existing datasets <cit.> for data augmentation. This method is limited by the scale and distribution shift of the existing datasets <cit.>; directly using single-turn and multi-turn datasets will reduce task performance <cit.>. The second method generates datasets for the target task. However, such work is rare; only SIMSEEK <cit.> has studied this direction, generating multi-turn conversation datasets. Unlike their method, we convert a single-turn dataset to a multi-turn dataset, considering both single-turn QA and dialogue contexts. Conversation Question Rewrite. Question rewriting (QR) technology has been successfully applied in various fields, such as question integration, question optimization, and question expansion <cit.>.
More recently, QR has been shown to be helpful in the field of conversational question answering <cit.>. QR was first introduced to the CQA task by Elgohary, who released the CANARD dataset <cit.>, which provides rewrites for the conversational questions from the QuAC dataset. Unlike CANARD, we apply QR to the conversational data generation task and propose the conversational question rewriting task, which rewrites a self-contained question as a question that depends on the conversation. § CONCLUSION In this paper, we discussed the shortcomings of existing datasets for data augmentation in the field of conversational question answering. To solve these problems, we proposed a method of converting single-turn datasets to multi-turn datasets. Using this method, we constructed the high-quality S2M dataset and verified its performance on the validation set and test set of the public dataset QuAC. Notably, we ranked 1st on the QuAC benchmark. Finally, we compared our generation method with the previous state-of-the-art method from qualitative analysis and human evaluation perspectives. The results showed that our synthetic conversations achieve better results in terms of answer accuracy, context relevance, and overall adequacy. A potential research direction is to jointly optimize all components of the proposed method. § ACKNOWLEDGEMENTS This work was supported in part by Ant Group. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor. | http://arxiv.org/abs/2312.16511v1 | {
"authors": [
"Baokui Li",
"Sen Zhang",
"Wangshu Zhang",
"Yicheng Chen",
"Changlin Yang",
"Sen Hu",
"Teng Xu",
"Siye liu",
"Jiwei Li"
],
"categories": [
"cs.CL",
"cs.AI"
],
"primary_category": "cs.CL",
"published": "20231227104118",
"title": "S2M: Converting Single-Turn to Multi-Turn Datasets for Conversational Question Answering"
} |
Ensemble Learning to Assess Dynamics of Affective Experience Ratings and Physiological Change All authors contributed equally to this work. Felix Dollack1, Kiyoshi Kiyokawa1, Huakun Liu1, Monica Perusquia-Hernandez1, Chirag Raman2, Hideaki Uchiyama1, Xin Wei1 1Nara Institute of Science and Technology 2Delft University of Technology {felix.d, kiyo, liu.huakun.li0, m.perusquia, hideaki.uchiyama, wei.xin.wy0}@is.naist.jp, [email protected] January 14, 2024 ========================================================================================= § INTRODUCTION Mechanism Design seeks to establish protocols for aggregating the private information of a set of agents to optimize a global objective. Nonetheless, optimizing a communal goal solely based on reported preferences frequently leads to undesirable manipulation driven by the agents' self-interested behaviour. Hence, a key property that a mechanism should possess is truthfulness, which ensures that no agent can gain an advantage by misrepresenting their private information. Unfortunately, this stringent condition often clashes with the goal of optimizing the communal objective, leading to suboptimal outcomes from a truthful mechanism. To quantify the efficiency loss entailed by a truthful mechanism, Nisan and Ronen introduced the concept of approximation ratio, which represents the maximum ratio between the objective achieved by the truthful mechanism and the optimal objective attainable across all possible agents' reports <cit.>. A prominent problem in Mechanism Design is the m-Facility Location Problem (m-FLP) <cit.>. In its fundamental guise, the m-FLP involves locating m facilities amidst n self-interested agents. Each agent requires access to a facility, making it preferable for a facility to be located as close as possible to their position. Furthermore, each facility can accommodate any number of agents, thus the agents can select which facility to use without any concern about possible overload. A natural extension of the m-FLP is the m-Capacitated Facility Location Problem (m-CFLP), in which every facility has a capacity limit. The capacity limit constrains the number of agents that the facility can serve. In this case, the solution not only elicits the positions of the facilities but also specifies to which facility every agent is assigned, ensuring that no facility is overloaded. To the best of our knowledge, there are very few works that have analyzed the Mechanism Design aspects of the m-CFLP. Moreover, all the existing results are obtained under classic worst-case analysis, where the designer has no information on the agents and therefore aims to define mechanisms that work well on every possible input, regardless of the likelihood of the input.
This type of analysis is, however, too pessimistic and gives little insight into how to select a mechanism for a specific task. For example, there are currently no mechanisms capable of locating more than two capacitated facilities while achieving a finite approximation ratio. Furthermore, even if we restrict our attention to the case where m=2, the approximation ratio of all the known truthful mechanisms depends linearly on the number of agents <cit.>, making these efficiency guarantees less meaningful as the scale of the problem increases. In this paper, we overcome these issues by studying the m-CFLP from a Bayesian viewpoint. In Bayesian Mechanism Design, every agent's location is a random variable whose law is known to the designer <cit.>; thus, the scope of the mechanism is not to minimize the cost in the worst possible case but rather to define truthful routines that work well in expectation. §.§ Our Contribution Our contribution is as follows: * We define and study the Extended Ranking Mechanisms (ERMs), a generalization of the class of Ranking Mechanisms introduced in <cit.>. Our extension allows us to tackle a broader framework in which the combined capacity of the facilities surpasses the total number of agents. Moreover, a Ranking Mechanism is truthful if and only if it places all the facilities at, at most, two locations, while an ERM does not necessarily require such a constraint to ensure truthfulness. See Section <ref>. * We then delve into a Bayesian framework to examine the asymptotic performances of the ERMs. We consider the case in which every agent's position is represented by a set of i.i.d. random variables. Our investigation reveals that the m-CFLP is equivalent to a norm minimization problem over a subset of the Wasserstein space <cit.>. By leveraging the properties of the Wasserstein distance, we then establish the convergence of the Bayesian approximation ratio, i.e., the ratio between the expected cost incurred by the mechanism and the expected optimal cost, as the number of agents tends toward infinity. This is our primary result, as seen in Theorem <ref>. We also compute this limit and show that it depends solely on the specifics of the problem, namely, the probability distribution μ describing the agents, the vector q⃗ determining the capacities of the facilities, and the characteristics of the mechanism. See Section <ref>. * After that, we tackle the problem of identifying an optimal ERM tailored to the m-CFLP with agents distributed according to μ given the capacities determined by q⃗. First, we show that an optimal ERM always exists and characterize it as a solution to a minimization problem. Subsequently, we narrow our focus to two specific scenarios: (i) the no-spare-capacity framework, in which the total capacity of the facilities matches the total number of agents, and (ii) the 2-CFLP for a population of agents distributed according to a uniform distribution. See Section <ref>. * Finally, we validate our findings through extensive numerical experiments in Section <ref>.
In particular, we compare the performances of the ERMs with those of other truthful mechanisms, such as the InnerGap Mechanism <cit.> and the Extended Endpoint Mechanism <cit.>. From our experiments, we observe that a well-tuned ERM outperforms all the other mechanisms whenever n≥ 20. From these comparisons, we also observe that the limit Bayesian approximation ratio is a reliable estimate of the Bayesian approximation ratio of the ERM when the number of agents is greater than 20. Our outcomes shed light on how to tailor a mechanism to the problem at hand, contributing to a better understanding of the strategic aspects of the m-CFLP in a Bayesian framework. Due to space limits, all the missing proofs are deferred to Appendix <ref>. §.§ Related Work The m-Facility Location Problem (m-FLP) and its variations are significant issues in various practical domains, such as disaster relief <cit.>, supply chain management <cit.>, healthcare <cit.>, clustering <cit.>, and public facility accessibility <cit.>. Procaccia and Tennenholtz initially delved into the Mechanism Design study of the m-FLP, laying the groundwork for this field in their pioneering work <cit.>. Following that, a range of mechanisms with constant approximation ratios for placing one or two facilities on trees <cit.>, circles <cit.>, general graphs <cit.>, and metric spaces <cit.> were introduced. Despite the generality of the underlying space, it is important to stress that all these positive results are confined to the case in which we have at most two facilities to place and/or the number of agents is limited. Moreover, various works generalized the initial framework proposed in <cit.> by considering different agents' preferences <cit.>, different costs <cit.>, and additional constraints <cit.>. In this paper, we analyze the m-Capacitated Facility Location Problem (m-CFLP), a variant of the m-FLP in which each facility can accommodate a finite number of agents. The Mechanism Design aspects of the m-CFLP have only recently begun to attract attention. Indeed, the game theoretical framework for the m-CFLP that we consider was introduced in <cit.>. This work defined and studied various truthful mechanisms, like the InnerPoint Mechanism, the Extended Endpoint Mechanism, and the Ranking Mechanisms. A more theoretical study of the problem was then presented in <cit.>, demonstrating that no mechanism can position more than two capacitated facilities while adhering to truthfulness, anonymity, and Pareto optimality. Additionally, another paper dealing with Mechanism Design aspects of the m-CFLP is <cit.>. However, it explores a different framework where only one facility needs to be placed and is unable to serve all agents. To the best of our knowledge, all the results on the m-CFLP have concerned the classic Mechanism Design framework that evaluates the performance of a mechanism based on worst-case analysis.
In this paper, we consider an alternative approach: the Bayesian Mechanism Design perspective. Unlike traditional Mechanism Design, where the designer lacks information about agent types, in the Bayesian Mechanism Design framework each agent's type follows a known probability distribution <cit.>. In this framework, we are able to determine a probability distribution over the set of all the possible inputs of the mechanism, enabling us to consider the expected cost of a mechanism. Bayesian Mechanism Design has been applied to investigate routing games <cit.>, facility location problems <cit.>, combinatorial mechanisms using ϵ-greedy mechanisms <cit.>, and, notably, auction mechanism design <cit.>. § PRELIMINARIES In this section, we fix the notations on the m-CFLP, Bayesian Mechanism Design, and Optimal Transport (OT). The m-Capacitated Facility Location Problem. Given a set of self-interested agents [n]:={1,…,n}, we denote with x⃗∈ℝ^n the vector containing the agents' positions. Likewise, given m∈ℕ, we denote with c⃗∈ℕ^m the vector containing the capacities of the facilities, namely c⃗=(c_1,…,c_m). In this setting, a facility location is defined by three objects: (i) an m-dimensional vector y⃗=(y_1,…,y_m) whose entries are m positions on the line; (ii) a permutation π∈𝒮_m that decides the capacity of the facility built at y_j, so that c_π(j) is the capacity of the facility built at y_j; and (iii) a matching Γ⊂ [n]× [m] that determines how the agents are assigned to facilities, i.e. (i,j)∈Γ if and only if the agent at x_i is assigned to y_j. Due to the capacity constraints, the degree of vertex j∈[m] according to Γ is at most c_π(j). Similarly, every agent is assigned to only one facility, thus the degree of every i∈[n] according to Γ is 1. Given the positions of the facilities y⃗ and a matching Γ, we define the cost of an agent positioned at x_i as d_i,Γ(x_i,y⃗)=|x_i-y_j|, where (i,j) is the unique edge in Γ adjacent to i. Given a matching Γ and a permutation π∈𝒮_m, a cost function is a map C_Γ:ℝ^n×ℝ^m→ [0,+∞) that associates to (x⃗, y⃗) the overall cost of placing the facilities at the positions in y⃗ following the permutation π and assigning the agents positioned at x⃗ according to Γ.[In what follows, we omit Γ from the indexes of d and C if it is clear from the context which matching we are considering.] Given a vector x⃗∈ℝ^n containing the agents' positions, the m-Capacitated Facility Location Problem with respect to the cost C consists in finding the locations for m facilities, a permutation π, and the matching Γ that minimize the function y⃗→ C(x⃗,y⃗). The most studied cost function is the Social Cost (SC), which is defined as the sum of all the agents' costs. Since multiplying the cost function by a constant does not affect the approximation ratio results, throughout the paper we consider the Social Cost rescaled by the total number of agents, that is SC(x⃗, y⃗)=1/n∑_{i∈[n]} d_{i,Γ}(x_i,y⃗). Mechanism Design, the Worst-Case Analysis, and the Ranking Mechanisms. An m-facility location mechanism is a function f that takes the agents' reports x⃗ as input and returns a set of m positions y⃗ on the line, a permutation π∈𝒮_m, and a matching Γ to allocate the agents to the facilities. In general, an agent may misreport its position if it would result in a set of facility locations such that the agent's incurred cost is smaller than reporting truthfully. A mechanism f is said to be truthful (or strategy-proof) if, for every agent, its cost is minimized when it reports its true position.
That is, d_i(x_i,f(x⃗))≤ d_i(x_i,f(x⃗_{-i},x_i')) for any misreport x_i'∈ℝ, where x⃗_{-i} is the vector x⃗ without its i-th component. Although deploying a truthful mechanism prevents agents from getting a benefit by misreporting their positions, this leads to a loss of efficiency. To evaluate this efficiency loss, Nisan and Ronen introduced the notion of approximation ratio <cit.>. Given a truthful mechanism f, its approximation ratio with respect to the Social Cost is defined as ar(f):=sup_{x⃗∈ℝ^n} SC_f(x⃗)/SC_opt(x⃗), where SC_f(x⃗) is the Social Cost of placing the facilities and assigning the agents to them following the output of f, while SC_opt(x⃗) is the optimal Social Cost achievable when the agents' report is x⃗. The Ranking Mechanisms are a class of mechanisms for the m-CFLP that work under the assumption that the total capacity of the facilities matches the number of agents <cit.>. Each Ranking Mechanism is defined by two parameters: a permutation π∈𝒮_m and a vector t⃗=(t_1,…,t_m)∈[n]^m. Given π and t⃗, the routine of the Ranking Mechanism is as follows: (i) given x⃗, the vector containing the agents' reports ordered non-decreasingly, the mechanism places the facility with capacity c_π(j) at x_{t_j}; (ii) the agents are assigned to the facilities from left to right while respecting the capacity constraints. It was shown in <cit.> that a Ranking Mechanism is truthful if and only if t⃗ contains at most two distinct values. Moreover, the approximation ratio of these mechanisms is bounded only when the number of agents is even, the number of facilities to place is 2, and the two facilities have the same capacity. In this case, the mechanism is also called the InnerPoint Mechanism (IM) and its approximation ratio is n/2-1. Bayesian Mechanism Design. In Bayesian Mechanism Design, every agent's type is described by a random variable X_i. In what follows, we assume that every X_i is identically distributed according to a law μ and independent of the other random variables. In this framework, a mechanism is truthful if, for every agent i, it holds
The Wasserstein Distance.Let us denote with () the set of probability measures over .Given γ∈(), we denote with spt(γ)⊂ the support of γ and with med(γ) the smallest median of γ.We denote with _m() the set of probability measures overwhose support contains at most m points, thus a measure ν∈_m() is such that ν=∑_j=1^m ν_jδ_x_j, where x_j∈ for all j∈ [m], ν_j are non-negative values satisfying ∑_j=1^mν_j=1, and δ_x_j is the Dirac delta measure centered at x_j.Given two measures α,β∈(), the Wasserstein distance between α and β is defined as W_1(α,β)=min_π∈Π(α,β)∫_×|x-y|dπ,where Π(α,β) is the set of probability measures over × whose first marginal is equal to α and the second marginal is equal to β <cit.>.For a comprehensive introduction to Optimal Transport theory, we refer to <cit.> and <cit.>.Basic Assumptions.Finally, we layout the basic assumptions of our framework.In what follows, we tacitly assume that the underlying distribution μ satisfies all the following properties:[label=(*)]* The measure μ is absolutely continuous. We denote with ρ_μ its density.* The support of μ is an interval, which can be bounded or not, and that ρ_μ is strictly positive on the interior of the support.* The density functionρ_μ is differentiable on the support of μ.* The probability measure μ has finite first moment, i.e. ∫_ |x|dμ < +∞. Notice that, according to this set of assumptions, the cumulative distribution function (c.d.f.) of μ, namely F_μ, is locally bijective. § THE EXTENDED RANKING MECHANISM Since our study aims to examine the behaviour of the mechanism as the number of agents goes to infinity, expressing the problem in terms of absolute capacities is unsuitable.For this reason, we need to rephrase the specifics of the m-CFLP in terms of percentages.In particular, instead of considering a collection of capacities c_j∈ℕ for each j∈ [m], we shift our focus to the percentage capacity vector (p.c.v.) q⃗∈(0,1)^m.Each value q_j corresponds to the percentage of agents that the j-th facility can accommodate.Given q⃗ and a number of agents n, we recover the absolute capacity of the j-th facility by setting c_j=q_j(n-1)+1.Conversely, when we are given the absolute capacities c_j for a given number of agents n, the corresponding p.c.v. is q_j=c_j/n.Without loss of generality, we assume that the entries of q⃗ are ordered non-increasingly. [Extended Ranking Mechanisms (ERMs)] Let q⃗=(q_1,…,q_m) be a p.c.v..Given a permutation π∈𝒮_m and an non-decreasingly ordered vector p⃗=(p_1,…,p_m)∈ [0,1]^m, that is p_j≤ p_j+1, the routine of the ERM associated with π and p⃗, namely , is as follows: [label=(*)]* First, we collect the reports of the agents and order them non-decreasingly. We denote with x⃗ the ordered vector containing the agents' reports, thus x_i≤ x_i+1. * Second, we elicit m positions on the line by setting y_j=x_p_j(n-1)+1 for every j∈ [m] and place the facility with capacity q_π(j) at y_j.* Finally, we assign every agent to the facility closer to the position they reported (break ties arbitrarily without overloading the facilities). Notice that, in the routine of the ERM, the vector p⃗ plays the role that t⃗ plays in the routine of a Ranking Mechanism. 
Indeed, the main difference between the Ranking Mechanisms and our generalization lies in how the mechanism matches the agents to the facilities. While the Ranking Mechanisms assign the agents monotonically from left to right, the ERM assigns every agent to the facility that is closest to their report. Depending on the pair (π,p⃗) and q⃗, however, the matching returned by the ERM might overload some of the facilities. We say that a couple (π,p⃗) induces a feasible ERM if, for every n∈ℕ and every x⃗∈ℝ^n, the output of ERM^(π,p⃗) is a facility location for the m-CFLP induced by q⃗. Given a p.c.v. q⃗, the set of parameters (π,p⃗) that induce a feasible ERM is characterizable through a system of inequalities. For the sake of simplicity, we first consider the case in which p⃗ does not have two equal entries, that is p_j≠ p_l for every j≠ l. Given q⃗ and p⃗ such that p_j≠ p_i for every j≠ i∈[m], then ERM^(π,p⃗) is feasible if and only if the following system of inequalities is satisfied: q_{π(1)}≥ p_2, q_{π(2)}≥ p_3-p_1, ⋮, q_{π(m-1)}≥ p_m-p_{m-2}, q_{π(m)}≥ 1-p_{m-1}. The feasibility of the ERM also ensures its truthfulness. Given a p.c.v. q⃗, any feasible ERM^(π,p⃗) is truthful. Thus it is also truthful in the Bayesian framework. The truthfulness of any ERM stems from the fact that every facility at y_j can accommodate all the agents between y_{j-1} and y_{j+1}. This constraint limits the set of p⃗ for which there exists a π∈𝒮_m such that ERM^(π,p⃗) is feasible. Given a p.c.v. q⃗∈(0,1)^m, let us fix p⃗∈[0,1]^m. Then, if (max_{j∈[m]}p_j-min_{j∈[m]}p_j)>∑_{j∈[m]}q_j-1, the mechanism ERM^(π,p⃗) is not feasible for any π∈𝒮_m. Finally, we notice that if p⃗ has at least two equal entries, not all the feasible ERMs necessarily satisfy system (<ref>). For example, the mechanism that places all the facilities at the median agent is always feasible and truthful.[In this case, we do not need to specify which agent is served by which facility nor π∈𝒮_m, since all the agents and all the facilities are served/located at the same place.] However, depending on q⃗, the vector p⃗=(0.5,…,0.5) may not satisfy system (<ref>) for any π∈𝒮_m. Indeed, if p_j=p_l for some indices j,l∈ [m], we need to consider all the facilities placed at x_{⌈p(n-1)⌉+1} as if they were a single facility whose capacity is the total capacity of the facilities placed there. By doing so, we are able to extend Theorem <ref> and <ref> to the case in which p⃗ has at least two equal entries. In particular, given p⃗∈[0,1]^m, let p⃗'∈ [0,1]^{m'} be the vector containing all the distinct entries of p⃗; then, the mechanism ERM^(π,p⃗) is feasible if and only if the following system is satisfied: ∑_{l∈[m] s.t. p_l=p'_1} q_{π(l)}≥ p'_2, ∑_{l∈[m] s.t. p_l=p'_2} q_{π(l)}≥ p'_3-p'_1, ⋮, ∑_{l∈[m] s.t. p_l=p'_{m'-1}} q_{π(l)}≥ p'_{m'}-p'_{m'-2}, ∑_{l∈[m] s.t. p_l=p'_{m'}} q_{π(l)}≥ 1-p'_{m'-1}. Lastly, we notice that, except in a few specific cases, the approximation ratio of the Ranking Mechanisms is unbounded. Consequently, it is impossible to retrieve a bound on the approximation ratio of any ERM for generic m-CFLPs. Since the classic worst-case analysis does not give any insight into the performances of the ERMs, we move our attention to the Bayesian analysis. § THE BAYESIAN ANALYSIS OF THE ERMS In this section, we present our main result, which characterizes the limit of the Bayesian approximation ratio of any feasible ERM as a function of p⃗, π, μ, and q⃗. As a preliminary lemma, we relate the m-CFLP to a norm minimization problem in the Wasserstein space. Given a p.c.v. q⃗, let x⃗∈ℝ^n be the vector containing the agents' reports, and let μ_n:=1/n∑_{i∈ [n]}δ_{x_i} be the empirical measure associated with x⃗.
Then, it holds SC_opt(x⃗)=min_{σ∈𝒮_m} min_{ζ∈𝒫_{σ,q⃗}(ℝ)} W_1(μ_n,ζ), where 𝒫_{σ,q⃗}(ℝ) is the set of probability measures such that ζ=∑_{j∈[m]}ζ_jδ_{y_j}, where y_1≤ y_2≤…≤ y_m and ζ_j≤ q_{σ(j)} for every j∈[m]. Similarly, given a permutation π∈𝒮_m and a vector p⃗ that induce a feasible ERM, it holds SC_{π,p⃗}(x⃗)=min_{λ_j≤ q_{π(j)}, ∑_{j∈[m]}λ_j=1} W_1(μ_n,λ), where SC_{π,p⃗}(x⃗) is the Social Cost attained by ERM^(π,p⃗) on instance x⃗, λ=∑_{j∈[m]}λ_jδ_{x_{r_j}}, and r_j=⌈p_j(n-1)⌉+1. The proof of Lemma <ref> consists in showing that, for any given instance x⃗ and given an optimal facility location for the m-CFLP, it is possible to construct a measure ν such that W_1(μ_n,ν)=SC_opt(x⃗) and, vice-versa, given a measure ν that minimizes (<ref>), it is possible to retrieve an optimal facility location for the m-CFLP. Since the m-CFLP admits a solution, problem (<ref>) is well-posed and admits a solution. By a similar argument, we infer the same conclusion for problem (<ref>). The connection between the m-CFLP and Optimal Transport theory highlighted in Lemma <ref> enables us to exploit the properties of the Wasserstein distance and to characterize the limit Bayesian approximation ratio of every feasible ERM. Given the p.c.v. q⃗, let p⃗∈ (0,1)^m and π∈𝒮_m be such that ERM^(π,p⃗) is feasible. Then, it holds lim_{n→ +∞} 𝔼[SC_{π,p⃗}(X⃗_n)]/𝔼[SC_opt(X⃗_n)]=W_1(μ,ν_p⃗)/W_1(μ,ν_m), where ν_m is a solution to the following minimization problem: min_{σ∈𝒮_m} min_{ζ∈𝒫_{σ,q⃗}(ℝ)} W_1(μ,ζ), and ν_p⃗ is a solution to min_{λ_j≤ q_{π(j)}, ∑_{j∈[m]}λ_j=1} W_1(μ,∑_{j∈[m]}λ_jδ_{F_μ^{[-1]}(p_j)}). The proof consists of three steps: (i) first, we show that the expected optimal Social Cost for the m-CFLP converges to the objective value of the minimization problem in (<ref>); (ii) second, we show that the expected Social Cost of ERM^(π,p⃗) converges to the objective value of the minimization problem in (<ref>); (iii) we combine the two convergence results to retrieve (<ref>). The limit of the expected optimal Social Cost. Let ν_m be such that W_1(μ,ν_m)=min_{σ∈𝒮_m} min_{ζ∈𝒫_{σ,q⃗}(ℝ)} W_1(μ,ζ). From Lemma <ref>, we know that, for every n∈ℕ and for every x⃗∈ℝ^n, there exists a ν_{x⃗,m} such that SC_opt(x⃗)=W_1(μ_n,ν_{x⃗,m}). By definition of ν_m, we have that W_1(μ,ν_m)≤ W_1(μ,ν_{x⃗,m}). Since W_1 is a distance, it holds W_1(μ,ν_m)≤ W_1(μ,ν_{x⃗,m})≤ W_1(μ,μ_n)+W_1(μ_n,ν_{x⃗,m}). By rearranging the terms and by taking the expected value with respect to the distribution of X⃗_n, we obtain 𝔼[W_1(μ,ν_m)]-𝔼[W_1(μ_n,ν_{x⃗,m})]≤𝔼[W_1(μ,μ_n)]. By a similar argument, we have that W_1(μ_n,ν_{x⃗,m})≤ W_1(μ_n,ν_m)≤ W_1(μ,μ_n)+W_1(μ,ν_m), hence 𝔼[W_1(μ_n,ν_{x⃗,m})]-𝔼[W_1(μ,ν_m)]≤𝔼[W_1(μ,μ_n)]. We then infer that |𝔼[W_1(μ,ν_m)]-𝔼[W_1(μ_n,ν_{x⃗,m})]|≤𝔼[W_1(μ,μ_n)]. Since the right-hand side of this inequality converges to 0 as n goes to +∞ (see <cit.>), we infer that lim_{n→∞} 𝔼[W_1(μ_n,ν_{x⃗,m})]=W_1(μ,ν_m), which concludes the first part of the proof. The limit of the expected Social Cost of the Mechanism.
The proof consists of three steps: (i) first, we show that the expected optimal Social Cost for the m-CFLP converges to the objective value of the minimization problem in (<ref>); (ii) second, we show that the expected Social Cost of ERM^(π,p⃗) converges to the objective value of the minimization problem in (<ref>); (iii) finally, we combine the two convergence results to retrieve (<ref>).

The limit of the expected optimal Social Cost. Let ν_m be such that W_1(μ,ν_m) = min_{σ∈𝒮_m} min_{ζ∈𝒫_{σ,q⃗}(ℝ)} W_1(μ,ζ). From Lemma <ref>, we know that, for every n∈ℕ and every x⃗∈ℝ^n, there exists a ν_{x⃗,m} such that SC_opt(x⃗) = W_1(μ_n,ν_{x⃗,m}). By definition of ν_m, we have W_1(μ,ν_m) ≤ W_1(μ,ν_{x⃗,m}). Since W_1 is a distance, it holds W_1(μ,ν_m) ≤ W_1(μ,ν_{x⃗,m}) ≤ W_1(μ,μ_n) + W_1(μ_n,ν_{x⃗,m}). By rearranging the terms and taking the expected value with respect to the distribution of X⃗, we obtain 𝔼[W_1(μ,ν_m)] − 𝔼[W_1(μ_n,ν_{x⃗,m})] ≤ 𝔼[W_1(μ,μ_n)]. By a similar argument, we have W_1(μ_n,ν_{x⃗,m}) ≤ W_1(μ_n,ν_m) ≤ W_1(μ,μ_n) + W_1(μ,ν_m), hence 𝔼[W_1(μ_n,ν_{x⃗,m})] − 𝔼[W_1(μ,ν_m)] ≤ 𝔼[W_1(μ,μ_n)]. We then infer that |𝔼[W_1(μ,ν_m)] − 𝔼[W_1(μ_n,ν_{x⃗,m})]| ≤ 𝔼[W_1(μ,μ_n)]. Since the right-hand side of this inequality converges to 0 as n goes to +∞ (see <cit.>), we infer that lim_{n→∞} 𝔼[W_1(μ_n,ν_{x⃗,m})] = W_1(μ,ν_m), which concludes the first part of the proof.

The limit of the expected Social Cost of the Mechanism. The argument used for this part is similar to the one used for the limit of the expected optimal Social Cost, but more delicate. Indeed, in this case, the set over which we minimize the Wasserstein distance does depend on the agents' reports x⃗. In particular, the sets over which problems (<ref>) and (<ref>) are formulated are, in general, different. To overcome this issue, we need to define two auxiliary probability measures, namely ϕ and ψ. Given x⃗∈ℝ^n, let ν_p⃗ and ν_{x⃗,p⃗} be the solutions to problems (<ref>) and (<ref>), respectively. We define the measure ϕ = ∑_{j∈[m]} (ν_{x⃗,p⃗})_j δ_{y_j}, where {y_j}_{j∈[m]} is the support of the measure ν_p⃗. For every n∈ℕ and every x⃗∈ℝ^n, we have W_1(μ,ν_p⃗) ≤ W_1(μ,ϕ) ≤ W_1(μ,μ_n) + W_1(μ_n,ν_{x⃗,p⃗}) + W_1(ν_{x⃗,p⃗},ϕ). We therefore infer

W_1(μ,ν_p⃗) − W_1(μ_n,ν_{x⃗,p⃗}) ≤ W_1(μ,μ_n) + W_1(ν_{x⃗,p⃗},ϕ).

Similarly, given x⃗∈ℝ^n, we define ψ = ∑_{j∈[m]} (ν_p⃗)_j δ_{y_{x⃗,j}}, where {y_{x⃗,j}}_{j∈[m]} is the support of ν_{x⃗,p⃗}. We then have

W_1(μ_n,ν_{x⃗,p⃗}) − W_1(μ,ν_p⃗) ≤ W_1(μ,μ_n) + W_1(ν_p⃗,ψ).

Since the Wasserstein distance is always non-negative, we can combine the estimates in (<ref>) and (<ref>) to obtain

|W_1(μ_n,ν_{x⃗,p⃗}) − W_1(μ,ν_p⃗)| ≤ W_1(μ,μ_n) + W_1(ν_p⃗,ψ) + W_1(ν_{x⃗,p⃗},ϕ).

If we take the expectation of both sides, the inequality still holds. Thus, if we show that lim_{n→∞} 𝔼[W_1(ν_p⃗,ψ)] = lim_{n→∞} 𝔼[W_1(ν_{x⃗,p⃗},ϕ)] = 0, we conclude this second step of the proof since, by <cit.>, we have lim_{n→∞} 𝔼[W_1(μ_n,μ)] = 0. Let us consider 𝔼[W_1(ν_p⃗,ψ)]; the convergence of 𝔼[W_1(ν_{x⃗,p⃗},ϕ)] follows by a similar argument. We notice that ψ and ν_p⃗ have different supports, but ψ_j = (ν_p⃗)_j, thus it holds 𝔼[W_1(ν_p⃗,ψ)] ≤ ∑_{j∈[m]} ψ_j 𝔼[|y_j − y_{x⃗,j}|], where y_{x⃗,j} is the j-th point in the support of ν_{x⃗,p⃗}. By definition of the ERM, it holds y_{x⃗,j} = x_{⌈p_j(n−1)⌉+1}. Since the (⌈p_j(n−1)⌉+1)-th order statistic converges to the p_j-th quantile of μ <cit.>, we have 𝔼[|y_j − y_{x⃗,j}|] → 0 as n→∞, which concludes the second part of the proof.

Characterizing the Bayesian approximation ratio. To conclude, notice that the distance between an absolutely continuous measure and a discrete measure is always greater than zero, thus lim_{n→∞} 𝔼[SC_opt(X⃗)] > 0. For this reason, the limit of the ratio is equal to the ratio of the limits, which proves (<ref>).

Notice that our result applies only to feasible ERM^(π,p⃗) such that p⃗∈(0,1)^m since, for general measures μ, the values F_μ^{[-1]}(0) and F_μ^{[-1]}(1) might not be finite. Moreover, in Appendix <ref>, we show that Theorem <ref> allows us to study the limit Bayesian approximation ratio of other mechanisms, such as the InnerPoint Mechanism or the Ranking Mechanisms. Finally, we notice that the Bayesian approximation ratio is invariant under positive affine transformations of μ. In particular, the limit of the Bayesian approximation ratio remains the same across all Gaussian-distributed populations.
Let q⃗ be a p.c.v. and let X be the random variable associated with μ. Given α>0 and β∈ℝ, let μ_{α,β} be the probability distribution associated with αX+β. Then, the asymptotic Bayesian approximation ratio of any feasible ERM is the same regardless of whether the agents' type is distributed according to μ or μ_{α,β}.

§ HOW TO SELECT AN OPTIMAL ERM

As shown in Theorem <ref>, the limit Bayesian approximation ratio of any ERM hinges upon μ, q⃗, π, and p⃗. While μ and q⃗ are beyond the control of the mechanism designer, both π and p⃗ serve as parameters that can be tuned depending on μ and q⃗. In this section, we study how to determine an optimal ERM tailored to μ and q⃗. Given μ and q⃗, we say that a feasible ERM^(π,p⃗) is optimal if

lim_{n→∞} B_ar(ERM^(π,p⃗)) ≤ lim_{n→∞} B_ar(ERM^(π',p⃗'))

for any other feasible ERM^(π',p⃗'). The main result of this section asserts that an optimal ERM exists and characterizes its parameters (π,p⃗) as the solution to a suitable minimization problem.

Given μ and a p.c.v. q⃗, there always exists a tuple (π,p⃗) whose associated ERM, that is ERM^(π,p⃗), is optimal. Moreover, the couple (π,p⃗) is a solution to the following minimization problem:

min_{π∈𝒮_m} min_{p⃗∈(0,1)^m} W_1(μ, ∑_{j∈[m]} η_j δ_{F_μ^{[-1]}(p_j)})

such that p_{j+1} − p_{j−1} ≤ q_{π(j)} for j=2,…,m−1, p_2 ≤ q_{π(1)}, and 1 − p_{m−1} ≤ q_{π(m)}, where η_1 = F_μ((F_μ^{[-1]}(p_1)+F_μ^{[-1]}(p_2))/2), η_j = F_μ((F_μ^{[-1]}(p_j)+F_μ^{[-1]}(p_{j+1}))/2) − F_μ((F_μ^{[-1]}(p_{j−1})+F_μ^{[-1]}(p_j))/2) for j=2,…,m−1, and η_m = 1 − ∑_{j∈[m−1]} η_j.

Notice that the limit Bayesian approximation ratio of the optimal ERM does not necessarily converge to 1, as the next example shows.

Let μ be the uniform distribution 𝒰[0,1] and let q⃗=(0.8,0.4) be a p.c.v. Since μ is symmetric, one of the solutions to problem (<ref>) is ν_2 = 0.4·δ_{0.2} + 0.6·δ_{0.7} (the other one is ν_2' = 0.6·δ_{0.3} + 0.4·δ_{0.8}). However, by Corollary <ref>, there does not exist a permutation π∈𝒮_m such that ERM^(π,(0.2,0.7)) is feasible. Thus, by Theorem <ref>, no feasible ERM is such that lim_{n→∞} B_ar(ERM^(π,p⃗)) = 1.
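The minimization problem above can also be attacked numerically. The following rough sketch (our own, under the assumption μ = 𝒰[0,1], so F_μ^{[-1]} is the identity) grids over the permutations and hands each constrained subproblem to a generic solver; for q⃗ = (0.8, 0.4) it should recover a cost of about 0.21 at p⃗ ≈ (0.6, 0.8) or its mirror image.

```python
import itertools
import numpy as np
from scipy.optimize import minimize

def W(p):  # objective for mu = U[0,1], where F_mu^{-1} = id
    xs = np.linspace(0.0, 1.0, 20001)
    return np.trapz(np.abs(xs[:, None] - np.asarray(p)[None, :]).min(axis=1), xs)

def optimal_erm(q):
    """Try every permutation pi and solve the constrained problem for each."""
    m, best = len(q), (np.inf, None, None)
    for pi in itertools.permutations(range(m)):
        qp = np.asarray(q)[list(pi)]
        # feasibility: q_{pi(j)} >= p_{j+1} - p_{j-1}, with p_0 = 0, p_{m+1} = 1
        cons = [{'type': 'ineq',
                 'fun': (lambda p, j=j, c=qp[j]:
                         c - ((p[j + 1] if j + 1 < m else 1.0)
                              - (p[j - 1] if j > 0 else 0.0)))}
                for j in range(m)]
        res = minimize(W, np.linspace(0.3, 0.7, m), constraints=cons,
                       bounds=[(1e-3, 1 - 1e-3)] * m)
        if res.success and res.fun < best[0]:
            best = (res.fun, pi, res.x)
    return best

print(optimal_erm([0.8, 0.4]))   # cost ~0.21 at p ~ (0.6, 0.8) (or its mirror)
```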
We now characterize the optimal ERM in two specific cases. In the first one, the total capacity of the facilities is the same as the number of agents. In the second one, we need to place two capacitated facilities and μ is a uniform distribution.

§.§ The No-Spare Capacity Case

In the no-spare capacity case, the total capacity of the facilities matches the number of agents, thus ∑_{j∈[m]} q_j = 1. Due to Corollary <ref>, the only feasible ERMs are the ones for which p_j = p ∈ [0,1] for every j∈[m]. Thus, owing to the properties of the W_1 distance and to Lemma <ref>, the optimal ERM is the median mechanism ERM^(Id,(0.5,…,0.5)), which places all the facilities at the median agent.

In the no-spare capacity case, the optimal ERM is unique and is the median mechanism.

When the agents are distributed following a uniform distribution, that is μ=𝒰[0,1], we can express the limit Bayesian approximation ratio of the median mechanism as a function of q⃗. Indeed, since ∑_{j∈[m]} q_j = 1, any solution to problem (<ref>) separates the interval [0,1] into m intervals whose lengths are the q_j; notice that the order in which [0,1] is divided is irrelevant. Furthermore, each facility is placed at the median of its interval, thus the objective value of (<ref>) is (1/4)∑_{j∈[m]} q_j². By Theorem <ref>, the limit Bayesian approximation ratio of the median mechanism is

lim_{n→∞} B_ar(ERM^(Id,(0.5,…,0.5))) = (∑_{j∈[m]} q_j²)^{−1},

since the asymptotic cost of placing all the facilities at the median point is ∫_0^1 |x−0.5| dx = 1/4. Notice that the limit Bayesian approximation ratio gets closer to 1 as the mass of q⃗ becomes concentrated at one index. Conversely, if all the facilities have the same capacity 1/m, the Bayesian approximation ratio of the median mechanism becomes the largest possible, that is lim_{n→∞} B_ar = m.

Comparing the Ranking Mechanisms with the ERMs. To the best of our knowledge, the only other truthful mechanisms capable of placing more than 2 facilities in the no-spare capacity case are the Ranking Mechanisms <cit.>. To close the section, we show that the median mechanism is also the asymptotically best possible Ranking Mechanism. Indeed, by <cit.>, we know that a truthful Ranking Mechanism either (i) puts all the facilities at the same position, or (ii) places all the facilities at two adjacent agents' reports, namely x_t and x_{t+1}; not all values of t are feasible, however, as there must exist J'⊂[m] such that t = ∑_{j∈J'} c_j. Owing to Theorem <ref>, in case (i) the best possible mechanism is the median mechanism. In case (ii), the limit Bayesian approximation ratio of any such mechanism is either equal or higher (see Appendix <ref>). Indeed, consider a population distributed uniformly, i.e. μ=𝒰[0,1], q⃗=(1/3,1/3,1/3), and the Ranking Mechanism that places the facility with capacity q_1 at x_{⌈(n−1)/3⌉+1} and the other two facilities at x_{⌈(n−1)/3⌉+2}. Since every agent is assigned to its closest facility, it is possible to adapt Lemma <ref> and Theorem <ref> and show that the limit of the expected Social Cost of this mechanism is W_1(μ,δ_{1/3}) = 5/18. Hence, the limit Bayesian approximation ratio of the considered Ranking Mechanism is ∼3.33, while the median mechanism achieves lim_{n→∞} B_ar = 3.
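The closed form for the median mechanism's limit ratio under 𝒰[0,1] is easy to evaluate; the snippet below (our own check) reproduces the value 3 for equal capacities and the 10/3 ≈ 3.33 figure of the Ranking-Mechanism example above.

```python
# Limit Bayesian ratio of the median mechanism under U[0,1]: 1 / sum(q_j^2).
for q in ([1/3, 1/3, 1/3], [0.5, 0.3, 0.2], [0.8, 0.1, 0.1]):
    print(q, 1.0 / sum(v * v for v in q))   # -> 3.0, ~2.63, ~1.52
# Ranking Mechanism of the example: cost W_1(U[0,1], delta_{1/3}) = 5/18,
# optimal cost (1/4) * sum(q_j^2) = 1/12, ratio (5/18) / (1/12) = 10/3.
print((5 / 18) / (1 / 12))                  # ~3.333
```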
§.§ Placing two capacitated facilities for a uniform population

In this section, we retrieve the optimal ERMs when μ=𝒰[0,1] and m=2. For the sake of simplicity, we divide the discussion into three steps. First, we demonstrate that both permutations in 𝒮_2 yield equally optimal ERMs. Second, we make the objective function of problem (<ref>) explicit as a function of p⃗ and compute its derivatives. Lastly, we retrieve the optimal ERM for any given q⃗ using Lagrange multipliers. In Appendix <ref>, we show how to generalize this process to the case in which μ is symmetric, and to other relevant frameworks. First, notice that, since F_{𝒰[0,1]}^{[-1]}(p) = p for every p∈[0,1], the objective value of problem (<ref>) boils down to

𝒲(p⃗) := W_1(μ, ((p_1+p_2)/2) δ_{p_1} + (1 − (p_1+p_2)/2) δ_{p_2}).

Step 1. Since m=2, 𝒮_2={Id,θ}, where Id is the identity permutation, i.e. Id(i)=i, and θ switches 1 and 2. We show that if (Id,p⃗) satisfies system (<ref>), then there exists a vector p⃗' such that (θ,p⃗') satisfies (<ref>) and 𝒲(p⃗)=𝒲(p⃗'). Given p⃗=(p_1,p_2), let us define p⃗'=(p_1',p_2')=(1−p_2,1−p_1). If (Id,p⃗) satisfies system (<ref>), we have p_2 ≤ q_{Id(1)} = q_1 and 1−p_1 ≤ q_{Id(2)} = q_2. It is then easy to see that p_2' = 1−p_1 ≤ q_2 = q_{θ(1)} and, likewise, 1−p_1' ≤ q_{θ(2)}. Thus, (θ,p⃗') satisfies (<ref>). Finally, due to the symmetry of μ with respect to 0.5, we have 𝒲(p⃗)=𝒲(p⃗'). Therefore, for both Id and θ there exists a vector p⃗ that induces an optimal ERM.

Step 2. Let us fix π=Id. Due to the properties of the optimal transportation plan on the line <cit.>, we have 𝒲(p⃗) = ∫_0^1 min{|x−p_1|,|x−p_2|} dx, thus

𝒲(p⃗) = p_1²/2 + (1−p_2)²/2 + (p_2−p_1)²/4.

From a simple computation, we infer that the objective function 𝒲(p⃗) is differentiable and retrieve the formula of its gradient:

∇𝒲(p⃗) = (1/2)(3p_1 − p_2, 3p_2 − 2 − p_1).

Notice that the set of points at which the first component of the gradient vanishes is the line p_2 = 3p_1, while the set of points at which the second component vanishes is p_2 = (p_1+2)/3; thus ∇𝒲(p⃗)=(0,0) if and only if p⃗=(0.25,0.75).

Step 3. Lastly, to detect an optimal ERM, we need to implement the feasibility constraints described in Theorem <ref>. By Step 1, we focus on the set of constraints induced by Id∈𝒮_m. In this case, p⃗ must satisfy the following constraints: (i) p_2 ≤ q_1, (ii) 1−p_1 ≤ q_2, that is 1−q_2 ≤ p_1, and (iii) p_1 ≤ p_2. Thus the set of feasible p⃗ is a triangle, namely T(q⃗), whose vertices are (q_1,q_1), (1−q_2,1−q_2), and (1−q_2,q_1). Finally, we retrieve the best vector p⃗ depending on the value of q⃗. First, notice that if (0.25,0.75)∈T(q⃗), which happens if and only if q_1,q_2 ≥ 0.75, then the minimum is attained at p⃗=(0.25,0.75), as it is the only point at which the gradient ∇𝒲 vanishes, and the Hessian matrix is positive definite. For all the other values of q⃗, we need to search for the minimum over the boundary of T(q⃗), since the gradient is never null in the interior of T(q⃗). First, notice that the minimum cannot lie on the segment connecting the vertices (1−q_2,1−q_2) and (q_1,q_1): on the line p_1=p_2=p we have ∇𝒲(p⃗) = (1/2)(2p, 2p−2), hence the gradient is never perpendicular to the line p_1=p_2. We then need to search for the minimum in the sets {(1−q_2,t) : t∈[1−q_2,q_1]} and {(t,q_1) : t∈[1−q_2,q_1]}.

Let q⃗ be a p.c.v. such that q_2 ≤ q_1; then an optimal ERM for a uniformly distributed population is induced by (Id,p⃗), where (i) p⃗=(0.25,0.75) if q_2 ≥ 0.75, (ii) p⃗=(1−q_2,q_1) if 3q_1 + q_2 − 3 ≤ 0, and (iii) p⃗=(1−q_2, 1−q_2/3) otherwise.
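The case analysis above translates into a one-screen selector. The sketch below is our own; note that it uses the switch 3q_1 + q_2 − 3 ≤ 0 for case (ii), which is what the sign analysis of ∇𝒲 over T(q⃗) yields (the extracted source prints a slightly different expression), and it reproduces the p⃗ values quoted later in the experiments.

```python
def optimal_p_uniform(q1, q2):
    """Case analysis of the theorem for mu = U[0,1], m = 2, q2 <= q1."""
    assert q2 <= q1
    if q2 >= 0.75:
        return (0.25, 0.75)                 # unconstrained optimum is feasible
    if 3 * q1 + q2 - 3 <= 0:
        return (1 - q2, q1)                 # corner of the triangle T(q)
    return (1 - q2, 1 - q2 / 3)             # interior of the edge p1 = 1 - q2

print(optimal_p_uniform(0.8, 0.4))    # (0.6, 0.8)   as in the text
print(optimal_p_uniform(0.85, 0.35))  # (0.65, 0.85) as in the experiments
print(optimal_p_uniform(0.9, 0.6))    # (0.4, 0.8)   an instance of case (iii)
```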
Lastly, we compare the performances of an optimal ERM and the Extended Endpoint Mechanism (EEM), introduced in <cit.>.

Bayesian Analysis of the EEM. The EEM is a truthful mechanism that can locate any two capacitated facilities. The routine of the EEM is as follows. Given x⃗=(x_1,…,x_n), a vector containing the agents' reports ordered non-decreasingly, we define A_1 = {x_i : |x_i−x_1| ≤ |x_1−x_n|/2} and A_2 = {x_i : x_i∉A_1}. Depending on the cardinality of the sets A_i and the capacities of the facilities, the EEM determines the positions of the facilities following one of six possible routines (see Appendix <ref>). To analyze the convergence of the Bayesian approximation ratio of the EEM, we need to study the costs of all six possible outcomes, weight them by the likelihood of their occurring, and take the limit as n goes to +∞.[Since the EEM cannot be phrased as an ERM, we cannot rely on Theorem <ref>.] For instance, let us consider Example <ref>. In this case, as the number of agents increases, the EEM will either

(i) place the facility with capacity q_1 at y_1=x_1 and the facility with capacity q_2 at y_2 = 2x_{n−(⌈q_2(n−1)⌉+1)} − x_1 with probability 0.5, or
(ii) place the facility with capacity q_1 at y_2=x_n and the facility with capacity q_2 at y_1 = 2x_{⌈q_2(n−1)⌉+2} − x_n with probability 0.5.

Since μ=𝒰[0,1], we have x_1 → 0, x_n → 1, x_{n−(⌈q_2(n−1)⌉+1)} → F_μ^{[-1]}(1−q_2) = 1−q_2, and x_{⌈q_2(n−1)⌉+2} → F_μ^{[-1]}(q_2) = q_2. In particular, the limit expected Social Cost of the mechanism is

(1/2) W_1(μ, q_2 δ_{2q_2−1} + (1−q_2) δ_1) + (1/2) W_1(μ, (1−q_2) δ_0 + q_2 δ_{2−2q_2}).

Since μ is symmetric with respect to 1/2, we have W_1(μ, q_2δ_{2q_2−1} + (1−q_2)δ_1) = W_1(μ, (1−q_2)δ_0 + q_2δ_{2−2q_2}) = (1−q_2)²/2 + q_2(2−3q_2)/2 = 0.34, hence the limit Bayesian approximation ratio of the EEM is ∼2.62. By Theorem <ref>, the optimal ERM is the one induced by p⃗=(0.6,0.8), and its limit Bayesian approximation ratio is ∼1.62. The sub-efficiency of the EEM with respect to an optimal ERM is due to the fact that the EEM places both facilities outside the interval (x_1,x_n), which, in the limit, leads to a loss of efficiency. Moreover, it is worth noticing that the convergence of the expected Social Cost of the EEM depends on the convergence of the first or last order statistic.
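The two closed-form limit costs above, and the gap between the mechanisms, can be verified with a few lines of arithmetic (our own check; 0.34/0.21 ≈ 1.62 is exactly the ratio between the quoted Bayesian approximation ratios 2.62 and 1.62, since both share the same optimal-cost denominator).

```python
q2 = 0.4                                   # q = (0.8, 0.4), mu = U[0,1]
eem = (1 - q2) ** 2 / 2 + q2 * (2 - 3 * q2) / 2        # limit cost of the EEM
p1, p2 = 0.6, 0.8                                      # optimal ERM quantiles
erm = p1 ** 2 / 2 + (1 - p2) ** 2 / 2 + (p2 - p1) ** 2 / 4
print(eem, erm, eem / erm)                 # 0.34, 0.21, ~1.619 (= 2.62/1.62)
```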
§ NUMERICAL EXPERIMENTS

In this section, we complement our theoretical study of the ERMs by running several numerical experiments. Indeed, most of our results pertain to the limit analysis of the mechanisms. For this reason, we want to test two aspects of the ERMs. First, we want to compare the Bayesian approximation ratio of the ERMs with the Bayesian approximation ratios of other truthful mechanisms when the number of agents n is small. Since the Ranking Mechanisms are a subset of the ERMs and have been discussed in Section <ref>, we only consider the EEM and, when possible, the IG. Second, we want to evaluate the convergence speed of the Bayesian approximation ratio of ERM^(π,p⃗); more specifically, we want to assess how close the Bayesian approximation ratio of an ERM and the limit detected in Theorem <ref> are when the number of agents is small. We run our experiments for different distributions μ and percentage capacity vectors q⃗. All the experiments are performed in Matlab 2023a on a macOS Monterey system with an Apple M1 Pro CPU and 16GB of RAM. The code is available at <https://anonymous.4open.science/r/Bayesian-CFLP-38D5/>.

Experiment setup. Throughout our experiments, we sample the agents' positions from three different probability distributions: the uniform distribution 𝒰[0,1], the standard normal distribution 𝒩(0,1), and the exponential distribution Exp(1). Owing to Corollary <ref>, we do not consider other parameter choices, since testing the mechanisms over the standard Gaussian distribution is the same as testing them over a generic normal distribution 𝒩(𝔪,σ²). Since our goal is to compare the performances of the ERMs with those of other relevant mechanisms, we limit our tests to cases in which m=2, as all the known truthful mechanisms different from the Ranking Mechanisms operate only under this restriction. We consider different percentage capacity vectors q⃗∈(0,1)². Specifically, we consider balanced capacities q⃗=(q,q) and unbalanced capacities q⃗=(q_1,q_2), q_1≠q_2. For the balanced case, we consider q=0.7, 0.8, and 0.9. For the unbalanced case, we consider slightly unbalanced capacities, i.e. q⃗=(0.85,0.75), and extremely unbalanced capacities, i.e. q⃗=(0.8,0.4) and (0.85,0.35). As benchmark mechanisms, we consider the EEM <cit.> and, when possible, the IG <cit.>.

Experiment results – Comparison with the EEM and the IG. We first consider the case of balanced capacities q⃗=(q,q), in which we are able to compare the Bayesian approximation ratios of three different mechanisms for the m-CFLP: the EEM, the IG, and the ERMs. Regardless of the distribution, we consider the ERM that is optimal with respect to the uniform distribution, i.e. ERM^(π,p⃗) with π=Id and p⃗=(max{0.25, 1−q}, min{0.75, q}). Figure <ref> (and Table <ref> in Appendix <ref>) shows the average and the 95% confidence interval (CI) of the Bayesian approximation ratio for n=10,20,30,40,50. Each average is computed over 500 instances. We observe that, in most cases, the ERM achieves the lowest Bayesian approximation ratio compared to the other two mechanisms. When q=0.7, the ERM is still better than the EEM but slightly worse than the IG; however, the empirical Bayesian approximation ratios of the ERM and the IG converge to the same value as the number of agents increases.

Next, we consider the case of unbalanced capacities, where q_1≠q_2, specifically q⃗=(0.85,0.75), (0.8,0.4), (0.85,0.35). Since the IG requires the two capacities to be identical, we compare only the ERM and the EEM. Amongst the possible ERMs, we select the one optimal with respect to the uniform distribution, obtained via Theorem <ref>; thus the parameters of ERM^(π,p⃗) are (i) π=Id for every q⃗ and (ii) p⃗=(0.25,0.75) for q⃗=(0.85,0.75), p⃗=(0.6,0.8) for q⃗=(0.8,0.4), and p⃗=(0.65,0.85) for q⃗=(0.85,0.35). In this case, we consider only symmetric probability distributions, i.e. the Gaussian and the uniform distribution. Figure <ref> (and Table <ref> in Appendix <ref>) shows the average and the 95% CI of the Bayesian approximation ratio computed over 500 instances. Whenever n≥20, the ERM has a much lower approximation ratio. The case q⃗=(0.85,0.75) is particularly interesting: here the ERM is optimal for both distributions and its limit Bayesian approximation ratio is 1. Indeed, the Bayesian approximation ratio of the ERM is almost equal to 1 for every n≥10 and gets closer as n increases, while the Bayesian approximation ratio of the EEM is always ≥1.79 and gets worse as n increases.

Experiment results – Convergence speed of the limit Bayesian approximation ratio. Lastly, we test how close the Bayesian approximation ratio of the ERM is to the limit detected in Theorem <ref>. That is, we calculate the relative error as

err_rel = (empirical B_ar − limit of B_ar) / (limit of B_ar).

Figure <ref> (and Table <ref> in Appendix <ref>) shows the relative error for the six cases. Each average is computed over 500 instances.
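A minimal Monte-Carlo sketch of such an experiment is given below (our own, for q⃗ = (0.85, 0.75) under 𝒰[0,1]). For simplicity the denominator is the limit optimal cost W_1(μ,ν_2) = 1/8 rather than the finite-n expected optimum, so small-n values are only indicative; ERM^(Id,(0.25,0.75)) is feasible for this q⃗, so nearest-facility distances give the correct social cost.

```python
import numpy as np

rng = np.random.default_rng(1)
p, opt_limit = np.array([0.25, 0.75]), 0.125   # W_1(U[0,1], nu_2) = 1/8
for n in (10, 50, 200):
    costs = []
    for _ in range(500):
        x = np.sort(rng.uniform(size=n))
        y = x[np.ceil(p * (n - 1)).astype(int)]          # facility positions
        costs.append(np.abs(x[:, None] - y[None, :]).min(axis=1).mean())
    print(n, np.mean(costs) / opt_limit)                 # tends to 1
```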
We observe that the relative error decreases as the number of agents increases, regardless of the distribution or the p.c.v. q⃗. Moreover, in all cases but q⃗=(0.8,0.4) and (0.85,0.35), the relative errors of the ERM are less than 0.05 as long as n≥20, which validates Theorem <ref> as a tool to predict the Bayesian approximation ratio for small values of n. In the other cases, we observe a slower convergence, as the percentage error is slightly larger; its highest value is 0.33.

§ CONCLUSION AND FUTURE WORKS

In this paper, we introduced the Extended Ranking Mechanisms, a generalization of the Ranking Mechanisms introduced in <cit.>. After establishing the conditions under which ERMs remain truthful, we examined them from a Bayesian Mechanism Design perspective. Specifically, we characterized the limit of the Bayesian approximation ratio of every truthful ERM in terms of the probability distribution μ, the p.c.v. q⃗, and the mechanism's parameters, namely π and p⃗. We also delved into the study of optimal ERMs: we showed that such a mechanism exists and characterized it via a minimization problem, which we solved in two relevant frameworks. Lastly, we conducted extensive numerical experiments to validate our findings, from which we inferred that a well-tuned ERM consistently outperforms all other known mechanisms. For future work, we aim to extend our studies to other relevant metrics, such as the Maximum Cost or the l_p-costs. Another interesting extension would be to adapt the ERMs to handle problems in higher dimensions using the decomposition proposed in <cit.>. Lastly, we plan to study the connections between the Optimal Transportation problem and other Mechanism Design problems.

Gennaro Auricchio and Jie Zhang are partially supported by a Leverhulme Trust Research Project Grant (2021–2024). Jie Zhang is also supported by the EPSRC grant (EP/W014912/1).

§ APPENDIX

In this appendix, we report all the missing material. In Appendix <ref>, we report the missing proofs. In Appendix <ref>, we study the optimal ERM for symmetric measures. In Appendix <ref>, we report the Bayesian study of the InnerPoint Mechanism and the Extended Endpoint Mechanism. In Appendix <ref>, we report the additional numerical results.

§ MISSING PROOFS

Let x⃗ be the vector that contains the agents' reports ordered in non-decreasing order. First, we show that, whenever system (<ref>) holds, ERM^(π,p⃗) does not overload any facility. By definition of the ERM, every agent is assigned to its closest facility; furthermore, every facility shares its position with an agent, depending on the vector p⃗. Let us now focus on the j-th facility which, by definition, is placed at y_j = x_{⌈p_j(n−1)⌉+1}. Without loss of generality, let us assume that 1<j<m, so that there are two facilities, namely y_{j−1} and y_{j+1}, such that y_{j−1} ≤ y_j ≤ y_{j+1}. The maximum number of agents whose report is closer to y_j is ⌈p_{j+1}(n−1)⌉ − ⌈p_{j−1}(n−1)⌉ − 1. Since the capacity of the j-th facility is ⌈q_{π(j)}(n−1)⌉+1, the facility at y_j cannot be overloaded if and only if ⌈q_{π(j)}(n−1)⌉+1 ≥ ⌈p_{j+1}(n−1)⌉ − ⌈p_{j−1}(n−1)⌉ − 1. We now show that the latter condition is implied by q_{π(j)} ≥ p_{j+1} − p_{j−1} or, equivalently, by q_{π(j)}(n−1) ≥ (p_{j+1} − p_{j−1})(n−1). Indeed, since ⌈q_{π(j)}(n−1)⌉+1 ≥ q_{π(j)}(n−1), we have

⌈q_{π(j)}(n−1)⌉+1 ≥ q_{π(j)}(n−1) ≥ (p_{j+1} − p_{j−1})(n−1) ≥ ⌈p_{j+1}(n−1)⌉ − ⌈p_{j−1}(n−1)⌉ − 1.

Through a similar argument, we deal with y_1 and y_m.
We now show the reverse implication: if ERM^(π,p⃗) is feasible, then (π,p⃗) satisfies system (<ref>). Toward a contradiction, assume that one of the inequalities does not hold; without loss of generality, assume that p_2 > q_{π(1)}. There then exists a value N∈ℕ such that p_2 − 2/(N−1) > q_{π(1)}. Let us consider n=N; then it holds ⌈p_2(N−1)⌉ − 1 > ⌈q_{π(1)}(N−1)⌉, hence ⌈p_2(N−1)⌉ > ⌈q_{π(1)}(N−1)⌉ + 1. Let us then consider the instance x⃗∈ℝ^N defined as x_i=0 for i=1,…,⌈p_2(N−1)⌉ and x_i=1 otherwise. On this instance, ERM^(π,p⃗) places a facility at y_1=0 and all the others at 1, and the facility at 0 has capacity ⌈q_{π(1)}(N−1)⌉+1. By definition, the number of agents whose closest facility is y_1 is ⌈p_2(N−1)⌉ > ⌈q_{π(1)}(N−1)⌉+1, which contradicts the feasibility of the mechanism.

Let q⃗ be a p.c.v. and let ERM^(π,p⃗) be a feasible ERM. Toward a contradiction, let x⃗ be an instance in which an agent is able to manipulate. Without loss of generality, let us assume that the manipulative agent's real position is x_i. Since every agent is assigned to the facility that is closest to the position they report, the manipulation performed by the agent at x_i must alter the position of the facilities, as otherwise the cost of the manipulative agent cannot decrease. We notice that the positions of the facilities are determined through the same routine as in the Percentile Mechanisms <cit.>. Owing to the truthfulness of the Percentile Mechanisms for the m-FLP, we infer that no agent can misreport in such a way that a facility gets closer to its position, which concludes the proof.

It follows from Theorem <ref>: if ERM^(π,p⃗) is feasible, then (<ref>) holds. If we sum all the inequalities of system (<ref>), we get

∑_{j∈[m]} q_{π(j)} = ∑_{j∈[m]} q_j ≥ p_2 + (p_3−p_1) + … + (p_m−p_{m−2}) + (1−p_{m−1}) = 1 + p_m − p_1,

since p⃗ is ordered increasingly. Thus, if

p_m − p_1 = max_{j∈[m]} p_j − min_{j∈[m]} p_j > ∑_{j∈[m]} q_j − 1

holds, then system (<ref>) does not, which concludes the proof.

First, we show that (<ref>) holds. Given x⃗ and q⃗, let us consider an optimal solution to the m-CFLP. In particular, we denote by y⃗ the vector containing the m positions on the line, by σ the permutation that determines how the facilities are distributed among the positions in y⃗, and by Γ a matching that assigns every agent to a facility without overloading the facilities. Let us now consider the following transportation plan:

π_{x_i,y_j} = 1/n if (i,j)∈Γ, and π_{x_i,y_j} = 0 otherwise.

By definition of Γ, we have ∑_{j∈[m]} π_{x_i,y_j} = 1/n and ∑_{i∈[n]} π_{x_i,y_j} = k_j/n ≤ c_{σ(j)}/n = q_{σ(j)}, where k_j is the degree of j∈[m] according to Γ. Thus, if we set ν_{n,m} = ∑_{j∈[m]} (k_j/n) δ_{y_j}, we have ν_{n,m}∈𝒫_{σ,q⃗}(ℝ). It is easy to see that SC(x⃗) = ∑_{i∈[n],j∈[m]} |x_i−y_j| π_{x_i,y_j} ≥ W_1(μ_n,ν_{n,m}), hence

SC_opt(x⃗) ≥ min_{σ∈𝒮_m} min_{ζ∈𝒫_{σ,q⃗}(ℝ)} W_1(μ_n,ζ).

To conclude, we show that the inverse inequality holds.
Let (σ,ν_{m,n}) be a minimizer of the right-hand side of (<ref>); we show that there exists a set of m locations y⃗, a permutation σ', and a matching Γ whose Social Cost is W_1(μ_n,ν_{m,n}). First, let us set y⃗=(y_1,…,y_m), where {y_j}_{j∈[m]} is the support of ν_{m,n}. Without loss of generality, we assume that q_j = c_j/n; thus there exists an optimal transportation plan π between μ_n and ν_{m,n} such that, for every i∈[n], there exists a unique j∈[m] such that π_{x_i,y_j} ≠ 0, see <cit.>. Notice that, by definition of transportation plan, if π_{x_i,y_j} ≠ 0 for only one couple of indices i∈[n], j∈[m], then π_{x_i,y_j} = 1/n. Thus, the set Γ = {(i,j)∈[n]×[m] : π_{x_i,y_j} = 1/n} is well defined. Since for every i∈[n] there exists a unique j∈[m] for which π_{x_i,y_j} = 1/n, the degree of every i∈[n] is one. Since ∑_{i∈[n]} π_{x_i,y_j} = (ν_{m,n})_j ≤ c_{σ(j)}/n, the degree of every j∈[m] is at most c_{σ(j)}. Thus the triplet (y⃗,σ,Γ) is a feasible location for the m-CFLP. Since the Social Cost of (y⃗,σ,Γ) is equal to ∑_{i∈[n],j∈[m]} |x_i−y_j| π_{x_i,y_j} = W_1(μ_n,ν_{m,n}), we have

min_{σ∈𝒮_m} min_{ζ∈𝒫_{σ,q⃗}(ℝ)} W_1(μ_n,ζ) = W_1(μ_n,ν_{m,n}) ≥ SC_opt(x⃗),

which concludes the proof. Notice that, since min_{σ∈𝒮_m} min_{ζ∈𝒫_{σ,q⃗}(ℝ)} W_1(μ_n,ζ) = SC_opt(x⃗) holds, we can refine the previous argument and show that, given an optimal facility location for the m-CFLP, it is possible to build a solution to the minimization problem on the right-hand side of (<ref>); vice versa, given a solution to that minimization problem, it is possible to build an optimal facility location for the m-CFLP. Using the same argument and the same constructions, it is possible to show that (<ref>) holds.

To complete the proof of Theorem <ref>, we need to prove the following: (i) first, we need to show that a solution ν_m to problem (<ref>) always exists, which completes the proof of the first step; (ii) we need to show that there always exists a solution ν_p⃗ to problem (<ref>); and (iii) we need to show that 𝔼[W_1(ν_{x⃗,p⃗},ϕ)] → 0 when n→∞.

(i) Existence of a solution ν_m. Since for every m the number of permutations is finite, it suffices to show that, for every σ∈𝒮_m, there exists a solution to the problem

min_{ζ∈𝒫_{σ,q⃗}(ℝ)} W_1(μ,ζ).

Since μ has finite first moment, i.e. ∫_ℝ |x| dμ < +∞, the minimization problem is well-posed and its infimum is finite. Moreover, the set 𝒫_{σ,q⃗}(ℝ) is closed in 𝒫(ℝ), thus a solution to the minimization problem does exist (although it may not be unique).

(ii) Existence of a solution ν_p⃗. We show this point by constructing a minimizer of problem (<ref>). Given p⃗, let us set y_j = F_μ^{[-1]}(p_j). For every λ = ∑_{j∈[m]} λ_j δ_{y_j} such that ∑_{j∈[m]} λ_j = 1, we have

W_1(μ,λ) = ∑_{j∈[m]} ∫_{F_μ^{[-1]}(∑_{l=1}^{j−1} λ_l)}^{F_μ^{[-1]}(∑_{l=1}^{j} λ_l)} |x−y_j| dμ ≥ ∫_ℝ min_{j∈[m]} |x−y_j| dμ.

Let us now consider the measure

λ_min = ∑_{j∈[m]} (F_μ(z_j) − F_μ(z_{j−1})) δ_{y_j},

where z_j = (y_{j+1}+y_j)/2 for j=1,…,m−1, z_0=−∞, and z_m=+∞. We then have

W_1(μ,λ_min) = ∑_{j∈[m]} ∫_{F_μ^{[-1]}(∑_{l=1}^{j−1}(F_μ(z_l)−F_μ(z_{l−1})))}^{F_μ^{[-1]}(∑_{l=1}^{j}(F_μ(z_l)−F_μ(z_{l−1})))} |x−y_j| dμ = ∑_{j∈[m]} ∫_{z_{j−1}}^{z_j} |x−y_j| dμ = ∫_ℝ min_{j∈[m]} |x−y_j| dμ.

Thus λ_min is a minimizer.

(iii) Convergence of 𝔼[W_1(ν_{x⃗,p⃗},ϕ)].
We recall that ϕ = ∑_{j∈[m]} (ν_{x⃗,p⃗})_j δ_{y_j}. Since (ν_{x⃗,p⃗})_j ≤ 1 for every j∈[m], we infer

𝔼[W_1(ν_{x⃗,p⃗},ϕ)] ≤ ∑_{j∈[m]} 𝔼[|y_{x⃗,j} − y_j|],

where y_{x⃗,j} = x_{⌈p_j(n−1)⌉+1} and y_j = F_μ^{[-1]}(p_j). Using Bahadur's representation formula (see <cit.> and <cit.>), we have

X_{⌈(n−1)p_j⌉+1} − F_μ^{[-1]}(p_j) = (S_n(F_μ^{[-1]}(p_j)) − p_j)/ρ_μ(F_μ^{[-1]}(p_j)) + R_n,

where R_n is the remainder of Bahadur's formula, for which R_n ≤ O(n^{−3/4}) holds with probability 1, ρ_μ is the density of μ, and S_n(t) = (1/n)∑_{i=1}^n 𝕀_{X_i≤t}, where 𝕀_{X_i≤t} = 1 if X_i ≤ t and 0 otherwise. Since by hypothesis the density of μ is strictly positive in the interior of the support of μ and p_j∈(0,1) for every j=1,…,m, we have

𝔼[|y_{x⃗,j} − y_j|] ≤ K 𝔼[|S_n(F_μ^{[-1]}(p_j)) − p_j|] + 𝔼[|R_n|]
 ≤ K 𝔼[|(1/n)∑_{i=1}^n (𝕀_{X_i ≤ F_μ^{[-1]}(p_j)} − p_j)|] + C n^{−3/4}
 ≤ (K/√n) Var(𝕀_{X_1 ≤ F_μ^{[-1]}(p_j)})^{1/2} + C n^{−3/4},

since the X_i are i.i.d. and 𝕀_{X_i ≤ F_μ^{[-1]}(p_j)} is a Bernoulli variable equal to 1 with probability p_j and to 0 otherwise; hence its variance is finite. We then conclude that 𝔼[W_1(ν_{x⃗,p⃗},ϕ)] converges to 0. Similarly, it is possible to show that 𝔼[W_1(ν_p⃗,ψ)] converges to 0.

Let ν_m be the solution to (<ref>) and let ν_m^{(α,β)} be the solution to

min_{σ∈𝒮_m} min_{ζ∈𝒫_{σ,q⃗}(ℝ)} W_1(μ_{α,β},ζ).

Let us fix L(x) = αx+β. Since α>0, L is a bijective and monotone increasing function; we denote its inverse by H. It is well known that if X is a random variable whose law is μ, then the law of L(X) = αX+β is L_#μ. We recall that L_#μ is the pushforward of the measure μ, defined as L_#μ(A) = μ(H(A)); see <cit.>. By definition, we have L_#δ_x = δ_{αx+β}, thus L_#(∑_{j∈[m]} λ_j δ_{x_j}) = ∑_{j∈[m]} λ_j δ_{αx_j+β}. We now show that ν_m^{(α,β)} = L_#ν_m. First, notice that, since α>0, the function L is monotone increasing, thus if ν_m∈𝒫_{σ,q⃗}(ℝ), then L_#ν_m∈𝒫_{σ,q⃗}(ℝ). Toward a contradiction, let us now assume that γ is such that

W_1(γ, L_#μ) < W_1(L_#ν_m, L_#μ).

Let us define η = H_#γ. Since H is also monotone increasing, we have η∈𝒫_{σ,q⃗}(ℝ) and γ = L_#η. Furthermore, by the properties of the Wasserstein distance, we have

W_1(L_#η, L_#μ) = α W_1(η,μ) and W_1(L_#ν_m, L_#μ) = α W_1(ν_m,μ).

In particular, we get W_1(η,μ) < W_1(ν_m,μ), which contradicts the optimality of ν_m. We therefore conclude that L_#ν_m = ν_m^{(α,β)} and that W_1(ν_m^{(α,β)}, μ_{α,β}) = α W_1(ν_m,μ). To conclude, we observe that

min_{λ_j ≤ q_{σ(j)}, ∑_{j∈[m]} λ_j = 1} W_1(μ_{α,β}, ∑_{j∈[m]} λ_j δ_{αy_j+β}) = α min_{λ_j ≤ q_{σ(j)}, ∑_{j∈[m]} λ_j = 1} W_1(μ, ∑_{j∈[m]} λ_j δ_{y_j})

for any set of m values {y_j}_{j∈[m]}. We then observe that F_{μ_{α,β}}^{[-1]}(p_j) = α F_μ^{[-1]}(p_j) + β, thus

min_{λ_j ≤ q_{σ(j)}, ∑_j λ_j=1} W_1(μ_{α,β}, ∑_{j∈[m]} λ_j δ_{αF_μ^{[-1]}(p_j)+β}) = min_{λ_j ≤ q_{σ(j)}, ∑_j λ_j=1} W_1(μ_{α,β}, ∑_{j∈[m]} λ_j δ_{F_{μ_{α,β}}^{[-1]}(p_j)}).

Finally, we notice that, by taking the ratio of the two expectations, the factor α cancels out. Thus the limit of the Bayesian approximation ratio of any ERM does not depend on the values of α and β.

Since for every m there is a finite number of permutations π∈𝒮_m, it suffices to show that for every σ∈𝒮_m there exists a solution to the problem

min_{p⃗∈(0,1)^m} W_1(μ, ∑_{j∈[m]} η_j(p⃗) δ_{F_μ^{[-1]}(p_j)})

such that p_{j+1} − p_{j−1} ≤ q_{π(j)} for j=2,…,m−1, p_2 ≤ q_{π(1)}, and 1 − p_{m−1} ≤ q_{π(m)}, where η_j(p⃗) = F_μ((F_μ^{[-1]}(p_j)+F_μ^{[-1]}(p_{j+1}))/2) − F_μ((F_μ^{[-1]}(p_{j−1})+F_μ^{[-1]}(p_j))/2) for j=2,…,m−1, η_1(p⃗) = F_μ((F_μ^{[-1]}(p_1)+F_μ^{[-1]}(p_2))/2), and η_m(p⃗) = 1 − ∑_{j∈[m−1]} η_j(p⃗). By the same argument used in the second step of the proof of Theorem <ref>, we have that, given p⃗, the measure

η(p⃗) = ∑_{j∈[m]} η_j(p⃗) δ_{F_μ^{[-1]}(p_j)}

is such that
W_1(μ, η(p⃗)) = min_{λ_j ≤ q_{σ(j)}, ∑_{j∈[m]} λ_j = 1} W_1(μ, ∑_{j∈[m]} λ_j δ_{F_μ^{[-1]}(p_j)}).

First, we consider the case in which μ has compact support; hence the minimization problem is well defined over the closed set [0,1]^m, since the values F_μ^{[-1]}(0) and F_μ^{[-1]}(1) are well defined. Notice that [0,1]^m is compact, thus if we show that the objective function of the problem is continuous with respect to p⃗, we conclude the proof for this case. Let p⃗^{(n)} be a sequence of feasible vectors that converges to p⃗. By the assumptions on μ, both F_μ and F_μ^{[-1]} are continuous, hence the map p⃗ → η_j(p⃗) is continuous for every j∈[m]. Likewise, the function p⃗ → y_j(p⃗) := F_μ^{[-1]}(p_j) is continuous for every j∈[m]. As a consequence, the sequence η_n = η(p⃗^{(n)}) converges weakly to η(p⃗) in 𝒫(ℝ), <cit.>. Since W_1 metrizes weak convergence over 𝒫(ℝ) <cit.>, we infer

lim_{n→∞} W_1(μ, η(p⃗^{(n)})) = W_1(μ, η(p⃗)).

To conclude the proof, we need to show that the minimizer, namely p⃗, is such that p_j ≠ 0,1 for every j∈[m]. Toward a contradiction, let us assume that p_1 = 0. Without loss of generality, let us assume that p_2 > 0, thus y_1 < y_2. By definition, we have F_μ^{[-1]}(p_1) = a and F_μ(F_μ^{[-1]}(p_1)) = p_1 = 0. Let us denote by [a,b] the support of μ; then there exists an ϵ>0 such that a+ϵ < F_μ^{[-1]}(p_2), thus

𝒲(p⃗) = ∫_a^b min_{j∈[m]} |x − F_μ^{[-1]}(p_j)| dμ
 = ∫_a^{(a+F_μ^{[-1]}(p_2))/2} (x−a) dμ + ∫_{(a+F_μ^{[-1]}(p_2))/2}^b min_{j>1} |x − F_μ^{[-1]}(p_j)| dμ
 > ∫_a^{(a+F_μ^{[-1]}(p_2))/2} |x − (a+ϵ)| dμ + ∫_{(a+F_μ^{[-1]}(p_2))/2}^b min_{j>1} |x − F_μ^{[-1]}(p_j)| dμ
 ≥ ∫_a^b min_{j∈[m]} |x − F_μ^{[-1]}(p_j^{(ϵ)})| dμ = 𝒲(p⃗^{(ϵ)}),

where p⃗^{(ϵ)} = (F_μ(a+ϵ), p_2, …, p_m), which contradicts the optimality of p⃗.

Let us now consider the case in which μ has unbounded support. Let p⃗^{(n)} be a minimizing sequence; up to a subsequence, we can assume that p⃗^{(n)} converges to some p⃗ in [0,1]^m. We now show that p⃗∈(0,1)^m. First, let us assume that there exists a j∈[m] for which p_j∈(0,1). Without loss of generality, let us assume that p_1=0 and that 1>p_2>0 (if there are multiple entries of p⃗ equal to 0, the study is the same). Let us now consider v⃗^{(n)} such that v_1^{(n)} = p_1^{(n)} and v_j^{(n)} = p_j for j>1. Since v⃗^{(n)} and p⃗^{(n)} have the same limit and 𝒲 is continuous, v⃗^{(n)} is a minimizing sequence. Since p_1^{(n)} → 0, we have F_μ^{[-1]}(p_1^{(n)}) → −∞, thus

𝒲(v⃗^{(n)}) = ∫_{−∞}^{(y_1^{(n)}+y_2)/2} |x − F_μ^{[-1]}(p_1^{(n)})| dμ + ∫_{(y_1^{(n)}+y_2)/2}^{+∞} min_{j>1} |x − F_μ^{[-1]}(p_j)| dμ.

Since F_μ^{[-1]}(p_1^{(n)}) → −∞, we have

lim_{n→∞} 𝒲(v⃗^{(n)}) = ∫_{−∞}^{+∞} min_{j>1} |x − F_μ^{[-1]}(p_j)| dμ.

Let us consider the vector p⃗^{(ϵ)} = (ϵ, p_2, …, p_m), where p_2 > ϵ > 0; then it holds

∫_{−∞}^{+∞} min_{j>1} |x − F_μ^{[-1]}(p_j)| dμ > ∫_{−∞}^{+∞} min_{j∈[m]} |x − F_μ^{[-1]}(p_j^{(ϵ)})| dμ = 𝒲(p⃗^{(ϵ)}),

which contradicts the optimality of p⃗. Through a similar argument, we deal with the case p⃗∈{0,1}^m.
Finally, we notice that the case in which the support of μ is unbounded only on one side is similar.

By Corollary <ref>, any truthful ERM, namely ERM^(π,p⃗), must be such that p⃗=(p,…,p) with p∈(0,1). Thus, to complete the proof it suffices to show that the function

M: y → ∫_ℝ |x−y| dμ

is decreasing on (−∞, med(μ)) and increasing on (med(μ), +∞). We do so by computing the derivative of M with respect to y. Without loss of generality, let us assume y∈(−∞, med(μ)) and consider

lim_{h→0^+} (∫_ℝ |x−(y+h)| dμ − ∫_ℝ |x−y| dμ)/h.

By the linearity of the integral, we have

∫_ℝ |x−y| dμ = ∫_{−∞}^y (y−x) dμ + ∫_y^{+∞} (x−y) dμ,

and similarly ∫_ℝ |x−(y+h)| dμ = ∫_{−∞}^{y+h} (y+h−x) dμ + ∫_{y+h}^{+∞} (x−y−h) dμ, so that

∫_ℝ |x−(y+h)| dμ − ∫_ℝ |x−y| dμ = 2∫_y^{y+h} (y−x) dμ + ∫_{−∞}^y h dμ − ∫_y^{+∞} h dμ ≤ h² + ∫_{−∞}^y h dμ − ∫_y^{+∞} h dμ,

hence ∂_y M(y) = F_μ(y) − (1−F_μ(y)). Since y < med(μ), we infer ∂_y M(y) < 0. Similarly, it holds ∂_y M(y) > 0 if y > med(μ), which concludes the proof.

Let q⃗ be such that q_2 ≥ 0.75. Then p⃗=(0.25,0.75) is a feasible vector and it holds ∇𝒲(0.25,0.75)=(0,0). By computing the Hessian of 𝒲, we have

H𝒲(p⃗) = (1/2) ( 3  −1 ; −1  3 ),

hence the matrix is positive definite, and thus (0.25,0.75) is the minimum in this case. Let us now consider the case in which 3q_1 + q_2 − 3 ≤ 0; in this case we have ∂_{y_1}𝒲(p⃗) > 0 and ∂_{y_2}𝒲(p⃗) < 0 on T(q⃗). Thus the gradient cannot be perpendicular to either {(1−q_2,t) : t∈[1−q_2,q_1]} or {(t,q_1) : t∈[1−q_2,q_1]}, hence the minimum is at the vertex (1−q_2,q_1), which concludes the proof for this case. In the remaining cases, the boundary of T(q⃗) intersects the line on which ∂_{y_2}𝒲 = 0. In particular, {∂_{y_2}𝒲 = 0} intersects the edge {(1−q_2,t) : t∈[1−q_2,q_1]} at the point (1−q_2, 1−q_2/3). At this point the gradient is perpendicular to the boundary of T(q⃗) and points inward, thus it is a relative minimum. To conclude, we show that this is the only possible relative minimum: it is easy to see that if the boundary of T(q⃗) intersects the line p_2=3p_1, the gradient points outward, due to the sign of ∇𝒲. This concludes the proof.

§ PLACING M FACILITIES AMONG SYMMETRICALLY DISTRIBUTED AGENTS

In this Appendix, we generalize the procedure described in Section <ref>. First, we consider the case in which m=2 and μ is symmetric. Since μ is symmetric, the asymptotic expected Social Cost of any feasible ERM does not depend on π∈𝒮_2. For this reason, in what follows we fix π=Id and search for the best vector p⃗=(p_1,p_2), with p_1≤p_2, that induces an optimal ERM. First, we rewrite the objective value of problem (<ref>) in terms of y_i = F_μ^{[-1]}(p_i) for i=1,2. Since F_μ^{[-1]} is monotone, we have y_1 ≤ y_2. We then have

𝒲(y_1,y_2) = W_1(μ, F_μ((y_1+y_2)/2) δ_{y_1} + (1 − F_μ((y_1+y_2)/2)) δ_{y_2}).

Since both F_μ and F_μ^{[-1]} are bijective, any y⃗=(y_1,y_2) identifies a unique p⃗ and vice versa. Using the properties of the Wasserstein distances on the line <cit.>, we obtain

𝒲(y_1,y_2) = ∫_{−∞}^{(y_1+y_2)/2} |x−y_1| ρ_μ(x) dx + ∫_{(y_1+y_2)/2}^{+∞} |x−y_2| ρ_μ(x) dx.

We now show that 𝒲 is differentiable and compute its gradient.

Let μ be a symmetric probability measure. Then the function 𝒲 is differentiable and its gradient is given by the formula

∇𝒲(y_1,y_2) = (2F_μ(y_1) − F_μ((y_1+y_2)/2), 2F_μ(y_2) − 1 − F_μ((y_1+y_2)/2)).

Furthermore, 𝒲 is twice differentiable. Lastly, it holds ∇𝒲(F_μ^{[-1]}(0.25), F_μ^{[-1]}(0.75)) = (0,0).
Let q⃗ be a p.c.v. Due to the symmetry of μ, without loss of generality, we assume that y_1 ≤ y_2 are the positions of the facilities and that y_1 has capacity q_1 and y_2 has capacity q_2. We then define

𝒲(y_1,y_2) = ∫_ℝ min{|x−y_1|, |x−y_2|} ρ_μ(x) dx = A(y_1,y_2) + B(y_1,y_2) + C(y_1,y_2) + D(y_1,y_2),

where

A(y_1,y_2) = ∫_{−∞}^{y_1} (y_1−x) ρ_μ(x) dx,
B(y_1,y_2) = ∫_{y_1}^{(y_1+y_2)/2} (x−y_1) ρ_μ(x) dx,
C(y_1,y_2) = ∫_{(y_1+y_2)/2}^{y_2} (y_2−x) ρ_μ(x) dx,
D(y_1,y_2) = ∫_{y_2}^{+∞} (x−y_2) ρ_μ(x) dx.

We now compute the derivatives of 𝒲 with respect to y_1 and y_2. Let us consider ∂_{y_1}𝒲(y_1,y_2). By definition, we have ∂_{y_1}𝒲 = ∂_{y_1}A + ∂_{y_1}B + ∂_{y_1}C, since ∂_{y_1}D = 0. First, let us consider ∂_{y_1}A(y_1,y_2). We have

(1/h)(∫_{−∞}^{y_1+h} (y_1+h−x) ρ_μ(x) dx − ∫_{−∞}^{y_1} (y_1−x) ρ_μ(x) dx) = ∫_{−∞}^{y_1+h} ρ_μ(x) dx + (1/h)∫_{y_1}^{y_1+h} (y_1−x) ρ_μ(x) dx.

By Lebesgue's theorem, lim_{h→0} (1/h)∫_{y_1}^{y_1+h} (y_1−x) ρ_μ(x) dx = 0, thus ∂_{y_1}A(y_1,y_2) = F_μ(y_1). Second, let us compute ∂_{y_1}B(y_1,y_2). We have

(1/h)(∫_{y_1+h}^{(y_1+y_2)/2+h/2} (x−y_1−h) ρ_μ(x) dx − ∫_{y_1}^{(y_1+y_2)/2} (x−y_1) ρ_μ(x) dx)
 = (1/h)(−h∫_{y_1+h}^{(y_1+y_2)/2+h/2} ρ_μ(x) dx + ∫_{(y_1+y_2)/2}^{(y_1+y_2)/2+h/2} (x−y_1) ρ_μ(x) dx − ∫_{y_1}^{y_1+h} (x−y_1) ρ_μ(x) dx).

Owing again to Lebesgue's theorem, we have

∂_{y_1}B(y_1,y_2) = −(F_μ((y_1+y_2)/2) − F_μ(y_1)) + ((y_2−y_1)/4) ρ_μ((y_1+y_2)/2).

Finally, we compute ∂_{y_1}C(y_1,y_2). We have

(1/h)(∫_{(y_1+y_2)/2+h/2}^{y_2} (y_2−x) ρ_μ(x) dx − ∫_{(y_1+y_2)/2}^{y_2} (y_2−x) ρ_μ(x) dx) = −(1/h)∫_{(y_1+y_2)/2}^{(y_1+y_2)/2+h/2} (y_2−x) ρ_μ(x) dx.

Owing again to Lebesgue's theorem, ∂_{y_1}C(y_1,y_2) = −((y_2−y_1)/4) ρ_μ((y_1+y_2)/2). Putting everything together, we infer

∂_{y_1}𝒲(y_1,y_2) = 2F_μ(y_1) − F_μ((y_1+y_2)/2).

Through a similar argument, we obtain ∂_{y_2}𝒲(y_1,y_2) = 2F_μ(y_2) − 1 − F_μ((y_1+y_2)/2), which concludes the first part of the proof. Since μ is absolutely continuous, F_μ is differentiable, thus 𝒲 is twice differentiable. Finally, due to the properties of the cumulative distribution functions of symmetric measures, we infer that ∇𝒲(F_μ^{[-1]}(0.25), F_μ^{[-1]}(0.75)) = (0,0).

Now that we have an explicit formula for the gradient of 𝒲, we need to express the set of feasible y⃗. From Theorem <ref>, we have p_2 ≤ q_1 and 1−p_1 ≤ q_2, i.e. 1−q_2 ≤ p_1. Since F_μ^{[-1]} is monotone, we have y_2 = F_μ^{[-1]}(p_2) ≤ F_μ^{[-1]}(q_1) and F_μ^{[-1]}(1−q_2) ≤ y_1. Therefore, the set of feasible y⃗ lies in a triangle, namely T(q⃗), whose vertices are (F_μ^{[-1]}(1−q_2), F_μ^{[-1]}(q_1)), (F_μ^{[-1]}(q_1), F_μ^{[-1]}(q_1)), and (F_μ^{[-1]}(1−q_2), F_μ^{[-1]}(1−q_2)).
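The gradient formula can be sanity-checked numerically; the sketch below (our own, taking μ = 𝒩(0,1) as an example of a symmetric measure) compares the closed form against central finite differences of 𝒲.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def Wcal(y1, y2):   # objective: cost of serving each side by its facility
    mid = 0.5 * (y1 + y2)
    a = quad(lambda x: abs(x - y1) * norm.pdf(x), -10, mid)[0]
    b = quad(lambda x: abs(x - y2) * norm.pdf(x), mid, 10)[0]
    return a + b

y1, y2, h = -0.4, 0.9, 1e-5
grad_formula = (2 * norm.cdf(y1) - norm.cdf((y1 + y2) / 2),
                2 * norm.cdf(y2) - 1 - norm.cdf((y1 + y2) / 2))
grad_numeric = ((Wcal(y1 + h, y2) - Wcal(y1 - h, y2)) / (2 * h),
                (Wcal(y1, y2 + h) - Wcal(y1, y2 - h)) / (2 * h))
print(grad_formula, grad_numeric)   # the two pairs should agree
```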
Asymmetric probability distributions μ. Finally, we want to stress that the procedure outlined above is applicable also to non-symmetric measures, as long as the two facilities have the same capacity. Indeed, in this case the permutation we choose is irrelevant, since the facilities have the same capacity and are thus interchangeable. We can still compute the derivatives of the objective function and retrieve the optimal p⃗ based on how the set T(q⃗) intersects the set of points at which the derivatives of 𝒲 vanish. Of course, in this case the point (F_μ^{[-1]}(0.25), F_μ^{[-1]}(0.75)) is no longer necessarily a point at which the gradient ∇𝒲 vanishes.

§.§ Optimal Extended Ranking Mechanisms in the Generic Framework

Lastly, we propose a sufficient and necessary condition that guarantees the existence of an optimal ERM whose limit Bayesian approximation ratio converges to 1, applicable regardless of m and μ.

Given μ and q⃗, let (π,ν_m) be a solution to Problem (<ref>). We denote by {y_j}_{j∈[m]} the points in the support of ν_m ordered non-decreasingly, and set y_0=−∞ and y_{m+1}=+∞. Finally, we define ζ_j = F_μ(y_{j+1}) − F_μ(y_{j−1}) and denote by π̂∈𝒮_m a permutation that orders the values ζ_j non-increasingly. Then, the optimal ERM has limit Bayesian approximation ratio equal to 1 if and only if ζ_{π̂(j)} ≤ q_j for every j∈[m].

Let (σ,ν_m) be a solution to (<ref>). Since ν_m∈𝒫_m(ℝ), we have spt(ν_m) = {y_j}_{j∈[m]}. Let us define p_j = F_μ(y_j). Furthermore, let us fix γ = π̂^{−1}∈𝒮_m. By definition of p_j and by hypothesis, we have p_{j+1} − p_{j−1} = F_μ(y_{j+1}) − F_μ(y_{j−1}) ≤ q_{γ(j)} for every j∈[m], hence ERM^(γ,p⃗) is feasible (Theorem <ref>). Owing to Theorem <ref>, the asymptotic expected Social Cost of ERM^(γ,p⃗) converges to W_1(μ,ν_p⃗), where ν_p⃗ is a solution to

min_{λ_j ≤ q_{γ(j)}, ∑_{j∈[m]} λ_j = 1} W_1(μ, ∑_{j∈[m]} λ_j δ_{y_j}),

since, by definition, p_j = F_μ(y_j). To conclude, we need to show that (γ,ν_p⃗) is a solution to (<ref>). Due to the optimality of σ∈𝒮_m, it suffices to show that min_{ζ∈𝒫_{σ,q⃗}(ℝ)} W_1(μ,ζ) ≥ W_1(μ,ν_p⃗). First, let us consider the following minimization problem:

min_{∑_{j∈[m]} λ_j = 1} W_1(μ, ∑_{j∈[m]} λ_j δ_{y_j}).

Notice that the only difference between problem (<ref>) and problem (<ref>) is the set of constraints λ_j ≤ q_{σ(j)}. Let us now consider the probability measure ν = ∑_{j∈[m]} (F_μ(z_j) − F_μ(z_{j−1})) δ_{y_j}, where z_j = (y_j+y_{j+1})/2 for every j∈[m−1], z_0=−∞, and z_m=+∞. Since the optimal transportation plan between two measures on a line is monotone (see Theorem 2.9, <cit.>), we infer

W_1(μ,ν) = ∫_ℝ min_{j∈[m]} |x−y_j| dμ,

thus ν is a minimizer of problem (<ref>). Finally, since F_μ is monotone increasing, it holds F_μ(z_j) − F_μ(z_{j−1}) ≤ F_μ(y_{j+1}) − F_μ(y_{j−1}) ≤ q_{γ(j)}. In particular, ν is feasible for problem (<ref>), thus W_1(μ,ν) = W_1(μ,ν_p⃗). Likewise, ν_m is feasible for problem (<ref>), thus W_1(μ,ν_p⃗) = W_1(μ,ν) ≤ W_1(μ,ν_m), which concludes the proof.
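The ζ-condition above is easy to evaluate once the support of an (unconstrained) optimum is known. The sketch below is our own; for 𝒰[0,1] with m=2, the unconstrained optimum sits at (0.25, 0.75), and we read the condition as pairing the sorted ζ values against the sorted capacities, which is one natural interpretation of the indexing in the statement.

```python
import numpy as np

def ratio_one_possible(q, y, F):
    """Check zeta_j = F(y_{j+1}) - F(y_{j-1}) (y_0 = -inf, y_{m+1} = +inf)
    against the capacities, both sorted non-increasingly."""
    y = np.asarray(y, dtype=float)
    Fy = np.concatenate(([0.0], F(y), [1.0]))
    zeta = Fy[2:] - Fy[:-2]
    return bool(np.all(np.sort(zeta)[::-1] <= np.sort(q)[::-1]))

F_unif = lambda y: np.clip(y, 0, 1)        # cdf of U[0,1]
print(ratio_one_possible([0.85, 0.75], [0.25, 0.75], F_unif))  # True: ratio 1
print(ratio_one_possible([0.80, 0.40], [0.25, 0.75], F_unif))  # False (Example)
```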
§ THE BAYESIAN ANALYSIS OF THE INNERPOINT MECHANISM AND THE EXTENDED ENDPOINT MECHANISM

In this appendix, we report the Bayesian analysis of other known mechanisms. In particular, we consider the InnerPoint Mechanism and the Extended Endpoint Mechanism, both introduced in <cit.>. The study of the InnerPoint Mechanism can easily be generalized to include all the Ranking Mechanisms that place the two facilities at two different points.

§.§ The Bayesian approximation ratio of the InnerPoint Mechanism

Consider now the 2-CFLP in which we have an even number of agents, thus n=2k, and the two facilities have the same capacity, so that q⃗=(0.5,0.5). In this context, the InnerPoint Mechanism (IM) places the two facilities at y_1 and y_2, where y_1 is the k-th agent report from the left and y_2 is the (k+1)-th agent report from the left. Every agent whose report is to the left of y_1 is assigned to the facility at y_1, the others to y_2. Since the IM works only on instances with an even number of agents, the limit is taken as k→∞. It is easy to see that, for every given k∈ℕ, the IM can be written as a Ranking Mechanism: if we set p⃗=(0.5−1/(2k), 0.5) and π=Id,[Where Id is the identity permutation, that is π(i)=i for every i∈[n].] the output of the IM and the output of the ERM associated with p⃗ and π coincide on every instance. However, the vector p⃗ determining the output of the ERM depends on k, thus we cannot directly apply Theorem <ref> to infer the limit Bayesian approximation ratio of the IM. To overcome this issue, we consider the Ranking Mechanism induced by the vector p⃗=(0.5,0.5).[Since the two facilities have the same capacity, the mechanism does not depend on the permutation we choose, thus we omit it for the sake of simplicity.] Indeed, for every k∈ℕ and x⃗∈ℝ^{2k}, it holds SC_IM(x⃗) ≤ SC_{RM^(Id,p⃗)}(x⃗), hence

lim_{k→∞} 𝔼[SC_IM(X⃗)]/𝔼[SC_opt(X⃗)] ≤ lim_{n→∞} 𝔼[SC_{Id,p⃗}(X⃗)]/𝔼[SC_opt(X⃗)] = W_1(μ,δ_{med(μ)})/W_1(μ,ν_m),

where ν_m is a solution to problem (<ref>). Notice that the inequality in (<ref>) allows us to conclude that the Bayesian approximation ratio of the IM is finite. We now prove the other inequality. Notice that if k>10 and ϵ∈(0,0.1), the following holds:

SC_IM(x⃗) ≥ SC_{ERM^(Id,p⃗_ϵ)}(x⃗),

where p⃗_ϵ = (0.5−ϵ, 0.5). Notice that the mechanism ERM^(Id,p⃗_ϵ) is not feasible for q⃗=(0.5,0.5), but it is feasible if we consider q⃗_ϵ = (0.5+ϵ, 0.5+ϵ). Indeed, for q⃗_ϵ the routine of the IM is still well defined, and the cost of the output of the IM does not depend on which q⃗ we consider, thus the inequality holds. If we take the average and the limit for k→∞ on both sides, we infer from Theorem <ref> that

lim_{k→∞} 𝔼[SC_IM(X⃗)]/𝔼[SC_opt(X⃗)] ≥ lim_{n→∞} 𝔼[SC_{Id,p⃗_ϵ}(X⃗)]/𝔼[SC_opt(X⃗)] = W_1(μ,η_ϵ)/W_1(μ,ν_m),

where η_ϵ = λ_ϵ δ_{F_μ^{[-1]}(0.5−ϵ)} + (1−λ_ϵ) δ_{med(μ)} and

λ_ϵ = F_μ((F_μ^{[-1]}(0.5−ϵ) + F_μ^{[-1]}(0.5))/2).

Since the measure η_ϵ converges weakly to δ_{med(μ)} when ϵ→0, we infer that

lim_{k→∞} 𝔼[SC_IM(X⃗)]/𝔼[SC_opt(X⃗)] ≥ W_1(μ,δ_{med(μ)})/W_1(μ,ν_m),

hence the Bayesian approximation ratio of the IM is W_1(μ,δ_{med(μ)})/W_1(μ,ν_m), where ν_m is the solution to problem (<ref>). By a similar argument, we can show that the InnerChoice Mechanism proposed in <cit.> and any truthful Ranking Mechanism have a bounded limit Bayesian approximation ratio; in particular, the InnerChoice Mechanism has the same limit Bayesian approximation ratio as the IM.

§.§ The Extended Endpoint Mechanism

Given a p.c.v. q⃗, the Extended Endpoint Mechanism (EEM) is a mechanism proposed in <cit.> that can handle any 2-CFLP. In our formalism, the routine of the EEM is as follows. Let x⃗ be a vector containing the agents' reports; without loss of generality, we assume that x⃗ is ordered non-decreasingly, i.e. x_i ≤ x_{i+1}.
We define A_1 = {x_i : |x_i−x_1| ≤ (1/2)|x_1−x_n|} and A_2 = {x_i : x_i∉A_1}. If |A_1| ≥ |A_2|, the EEM determines the positions of the two facilities as follows:

* If |A_1| ≤ ⌈q_1(n−1)⌉+1 and |A_2| ≤ ⌈q_2(n−1)⌉+1, we set y_1=x_1 and y_2=x_n; we place the facility with capacity q_1 at y_1 and the facility with capacity q_2 at y_2.
* If |A_1| > ⌈q_1(n−1)⌉+1 and |A_2| ≤ ⌈q_2(n−1)⌉+1, we set y_1 = 2x_{⌈q_1(n−1)⌉+2} − x_n and y_2=x_n; we place the facility with capacity q_1 at y_1 and the facility with capacity q_2 at y_2.
* If |A_1| ≤ ⌈q_1(n−1)⌉+1 and |A_2| > ⌈q_2(n−1)⌉+1, we set y_1=x_1 and y_2 = 2x_{n−(⌈q_2(n−1)⌉+1)} − x_1; we place the facility with capacity q_1 at y_1 and the facility with capacity q_2 at y_2.

In all of the cases above, every agent is assigned to the facility that is closest to its position (ties are broken arbitrarily, without overloading any facility). Finally, if |A_1| < |A_2|, it suffices to switch the roles of the two facilities in the cases described above.
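A compact sketch of this placement rule is given below (our own; it covers only the |A_1| ≥ |A_2| branch, with the symmetric branch obtained by swapping the roles of the facilities as stated above).

```python
import numpy as np

def eem_locations(x, q1, q2):
    """EEM placement, |A1| >= |A2| branch: returns (position of the
    capacity-q1 facility, position of the capacity-q2 facility)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    a1 = int(np.sum(np.abs(x - x[0]) <= 0.5 * (x[-1] - x[0])))
    a2 = n - a1
    c1 = int(np.ceil(q1 * (n - 1))) + 1          # capacity of facility 1
    c2 = int(np.ceil(q2 * (n - 1))) + 1          # capacity of facility 2
    if a1 <= c1 and a2 <= c2:                    # case 1: both endpoints
        return x[0], x[-1]
    if a1 > c1 and a2 <= c2:                     # case 2: reflect around x_{c1+1}
        return 2 * x[c1] - x[-1], x[-1]          # 1-based index c1+1
    return x[0], 2 * x[n - c2 - 1] - x[0]        # case 3 (a2 > c2)

rng = np.random.default_rng(2)
print(eem_locations(rng.uniform(size=101), 0.8, 0.4))   # ~ (0, 1.2)
```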
We now study the limit Bayesian approximation ratio of the EEM when the agents are distributed according to a uniform distribution and q⃗=(0.8,0.4). For the sake of simplicity, we consider only an odd number of agents, n=2k+1. First, we notice that we can restrict our attention to the class of instances in which |A_1|>|A_2|: due to the symmetry of the uniform distribution, every instance for which |A_1|>|A_2| holds uniquely identifies an instance for which |A_1|<|A_2| holds.[Since n is odd, we cannot have |A_1|=|A_2|.] Let us consider the expected Social Cost over the instances for which |A_1|>|A_2| holds, whose set we denote by 𝒜. Using the properties of expected values, we can compute the expected Social Cost of the EEM as the sum of three conditional expectations:

𝔼[SC_{π,p⃗}(X⃗)] = 𝔼[SC_{π,p⃗}(X⃗)|X⃗∈C_1] P(X⃗∈C_1) + 𝔼[SC_{π,p⃗}(X⃗)|X⃗∈C_2] P(X⃗∈C_2) + 𝔼[SC_{π,p⃗}(X⃗)|X⃗∈C_3] P(X⃗∈C_3),

where (i) C_1 contains all the instances for which |A_1| ≤ ⌈q_1(n−1)⌉+1 and |A_2| ≤ ⌈q_2(n−1)⌉+1; (ii) C_2 contains all the instances for which |A_1| > ⌈q_1(n−1)⌉+1 and |A_2| ≤ ⌈q_2(n−1)⌉+1; and (iii) C_3 contains all the instances for which |A_1| ≤ ⌈q_1(n−1)⌉+1 and |A_2| > ⌈q_2(n−1)⌉+1. Notice that 𝒜 = C_1∪C_2∪C_3 and that the C_i are pairwise disjoint; hence, since we restrict our attention to the instances in 𝒜, we have P(X⃗∈C_3) = 1 − P(X⃗∈C_1) − P(X⃗∈C_2). We first show that both P(X⃗∈C_1) and P(X⃗∈C_2) go to 0 as the number of agents goes to infinity, and then show that lim_{n→∞} 𝔼[SC_{π,p⃗}(X⃗)|X⃗∈C_3] = W_1(μ, (1−q_2)δ_0 + q_2δ_{1.2}).

First, let us consider P(X⃗∈C_1). Notice that

P(X⃗∈C_1) = P(|A_1| ≤ ⌈q_1(n−1)⌉+1, |A_2| ≤ ⌈q_2(n−1)⌉+1) ≤ P(|A_2| ≤ ⌈q_2(n−1)⌉+1) ≤ P(X_{⌈0.55(n−1)⌉+1} ≤ (X_1+X_n)/2),

where the last inequality comes from the fact that if |A_2| ≤ ⌈q_2(n−1)⌉+1, then it must be that X_{⌈0.55(n−1)⌉+1} ≤ (X_1+X_n)/2, since q_2=0.4. Let us now fix ϵ>0 and set r = ⌈0.55(n−1)⌉+1; then we have

P(X⃗∈C_1) ≤ P(X_r ≤ (X_1+X_n)/2)
 = P(X_r ≤ (X_1+X_n)/2 | X_1≤ϵ, X_n≥1−ϵ) P(X_1≤ϵ, X_n≥1−ϵ) + P(X_r ≤ (X_1+X_n)/2 | X_1≥ϵ, X_n≤1−ϵ) P(X_1≥ϵ, X_n≤1−ϵ)
 ≤ P(X_r ≤ (1+ϵ)/2)(P(X_1≤ϵ)+P(X_n≥1−ϵ)) + P(X_1≥ϵ) + P(X_n≤1−ϵ),

since P(X_r ≤ (X_1+X_n)/2 | X_1≥ϵ, X_n≤1−ϵ) ≤ 1. Notice that the cumulative distribution function of the n-th order statistic of n uniformly distributed random variables is t → (F_μ(t))^n = t^n, hence P(X_n ≤ 1−ϵ) = (1−ϵ)^n and lim_{n→∞} P(X_n ≤ 1−ϵ) = 0. Likewise, we have lim_{n→∞} P(X_1 ≥ ϵ) = 0. Lastly, we need to show that P(X_r ≤ (1+ϵ)/2) goes to 0 as n goes to infinity. It is well known that, for a suitable constant K, K√n(X_{⌈0.55(n−1)⌉+1} − 0.55) converges in distribution to a standard Gaussian Z∼𝒩(0,1). Given ϵ<0.05, we have

P(X_r ≤ (1+ϵ)/2) = P(K√n(X_{⌈0.55(n−1)⌉+1} − 0.55) ≤ K√n((1+ϵ)/2 − 0.55)).

Thus

P(X_r ≤ (1+ϵ)/2) ≤ |P(X_r ≤ (1+ϵ)/2) − P(Z ≤ K√n((1+ϵ)/2 − 0.55))| + P(Z ≤ K√n((1+ϵ)/2 − 0.55)).

Since (1+ϵ)/2 − 0.55 < 0, we have lim_{n→∞} P(Z ≤ K√n((1+ϵ)/2 − 0.55)) = 0. Finally, by the convergence in distribution of X_r (see <cit.>), we have

lim_{n→∞} |P(X_r ≤ (1+ϵ)/2) − P(Z ≤ K√n((1+ϵ)/2 − 0.55))| = 0,

which allows us to conclude that lim_{n→∞} P(X⃗∈C_1) = 0. Similarly, lim_{n→∞} P(X⃗∈C_2) = 0; we thus infer lim_{n→∞} P(X⃗∈C_3) = 1.

Lastly, we need to show that lim_{n→∞} 𝔼[SC_{π,p⃗}(X⃗)|X⃗∈C_3] = W_1(μ, (1−q_2)δ_0 + q_2δ_{1.2}). Notice that if x⃗∈C_3, the EEM places a facility at x_1 and the other at 2x_{n−(⌈q_2(n−1)⌉+2)} − x_1; the first n−(⌈q_2(n−1)⌉+1) agents are assigned to the facility placed at x_1, the others to the facility at 2x_{n−(⌈q_2(n−1)⌉+2)} − x_1. Since every agent is assigned to its closest facility, we can adapt the argument used to prove Lemma <ref> and infer that

SC_EEM(x⃗) = W_1(μ_n, (1−q_2)δ_{x_1} + q_2δ_{2x_{n−(⌈q_2(n−1)⌉+2)} − x_1}).

By the same argument used in the proof of Theorem <ref>, it is possible to show that

lim_{n→∞} W_1(μ_n, (1−q_2)δ_{x_1} + q_2δ_{2x_{n−(⌈q_2(n−1)⌉+2)} − x_1}) = W_1(μ, (1−q_2)δ_0 + q_2δ_{1.2}),

which concludes the proof.

It is important to emphasize that the examination of the limit Bayesian approximation ratio of the EEM is feasible due to the compact support of the uniform distribution. Indeed, since every output of the EEM places one of the facilities either at x_1 or at x_n, a rigorous investigation of the limit is only possible if we can ensure the convergence of at least one between the first and the last order statistic. In particular, the study is impossible for measures with unbounded support. Consequently, any meaningful analytical investigation of the limit Bayesian approximation ratio of the EEM necessitates measures for which at least one side of the support is bounded.

§ ADDITIONAL EXPERIMENTAL RESULTS

In this Appendix, we report the values plotted in Figures <ref>, <ref>, and <ref>. In particular, Table <ref> reports the Bayesian approximation ratios for the balanced case, plotted in Figure <ref>. Table <ref> reports the values of the Bayesian approximation ratios for the unbalanced case, plotted in Figure <ref>. Finally, in Table <ref>, we report the values of the relative error of the Bayesian approximation ratios, plotted in Figure <ref>. | http://arxiv.org/abs/2312.16034v1 | {
"authors": [
"Gennaro Auricchio",
"Jie Zhang",
"Mengxiao Zhang"
],
"categories": [
"cs.GT",
"91B03 90B06"
],
"primary_category": "cs.GT",
"published": "20231226125259",
"title": "Extended Ranking Mechanisms for the m-Capacitated Facility Location Problem in Bayesian Mechanism Design"
} |
Knowledge Enhanced Conditional Imputation for Healthcare Time-series

Linglong Qian [1], Zina Ibrahim [1], Hugh Logan Ellis [6], Ao Zhang [5], Yuezhou Zhang [1], Tao Wang [1], Richard JB Dobson [1,2]

[1] Department of Biostatistics and Health Informatics, King's College London, London, UK
[2] South London and Maudsley NHS Foundation Trust, United Kingdom
[3] University College London, London, UK
[4] Health Data Research UK, University College London, London, UK
[5] Wellcome Centre for Human Genetics, Nuffield Department of Medicine, University of Oxford, Oxford, UK
[6] Department of Medicine, King's College Hospital NHS Foundation Trust, London, UK

This study presents a novel approach to addressing the challenge of missing data in multivariate time series, with a particular focus on the complexities of healthcare data. Our Conditional Self-Attention Imputation (CSAI) model, grounded in a Transformer-based framework, introduces a conditional hidden state initialization tailored to the intricacies of medical time series data. This methodology diverges from traditional imputation techniques by specifically targeting the imbalance in missing data distribution, a crucial aspect often overlooked in healthcare datasets. By integrating advanced knowledge embedding and a non-uniform masking strategy, CSAI adeptly adjusts to the distinct patterns of missing data in Electronic Health Records (EHRs). This strategic approach not only boosts imputation accuracy but also aligns with the inherent temporal dynamics and feature interdependencies typical in clinical data. Our model's unique ability to capture subtle temporal relationships significantly improves data restoration efficiency. Extensive experimental evaluations demonstrate that CSAI outperforms existing methods, particularly in managing the complexities of missing data in healthcare time series. Furthermore, we have redefined the time series interpolation task, ensuring greater alignment with clinical demands. In our commitment to collaborative advancement, the code for CSAI will be made available at <https://github.com/LINGLONGQIAN/CSAI>, facilitating further research and application in the field of healthcare analytics.

§ INTRODUCTION

Multivariate time-series data, particularly from Electronic Health Records (EHRs), play a crucial role in predictive healthcare analytics <cit.>, and a plethora of models have been designed to produce patient-specific prognoses from EHR data <cit.>. However, EHR time-series are characterised by their complexity, and designing successful downstream predictive models requires tackling the abundance of non-random missingness of correlated variables recorded over time. Over 50% of EHR data is missing not at random, a result of the natural temporal irregularities in data acquisition and documentation that follow clinical and administrative decisions <cit.>; the frequency of, and gaps between, recordings of vital signs vary widely. For instance, while vital signs like heart rate are regularly monitored, data on white blood cell counts are often inconsistently recorded, reflecting the variable nature of clinical decision-making. This irregularity in data collection creates a complex landscape for imputation algorithms <cit.>.
The above challenges are further compounded by the fact that the clinical manifestations of the same disease can vary substantially, creating additional diversity in the data <cit.>, and by the fact that clinical variables and their missingness patterns tend to correlate over time. For instance, because hypertension is a known cause of kidney disease, EHR datasets generally show high correlations between the recording patterns of blood pressure and urine creatinine levels <cit.>. Retrospective studies have shown significant variation in missingness patterns across tasks, variables, and time in multi-centre medical data <cit.>. Traditional statistical and machine-learning imputation methods, which make strong assumptions about the data distributions, are therefore inadequate given the complexity of the task <cit.>. Recent advancements in deep learning have shown promise in addressing these challenges, offering more sophisticated approaches to imputing missing data in time-series <cit.>, while the integration of domain knowledge into AI models is not just beneficial but essential. It is crucial to remember that EHR data are primarily collected for clinical care and administrative purposes <cit.>, not specifically for research. Domain experts bring a deep understanding of care practices, patient data intricacies, and the nuances of data processing, which is invaluable for insightful and accurate analysis. AI models enhanced with this level of domain expertise are not only more adept at interpreting complex medical data but also align more closely with the practicalities and subtleties of patient care.

In this work, we introduce an innovative approach to the imputation of healthcare time-series data that are Missing Not At Random (MNAR), extending the capabilities of traditional Transformer-based models. Building upon the foundation laid by the BRITS <cit.> architecture, our methodology integrates a novel conditional hidden state initialization mechanism with a non-uniform masking strategy. This advancement is particularly focused on addressing the intricacies of temporal dynamics and feature interdependencies in time-series data.

§ RELATED WORK

Efforts to impute multivariate time-series data have resulted in numerous strategies; here we focus mainly on high-performing deep learning models. Among those, the GRUD model <cit.> incorporates temporal decay for missing data imputation, and its extensions, MRNN <cit.> and BRITS <cit.>, capture temporal dynamics and missingness patterns across multiple features utilizing bidirectional RNNs. However, the MRNN model's limitation lies in treating imputed values as constants without sufficient updates during iterations. In contrast, BRITS, free from specific data assumptions, has exhibited superior performance across domains. Stochasticity has been introduced into RNN models in GRUU <cit.>, into variational autoencoders in V-RIN <cit.>, and into generative adversarial networks in E²GAN <cit.>. However, all existing approaches require additional uncertainty modules added to the imputing network, which translates into unstable training due to increased coupling and leads to less accurate imputation <cit.>. CSDI <cit.> utilizes score-based diffusion models conditioned on observed data, explicitly trained for imputation and capable of exploiting correlations between observed values.
However, a notable concern with CSDI, as indicated in its repository, concerns data leakage issues similar to those in BRITS, potentially influencing the information inferred for test data. Transformers, initially gaining prominence in natural language processing and computer vision, have been adapted for time-series analysis with notable success <cit.>. The TransformerEncoder, a pivotal element of these architectures, excels in capturing long-range dependencies and complex temporal patterns, thanks to its self-attention mechanism. This mechanism dynamically assesses the significance of different points in a time-series, enabling a detailed understanding of temporal relationships. SAITS <cit.>, focusing on the MCAR (Missing Completely at Random) case, explores the capabilities of self-attention mechanisms in handling time-series imputation. However, despite their adaptability to sequential data and effectiveness in long-range dependency modeling, Transformers face challenges specific to time-series data. A critical limitation is their inherent permutation invariance due to the self-attention mechanism, which can result in the loss of crucial temporal information. Models like NRTSI <cit.> have been argued to compromise parallel computational efficiency through nested loops <cit.>. Addressing this issue typically involves integrating positional encodings, yet there is ongoing debate about their effectiveness in capturing fine-grained temporal nuances, particularly in scenarios with complex missing patterns and inter-feature relationships. RDIS <cit.> employs ensemble learning and multiple masking strategies to boost performance, an idea shared by M^3-BRITS <cit.>. Despite these advancements, a significant gap remains in integrating domain-specific knowledge, particularly from healthcare, which is crucial for interpreting and leveraging the temporal and relational intricacies of healthcare data. As highlighted by recent literature, EHR data are primarily collected for clinical care and administrative functions, not explicitly for research. This nature of EHR data necessitates the involvement of domain experts who understand the intricacies of care practices and the subtleties of healthcare data processing <cit.>. Incorporating this knowledge into AI models is crucial for accurately interpreting and managing the unique challenges of healthcare data, such as disparate missingness distributions. § TERMINOLOGY AND BACKGROUND §.§ Incomplete Multivariate time-series representation For a temporal interval observed over T discrete time-steps, we represent a multivariate time-series as a matrix X = {x_1, x_2, ..., x_T}, composed of T observations. Each observation, denoted by x_t ∈ ℝ^{1 × D}, is a vector of D features. It is crucial to note that ℝ^D is heterogeneous, encompassing structured data types that extend beyond purely numerical features. This configuration allows for a comprehensive representation of the diverse data elements inherent in complex multivariate time-series. Information related to missing values is encapsulated within two derived matrices (see Fig. <ref>). The mask matrix M ∈ ℝ^{T × D} indicates whether each element of X is observed or missing:

m_t^d =
  0,  if x_t^d is missing
  1,  otherwise

Additionally, given that the time elapsed between consecutive observations can vary across the interval, we denote the timestamp of each time step by s_t and the corresponding time gap by δ_t.
Given the potential for non-uniform sampling across features in the data X, there is a corresponding variability in δ_t. The matrix δ ∈ ℝ^{T × D}, with entries δ_t^d, encodes the time gap between two successive observed values of each feature d, providing an additional indicator of temporal context for the dataset. The definition of this indicator is:

δ_t^d =
  s_t - s_{t-1} + δ_{t-1}^d,  if t > 1 and m_t^d = 0
  s_t - s_{t-1},              if t > 1 and m_t^d = 1
  0,                          if t = 1

§.§ Task definition and Implementation Bias Much existing work builds on the same models, or even the same implementations, allowing errors to accumulate and performance to be compared unfairly. This motivates us to re-formulate the tasks, point out the potential risks, and correct them. §.§.§ Data Leakage Risk Data normalization before dataset splitting, a common practice in models like BRITS <cit.> and CSDI <cit.>, poses a significant risk of data leakage. This process often involves normalizing the entire dataset before dividing it into training, validation, and test subsets, leading to the inadvertent inclusion of validation/test set distribution information in the training dataset. Such normalization practices may result in misleadingly high performance metrics, necessitating a re-evaluation of data preprocessing methods. The entire dataset X is normalized as follows before splitting into training, validation, and test sets:

X_norm = (X - μ(X))/σ(X)

where μ(X) and σ(X) are the mean and standard deviation of the dataset X, respectively. This process leads to data leakage, as the distribution characteristics of the entire dataset are used. To prevent data leakage, normalization should instead be performed after splitting the dataset into training X_Tr, validation X_Va, and test X_Te sets. To mitigate the data leakage risk and ensure more consistent model training, the validation and test sets should ideally be normalized using the mean and standard deviation of the training set. This approach promotes smoother training by maintaining a uniform data distribution, although normalization using each set's own distribution is also viable. §.§.§ Task Risk The definition of learning tasks in time-series imputation, as exemplified by SAITS <cit.> with its MIT and ORT, can be overly complex. Simplifying task definitions to focus on self-supervised training with diverse artificial masks could enhance model clarity. Different masking strategies in the training set, as seen in models like BRITS and CSDI, influence the amount and quality of observation data available for learning. A clear, uniform approach to task definition is necessary for fair comparison and effective model training. §.§.§ Inadequate Masking Risk Manual masking, a crucial step in time-series imputation models, often leads to discrepancies in mask probability. The conversion from mask probability to a number of masked cells is not always equivalent, causing the actual mask probability to be lower than expected. This results in an excess of observations in training, potentially diminishing the model's actual performance compared to its reported effectiveness under a given missing rate.
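To make the preceding risks concrete, the following is a minimal sketch (not the released implementation; array and function names are illustrative) of leakage-free normalization and of an exact conversion from masking probability to a number of masked cells:

import numpy as np

def normalize_with_train_stats(x_train, x_val, x_test):
    # Statistics come from the training split only, so no information
    # about the validation/test distributions leaks into training.
    mu = np.nanmean(x_train, axis=(0, 1))
    sd = np.nanstd(x_train, axis=(0, 1)) + 1e-8
    return (x_train - mu) / sd, (x_val - mu) / sd, (x_test - mu) / sd

def exact_artificial_mask(x, rate, rng):
    # Convert the masking probability into an exact number of cells drawn
    # from the observed entries, so the realised masking ratio matches the
    # nominal one instead of undershooting it.
    observed = np.flatnonzero(~np.isnan(x))
    hit = rng.choice(observed, size=int(round(rate * observed.size)),
                     replace=False)
    mask = (~np.isnan(x)).ravel()
    mask[hit] = False                # False = artificially masked cell
    return mask.reshape(x.shape)

Fitting the statistics on the training split alone, and counting masked cells exactly, removes the two sources of inflated performance described above.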
§.§ Overview of the BRITS Backbone The BRITS <cit.> architecture, combining a fully-connected regression module and a recurrent component, applies temporal decay (Eq. (<ref>)) and a decay factor to handle temporal correlations and adjust influence based on temporal distance. Missing values within an observation x_t are managed via a historical representation x̂_t and a masking vector m_t, producing a complement vector x_t^hc that accounts for missingness patterns (Eq. (<ref>)-(<ref>)):

h_0 = 0
γ_th = exp(-max(0, W_{γh} δ_t + b_{γh}))
ĥ_{t-1} = h_{t-1} ⊙ γ_th
x̂_t = W_x ĥ_{t-1} + b_x
x_t^hc = m_t ⊙ x_t + (1-m_t) ⊙ x̂_t

BRITS explores intra-observation correlations through a fully-connected layer, generating x_t^fc, a feature-wise approximation of missing values (Eq. (<ref>)). The concept of decay extends to feature space, resulting in a learnable factor, β_t, considering both temporal decay and the masking vector (Eq. (<ref>)-(<ref>)). This integration produces the imputed matrix C_t, effectively combining observed and imputed data (Eq. (<ref>)-(<ref>)):

x_t^fc = W_z x_t^hc + b_z
γ_tf = exp(-max(0, W_{γf} δ_t + b_{γf}))
β_t = σ(W_β [γ_tf ⊙ m_t] + b_β)
x_t^c = β_t ⊙ x_t^fc + (1-β_t) ⊙ x_t^hc
C_t = m_t ⊙ x_t + (1-m_t) ⊙ x_t^c
h_t = σ(W_t ĥ_{t-1} + U_h [C_t ⊙ m_t] + b_h)

The final step (Eq. (<ref>)) updates the hidden state via Recurrent Neural Networks (RNNs), leveraging various indicators to learn functions of past observations. The bidirectional recurrent dynamics approach integrates backward information to tackle slow convergence. In essence, BRITS exploits temporal and feature correlations in multivariate time-series data, employing decay factors, a regression module, and a bidirectional RNN for imputing missing values. The final hidden states are updated using imputations and corresponding masks, with the integrated processes visualized in Figure <ref>. § METHODOLOGY In this section, we introduce a novel approach: Transformer-based Conditional Hidden State Initialization, along with Non-Uniform Masking. This method is an extension and enhancement of the BRITS architecture, tailored to address specific challenges in healthcare time-series data imputation, as illustrated in Figure <ref>. §.§ Non-Uniform Masking Strategy Our non-uniform masking strategy fundamentally diverges from traditional approaches by leveraging the inherent missing distribution characteristic of each feature. This strategy is predicated on the principle that the probability of missingness in healthcare data is not uniformly distributed across all features. Instead, it varies based on specific healthcare parameters and patient conditions. By incorporating this variability into our masking process, we aim to create a more realistic and representative model of the missing data patterns encountered in healthcare settings. The core of our algorithm involves generating non-uniform masking probabilities, guided by regulatory factors that account for the unique missing distribution of each feature. Another essential aspect is the manual adjustment factor, designed to fine-tune the masking process. This factor is adaptively set based on the observation frequency per feature, allowing for a tailored masking strategy. This adaptive approach strikes a balance between avoiding overfitting in sparse data scenarios and ensuring sufficient learning signals in data-rich contexts. It is designed to optimize the model's ability to discern underlying patterns and relationships, enhancing its predictive accuracy and generalization across diverse healthcare data patterns. For a given feature d, the non-uniform masking probability P_nu(d) is determined as follows:

Q_mask(d | U, I) = F(d, U, I)
P_nu(d) = Q_mask(d) × P_dist(d)

Where: * U and I are the pre-defined parameters. * Q_mask(d|U,I) is the regulatory factor for feature d, conditioned on U and I. * P_dist(d) represents the probability distribution of missingness for feature d. The overall masking proportion is then adjusted to ensure consistency with the uniform masking rate U, while retaining the non-uniform characteristics of the individual features. The pseudocode for this algorithm is as follows:
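(The original pseudocode block did not survive extraction, so the following is a sketch reconstructing the algorithm from the description above; the specific form of the regulatory factor Q_mask and the renormalization step are our assumptions.)

import numpy as np

def non_uniform_mask_probs(x, uniform_rate, adjustment):
    # x: (N, T, D) array with NaNs at missing entries.
    observed = ~np.isnan(x)
    p_dist = 1.0 - observed.mean(axis=(0, 1))   # P_dist: missingness per feature
    # Assumed regulatory factor Q_mask(d | U, I): temper P_dist by the
    # observation frequency, scaled by the manual adjustment factor I.
    q_mask = 1.0 + adjustment * observed.mean(axis=(0, 1))
    p_nu = q_mask * p_dist
    # Rescale so the mean masking proportion matches the uniform rate U
    # while keeping the per-feature, non-uniform shape.
    p_nu *= uniform_rate / (p_nu.mean() + 1e-12)
    return np.clip(p_nu, 0.0, 1.0)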
§.§ Conditional Knowledge Embedding In healthcare practice, much like the observer effect in physics, the process of recording medical data can have a significant impact on the patient's condition, especially in the case of invasive tests. This parallel underscores the critical nature of timing in healthcare measurements, where the relevance of historical observations is often contingent on the elapsed time since their occurrence. The last observation of a patient's status and the median time gap between these observations are pivotal in accurately understanding the patient's current condition. Careful timing, akin to precise measurement in physics, is essential to minimize the impact on the patient and to ensure the accuracy of the data collected. In response to this healthcare necessity, we introduce a decay attention mechanism that dynamically adjusts the attention weight based on the time gap between observations. This approach, informed by healthcare domain knowledge, recognizes the crucial role of temporal proximity in the diagnostic value of data points. By prioritizing recent observations and accounting for the natural variability in healthcare data collection, our decay attention method aligns with the realities of healthcare practice, enhancing the predictive capabilities of time-series analysis in healthcare settings. Decay Function Formulation In contrast to the BRITS architecture, our decay attention function A(δ_t) introduces a more nuanced approach. It represents the attention weight assigned to an observation at time t, and it diminishes as the time gap δ_t grows. The function is expressed as:

A(δ_t) = exp(-α(δ_t))

where α denotes a decay rate parameter or a learnable neural network that modulates how rapidly the attention weight decreases with time. This decay attention mechanism effectively encapsulates the healthcare intuition that more recent observations carry greater diagnostic value. By embedding this temporal knowledge into the neural network, the model becomes more aligned with healthcare realities, thereby enhancing its predictive capabilities in healthcare applications. Integration of Median Time Gap To enhance the model's healthcare applicability, we have integrated the median time gap τ for a specific healthcare feature. This integration leads to a tailored adjustment, giving more weight to observations closer to the median time gap and less to those further away. The adjusted function is formulated as:

A_adjusted(δ_t, τ) = exp(-α(δ_t - τ))

This ensures that the attention peaks when the time gap δ_t closely approximates the median τ, and declines as the discrepancy between δ_t and τ widens. Challenges in Implementation Implementing the decay attention model within a neural network framework presents significant challenges, particularly in maintaining consistency between the signs of the input and output values following nonlinear transformations. Avoiding direct application of absolute value functions on the output is critical, as such operations can interfere with the network’s training dynamics by disrupting the gradient flow. This, in turn, could lead to unstable or ineffective learning processes during model training.
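As a concrete illustration (a sketch, not the released implementation), the adjusted decay attention can be parameterised with a small learnable transform standing in for α; taking the absolute gap |δ_t − τ| inside the exponent is our assumption for realising the peak at the median gap described above:

import torch
import torch.nn as nn

class DecayAttention(nn.Module):
    """Temporal decay weights A_adjusted(delta, tau) peaking near the median gap."""
    def __init__(self, n_features):
        super().__init__()
        # alpha modelled as a learnable per-feature transform; a fixed
        # scalar decay rate would be the simpler alternative.
        self.alpha = nn.Linear(n_features, n_features)

    def forward(self, delta, tau):
        # delta: (batch, time, features); tau: (features,) median time gaps.
        gap = (delta - tau).abs()
        # ReLU keeps the decay exponent non-negative, so weights lie in (0, 1].
        return torch.exp(-torch.relu(self.alpha(gap)))

The weights are largest where δ_t ≈ τ and decay smoothly as the discrepancy grows, mirroring the BRITS-style decay exp(-max(0, Wδ + b)) while re-centring it on the median gap.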
§.§.§ Transformer-based hidden state initialization In the realm of time-series data analysis, BRITS (Bidirectional Recurrent Imputation for Time-Series) represents a significant advancement, leveraging recurrent neural networks to adeptly handle missing values in multivariate time-series. Its architecture, characterized by bidirectional dynamics and a novel application of decay factors, addresses the challenge of temporal correlations with a level of sophistication unmatched by traditional methods. However, despite its robustness, BRITS has certain limitations in capturing intricate temporal and feature correlations, particularly in highly dynamic environments. Our research extends the BRITS framework, focusing on conditional hidden state initialization. This enhancement allows for a more nuanced understanding of temporal dynamics, offering improved performance in scenarios where the temporal relationship between data points is especially critical. The result is a model that is acutely aware of the underlying temporal patterns in the data, which is particularly relevant in fields like healthcare, where the timing of events can be as crucial as the events themselves. Conditional Self-Attention Hidden State Initialization The cornerstone of our method lies in the initialization of hidden states. Unlike BRITS, which initializes hidden states with zeros, our approach leverages the last observed data point and a decay attention mechanism to generate the proper conditional hidden state distribution q(h_init | x_last_obs, A_adjusted(δ_t, τ)) aligned with the model distribution p_θ(x_t). This strategy is designed to provide a more contextually rich starting point for the model, thus enhancing the effectiveness of subsequent imputations. In contrast to the general decay in BRITS, where the decay factor is applied directly to the previous hidden state (Eqs. (<ref>) and (<ref>)), our approach uses the decay factor to modulate the attention mechanism. This modification allows for a more fine-grained and feature-specific understanding of temporal relationships within the data, enhancing the model's ability to adapt to varying temporal dynamics across different healthcare features. Input Projection and Positional Encoding At each time step s_t, the last observation x_last_obs ∈ ℝ^{N × T × D_feature} from the input time-series x_t undergoes a transformation along with A_adjusted(δ_t, τ) ∈ ℝ^{N × T × D_feature} from Eq. (<ref>). A Positional Encoding module and an Input Projection module are used to transform them into ℝ^{N × T × D_model}, defined as:

x'_last_obs = PosEncoder(InputProj(x_last_obs))
A'_adjusted(δ_t, τ) = PosEncoder(InputProj(A_adjusted(δ_t, τ)))

Transformer enhancement The concatenated input for the Transformer Encoder, C_in, is formed by combining the transformed last observation and decay attention, and is processed through multiheaded self-attention (MSA), layer normalization (LN), and feed-forward networks (FFN):

C_in = Concat(x'_last_obs, A'_adjusted(δ_t, τ))
C_out = LN(FFN(LN(MSA(C_in))))

Information Scaling to initialize Hidden State The enhanced output of the Transformer Encoder is scaled to the dimensions required by the hidden state h_init through alternating 1D convolutions:

H_1 = Conv1d_1(C_out; W_1, b_1)
h_init = Conv1d_2(H_1; W_2, b_2)

where Eq. (<ref>) transforms C_out from ℝ^{N × T × d_model} to ℝ^{N × T × d_hidden} and produces H_1, and Eq. (<ref>) further scales H_1 to ℝ^{N × 1 × d_hidden} to generate the initialized hidden state h_init.
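Putting the pieces together, a minimal sketch of the initialization path could look as follows (the layer sizes, the learned positional encoding, and the shared input projection are illustrative assumptions, not the released architecture):

import torch
import torch.nn as nn

class ConditionalHiddenStateInit(nn.Module):
    # Project the last observations and the decay-attention weights to
    # d_model, add a (learned) positional encoding, encode with a
    # TransformerEncoder, then scale to the RNN hidden size with two
    # alternating 1D convolutions.
    def __init__(self, n_features, seq_len, d_model=64, d_hidden=108, n_heads=4):
        super().__init__()
        self.in_proj = nn.Linear(n_features, d_model)
        self.pos = nn.Parameter(torch.zeros(1, 2 * seq_len, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=1)
        self.conv_feat = nn.Conv1d(d_model, d_hidden, kernel_size=1)
        self.conv_time = nn.Conv1d(2 * seq_len, 1, kernel_size=1)

    def forward(self, x_last_obs, attn):
        # x_last_obs, attn: (N, T, D); concatenating along the time axis
        # gives C_in of shape (N, 2T, d_model).
        c = torch.cat([self.in_proj(x_last_obs), self.in_proj(attn)], dim=1)
        c = self.encoder(c + self.pos)           # C_out: (N, 2T, d_model)
        h = self.conv_feat(c.transpose(1, 2))    # (N, d_hidden, 2T)
        h = self.conv_time(h.transpose(1, 2))    # (N, 1, d_hidden)
        return h.squeeze(1)                      # h_init for the RNN

The two convolutions alternate over the feature and time axes, which is one way to realise the "alternating 1D convolution" scaling described above.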
§ EXPERIMENTS For the sake of the reproducibility of our results, we make our work publicly available to the community. Our data preprocessing scripts, model implementations, and hyperparameter search configurations are all available in the GitHub repository. In this study, we rigorously evaluate the performance of our Conditional Self-Attention Imputation (CSAI) model against a selection of state-of-the-art baselines on four real-world healthcare datasets. Our comparison includes an array of established models: BRITS, BRITS-GRU, GRUD, V-RIN(full), and MRNN. It is important to note that we specifically include BRITS-GRU as a baseline because the original BRITS model utilizes LSTM cells, while the other baselines employ GRU cells; this addition ensures a fair and comprehensive comparison across similar architectures. For each experimental setup, we select only the best-performing model from each baseline study for a more focused and meaningful comparison. A comparison with E^2GAN would have provided additional insights, especially since it was not quantitatively compared with BRITS in the original study; unfortunately, the publicly available implementation of E^2GAN is incompatible with our current setup, precluding a direct comparison of E^2GAN with the other models in our evaluation. §.§ Datasets Each of the four datasets chosen for experimental evaluation has a different data distribution; in particular, from the MIMIC-III database we extracted two different datasets to model different tasks. We reproduced the benchmarking papers for these public datasets, skipping the steps that remove all-NaN samples so as to retain the data with its original missingness. * eICU <cit.> is sampled from the eICU Collaborative Research Database, a multi-center database with anonymised hospital patient records. eICU is publicly available after registration, including completion of a training course in research with human subjects and signing of a responsible data use agreement [https://physionet.org/content/eicu-crd/2.0/]. We followed the only benchmarking extract available for eICU. * MIMIC-III[https://physionet.org/content/mimiciii/1.4/] <cit.> is an extensive, freely-available database of over 40,000 critical care patients. To complement our MIMIC experiments with heterogeneous feature types and data dimensionality, we followed two benchmark papers <cit.> and extracted two different datasets for the experiments: one with 14,188 samples and 89 variables, and one with 21,128 valid samples and 59 variables. * PhysioNet Challenge 2012 dataset. Predicting Mortality of ICU Patients: The PhysioNet/Computing in Cardiology Challenge 2012, a public medical benchmarking dataset provided by <cit.>, contains records of 4,000 48-hour ICU stays, allowing unbiased evaluations of different model performances: 3,997 samples with 35 variables. §.§ Implementation Details We trained our models on an HPC node with an NVIDIA A100 40GB GPU running Ubuntu 20.04.6 LTS (Focal Fossa), using Python 3.8.16. All package details can be found in our repository. For each of the four datasets, we created three versions by further masking 5%, 10% and 20% of cells in addition to the missingness already present in the original datasets. These masked cells have known ground truths and form the basis of our comparison of imputation performance.
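For reference, imputation error on the artificially masked cells is computed only where ground truth is known, e.g. (a sketch; names are illustrative):

import numpy as np

def masked_errors(x_true, x_imputed, eval_mask):
    # eval_mask is True exactly at the artificially masked cells,
    # whose ground-truth values are known.
    diff = np.abs(x_imputed[eval_mask] - x_true[eval_mask])
    mae = diff.mean()
    mre = diff.sum() / (np.abs(x_true[eval_mask]).sum() + 1e-8)
    return mae, mre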
We used the Adam optimizer for training and set the number of RNN hidden units to 108 for all models. The batch size is 64 for the PhysioNet dataset and 128 for the other datasets. To promote stable training, each dataset was normalized to have zero mean and unit variance. We randomly selected 10% of each dataset for validation and another 10% for testing, training the models on the remaining data. For the imputation task, we randomly masked 10% of observations in each dataset to serve as the ground truth, which was used as validation data. A 5-fold cross-validation method was implemented to evaluate the models. Imputation performance was gauged using the Mean Absolute Error (MAE) and Mean Relative Error (MRE). As for the non-uniform masking strategy, we conducted a detailed analysis; for a fair comparison with other works, we adopt this strategy only for the training set in all datasets. §.§ Experimental Results The consolidated analysis of both imputation and classification tasks, as reflected in the experimental results, underscores the strengths and areas for improvement of the Conditional Self-Attention Imputation (CSAI) model across various healthcare datasets. The performance of CSAI across different datasets and masking ratios demonstrates notable efficacy in the imputation task. As indicated in Table <ref>, CSAI consistently achieved the lowest Mean Absolute Error (MAE), outperforming traditional models like BRITS, BRITS-GRU, and others. This is particularly evident at lower masking ratios on both the eICU and MIMIC_59f datasets, suggesting that CSAI effectively leverages limited data to accurately impute missing values. This strength underscores the model's capability to integrate domain-specific knowledge into its architecture, adapting to the heterogeneity of medical time series data. Table <ref> reflects CSAI's robust classification capabilities, with competitive Area under the ROC Curve (AUC) scores across various masking ratios, demonstrating the model's overall effectiveness, with notable performance on the eICU, PhysioNet, and MIMIC_59f datasets. This superior performance, especially notable on the PhysioNet dataset, demonstrates CSAI's effectiveness in handling the complex task of predicting outcomes based on imputed data. However, on the MIMIC_89f dataset, CSAI slightly underperforms compared to BRITS-GRU. This not only underscores the strength of the BRITS framework, but also validates its selection as the foundational structure for the CSAI model. The slightly lower performance on the MIMIC_89f dataset could be due to various factors, such as the inherent complexity of the dataset or specific nuances in missing data patterns that might require additional model tuning. Our comprehensive experimental evaluation of CSAI across diverse healthcare datasets demonstrates its effectiveness in addressing both imputation and classification tasks. CSAI's adaptability to the complexities inherent in healthcare data, evidenced by its consistent performance across varying masking ratios, highlights its resilience to data sparsity. This is a significant advantage in real-world healthcare scenarios where incomplete data is a common challenge.
The integration of domain-specific knowledge and the implementation of sophisticated imputation strategies in CSAI significantly contribute to its robustness, emphasizing the necessity of tailored approaches in the realm of healthcare analytics. Despite its overall strong performance, there are areas where CSAI indicates room for improvement. On the MIMIC_89f dataset, CSAI slightly lags behind BRITS-GRU, suggesting that further model refinement is needed. This disparity opens up avenues for future research, particularly in enhancing CSAI's domain knowledge integration mechanisms and exploring the potential benefits of additional training data. Such refinements could improve CSAI's robustness and effectiveness across various healthcare datasets, each with its unique characteristics and challenges. §.§ Ablation Study §.§.§ Non-uniform masking strategy Comparison In this ablation study, we examine the efficacy of our proposed non-uniform masking strategy across different data partitions: training, validation, and test sets. To ensure a fair comparison, we implemented the same validation and test settings as in the existing literature, alongside our non-uniform masking variant. The different permutations examined include: * Train only: Non-uniform masking applied solely to the training set. * All: Non-uniform masking applied across all data sets. * None: A baseline with no non-uniform masking. * Val only: Non-uniform masking applied only to the validation set. * Val Test: Non-uniform masking applied to both the validation and test sets. * Test only: Non-uniform masking applied exclusively to the test set. We utilized the BRITS and BRITS-GRU models for this comparison. Our findings reveal that the "All" permutation yields the best performance, suggesting that our non-uniform masking strategy, when applied consistently across all data partitions, optimally leverages the data's distribution to improve learning. A consistent application of the strategy across training, validation, and test sets can effectively adjust the proportion of different features, accommodating their varied learning difficulties and improving the model's overall ability to handle the missingness inherent in medical time-series data. Table <ref> summarizes the performance metrics of each permutation, demonstrating the comparative improvements achieved with the "All" configuration. The observed performance gains validate our hypothesis that a non-uniform masking strategy enhances the model's capacity to handle the data's heterogeneous nature. §.§.§ Adjustment Factor Comparison The experimental results depicted in the table indicate a nuanced relationship between the adjustment factor in the non-uniform masking strategy and the performance on imputation and classification tasks. An optimal adjustment factor, found to be around 5, results in the lowest imputation error, suggesting that a balanced representation of features is crucial for accuracy. For classification, however, increasing the adjustment factor leads to higher errors and a marginal decline in AUC, highlighting that excessive weighting may not uniformly enhance performance across different machine-learning tasks. These findings reveal the delicate interplay between feature representation adjustments and task-specific model efficacy, underscoring the importance of calibration in the non-uniform masking approach for complex time series data.
§.§.§ Implementation Comparison The experimental results (Table <ref>) highlight a significant discrepancy between reported and actual performance due to the implementation nuances of masking strategies. For both the BRITS and BRITS-GRU models, the Mean Absolute Error (MAE) is notably lower under the 'Incorrect' implementation across all masking ratios compared to the 'Corrected' approach. This suggests that the initial, incorrect implementation artificially inflated the models' performance metrics. Specifically, the 'Incorrect' implementation under-represents the masking ratio, leading to less masked data during training and, consequently, higher apparent performance. Upon correction, the MAE values rise, reflecting a more accurate depiction of model efficacy in the presence of higher data sparsity. This discrepancy underscores the critical importance of meticulous implementation practices to avoid overestimating model capabilities in handling missing data within time series. We have attached the core of the implementation code in Appendix <ref>. § DISCUSSION AND CONCLUSIONS This study has introduced a novel approach to the imputation and classification of missing data in multivariate medical time series. CSAI has demonstrated its potential to significantly improve the accuracy and reliability of medical data analysis, which is essential for high-stakes healthcare decisions. Our comprehensive experimental evaluation has provided significant insights into the efficacy of Conditional Self-Attention Imputation (CSAI) across various healthcare datasets and scenarios of missing data. While CSAI showcased promising results, surpassing established benchmarks in many instances, it is crucial to acknowledge the challenges observed with increased data sparsity, particularly in the PhysioNet and MIMIC_89f datasets at a 20% masking ratio. Future work should also explore the incorporation of more granular medical knowledge, potential model enhancements to better capture long-range dependencies, and techniques to handle extreme cases of missing data. Additionally, the open-source nature of our approach encourages collaboration and iterative improvement from the research community, which is crucial for advancements in this domain. § ACKNOWLEDGMENTS This paper represents independent research funded by the NIHR Maudsley Biomedical Research Centre at South London and Maudsley NHS Foundation Trust and King’s College London. The views expressed are those of the author(s) and not necessarily those of the NIHR or the Department of Health and Social Care. The work of Linglong Qian was supported by the King's-China Scholarship Council PhD Scholarship Programme (K-CSC) under Grant CSC202008060096. All experiments were implemented on CREATE HPC.[King's College London. (2022). King's Computational Research, Engineering and Technology Environment (CREATE). Retrieved March 2, 2022, from https://doi.org/10.18742/rnvf-m076] § DATA AVAILABILITY The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request. § ETHICS DECLARATIONS §.§ Conflict of interest The authors declare no conflict of interest regarding the publication of this paper. § MASKING IMPLEMENTATION | http://arxiv.org/abs/2312.16713v2 | {
"authors": [
"Linglong Qian",
"Zina Ibrahim",
"Hugh Logan Ellis",
"Ao Zhang",
"Yuezhou Zhang",
"Tao Wang",
"Richard Dobson"
],
"categories": [
"cs.LG",
"cs.AI"
],
"primary_category": "cs.LG",
"published": "20231227204240",
"title": "Knowledge Enhanced Conditional Imputation for Healthcare Time-series"
} |
Notes on Retroactive Interference Model of Forgetting
Mikhail Katkov
January 14, 2024
====================================================================== We present an analytical derivation of the minimal and maximal number of items retained in the recently introduced Retroactive Interference Model of Forgetting. We also compute analytically the probability that two items presented at different times are both retained in memory at a later time. § INTRODUCTION Studying human memory is a complex task, since there are presumably many interacting processes that are hard to isolate. Traditionally, models in the psychology of memory attempt to describe many processes at once by creating very complicated mathematical models that have many parameters, are hard to analyse, and usually require a separate fitting of parameters for each experiment. Since the parameters have to be adjusted for each measurement, it is not clear how well these models would describe memory processes outside of laboratory settings. We have recently proposed a different type of model to describe human memory, based on few assumptions and with zero or few parameters that describe the class of stimuli but are independent of experimental settings. These types of models, if validated, have a much broader applicability in everyday-life settings. We have recently presented mathematical models for memory forgetting and retrieval <cit.>. In that publication we asked about mathematical properties of the forgetting model that may help design experiments to validate the underlying mathematical construction. In this note we provide answers to two questions raised in that publication regarding the forgetting model. Experimentally, forgetting is traditionally measured in the form of a retention function (RC(t)) - the probability that a memory is retained for time t since acquisition <cit.>. The retention function is observed to decrease monotonically with time, and although the form of the retention function is debated, one of the best candidates is a power function of time. One popular explanation of memory forgetting in humans is retroactive interference <cit.>. It assumes that new incoming memories interact with stored memories and cause some past memories to disappear. There are different approaches to modelling forgetting (see for example <cit.>), but here we concentrate on the consequences of our mathematical model <cit.> for possible experimental validation of the assumptions underlying it. The model assumes that each incoming memory (one memory at each discrete time step) has an n-dimensional valence vector, with components being i.i.d. random variables. Every incoming memory is added to the memory pool, erasing all stored memories that have smaller valences in all dimensions. This model can be solved analytically, and the resulting retention curve agrees well with experiment. Nevertheless, it is not clear to what extent the underlying principles hold during retention of memory items. We have recently posed several mathematical questions that may provide additional experimental tests related to this issue <cit.>. We consider the forgetting model III from <cit.>. It states that there is a retention process, where at each time step a new memory is presented to the process. A memory in the model is characterized by a vector of valences v_k ∈ ℝ, k=1..n, where n is a single integer parameter of the model representing its dimensionality. The valences of an incoming memory are assumed to be sampled from an arbitrary stationary (absolutely continuous) distribution. An incoming memory erases all currently stored memories that have smaller valences in all dimensions, and is then added to the stored memories. Formally, all incoming memories are described by a collection of valences V = { v_k,t ∈ ℝ, k=1..n, t ∈ ℕ }. For each time T we can define a set of stored memories M(T; V) = {t_1, ..., t_|M(T)|}, t_k < T, which contains the presentation times of the stored memories, where the cardinality |M(T; V)| represents the number of stored memories at time T. At time T+1 a new memory with valences v_k,T+1 is presented, and the set of stored memories is updated as M(T+1; V) = {T+1} ∪ { t_m ∈ M(T; V) : ∃ k | v_k,t_m > v_k,T+1 }. One can ask for the expected number of retained items, referred to as the retention curve, RC_n(T) = E( |M(T; V)| ), with respect to the distribution of v_k,t. It turns out that the retention curve does not depend on the particular distribution of valences, since it depends only on the order statistics of memory items in each dimension <cit.>.
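The retention process is straightforward to simulate directly; the following sketch estimates the probability that a single item survives T subsequent presentations (the quantity computed analytically below). Uniform valences are used for convenience, which is justified because only order statistics matter:

import numpy as np

def survives(T, n, rng):
    # One trial: a memory with i.i.d. uniform valences u is presented,
    # followed by T further memories; it is erased as soon as some later
    # memory exceeds it in all n dimensions.
    u = rng.random(n)
    return not any(np.all(rng.random(n) > u) for _ in range(T))

rng = np.random.default_rng(1)
print(np.mean([survives(100, 2, rng) for _ in range(20000)]))
# Monte Carlo estimate of the retention curve RC_2(100) derived below.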
§ MINIMAL NUMBER OF RETAINED ITEMS We proposed two kinds of tests that can potentially check the validity of the model <cit.>. Both are related to the partially ordered set (poset) nature of the model. One can define a partial order relationship between memories via the erasure operation: a memory A = v_k,A is larger than a memory B = v_k,B if its valences are larger in all dimensions, A ≻ B ≡ v_k,A > v_k,B ∀ k. In terms of the recall process, this means that the larger memory A will erase memory B if A is presented later in time. Due to the linear extension principle <cit.>, one can permute the presentation order such that all memories are retained. For instance, permute the items such that the valences along dimension k are sorted in decreasing order. Consider an item at time t: since v_k,t_1 < v_k,t ∀ t_1 > t, this item will not be erased. Since this construction is valid for all t, all items are kept in memory under that order. We have asked what the minimum number of retained memories across all permutations of the presentation order would be for a given memory realization (a set of v_k,t). This question is related to the number of maximal elements in the poset. An element of a poset is maximal if there is no element greater than it. In our setting this means that a maximal element cannot be erased at any presentation position, and therefore the number of retained items cannot be smaller than the number of maximal elements. The first question is related to the distribution of maximal elements for a given poset. Let P(|M_n(T; V_σ(T, V))| = k) denote the probability that the retention process M_n of dimension n at time T, with presentation times permuted according to σ(T, V), has cardinality k. Then (1) the maximal number of retained items across presentation order permutations is always T for any V; and (2) the distribution of the minimal number of retained memories satisfies P(|M_n(T; V_σ(T, V))| = k) = P(|M_{n-1}(T; V)| = k), where σ(T, V) is such that |M_n(T; V_σ(T, V))| ≤ |M_n(T; V_σ'(T, V))| ∀ σ'(T, V); in other words, the distribution of the minimal number of retained items in the model of dimension n is the same as the distribution of retained items under random presentation order in the model of dimension n-1. Note that averaging is done over all V, whereas for each V there is possibly a different permutation σ(T, V). (1) Sort the memories w.l.o.g. (without loss of generality) across dimension 1 in descending order.
Then v_1,t_2 < v_1,t_1 for t_1 < t_2, and a memory presented at t_1 cannot be erased, because every later item has a smaller valence in at least one dimension. (2) Since the erasure operation is defined with respect to order statistics, we need to show that the distribution of cardinalities under the permuted order corresponds to the distribution of cardinalities for the unpermuted memory process of dimensionality one smaller. Step 1. Permutation leading to the minimal number of retained items. Sort the memories w.l.o.g. across dimension 1 in ascending order. We claim this presentation order is σ(T, V). Indeed, take any t ∈ M_n(T; V_σ(T,V)). Any memory with t' < t cannot erase memory t if inserted after memory t, since its valence in the first dimension is smaller. Memories with t' > t cannot erase memory t either, because they were presented at a later time in the sorted order and did not erase it. Therefore, each memory t ∈ M_n(T; σ(T, V)) will not be erased at any position, i.e., it will be retained under any presentation order. Step 2. Permutations in the last dimension. Consider 𝕍, the set of all permutations of V along the last dimension. For any V the cardinality |𝕍| = T! is the same. Moreover, |M_{n-1}(T; V)| = |M_{n-1}(T; V')| ∀ V' ∈ 𝕍, and there is only one permutation from Step 1 that guarantees non-erasure in the last dimension. Therefore, P(|M_n(T; V_σ(T, V))| = k) = P(|M_{n-1}(T; V)| = k). § PAIRWISE RETENTION CORRELATIONS The second type of test for the model comes from the observation in numerical simulations that the correlations between retained items acquired at different presentation times are non-zero for dimensions greater than 1. Here we present an analytical derivation of such correlations. Note that these correlations arise purely from the poset nature of the model. It is instructive, first, to derive the retention function in a slightly different way than was done in <cit.>. It is essentially the same calculation as in Eq. (22) of <cit.>, written in a different form. One can describe the forgetting process as consisting of two Markov processes. One is related to the potential erasure of a previously stored item within a single dimension, and the second is related to item survival over a time step as determined by the first Markov process. In more detail, there are two states in the Markov process over time: `0' - the item is erased, and `1' - the item is still present in memory. There are also two states in the Markov process over dimensions: `0' - the item is potentially erased pending the following dimension, and `1' - the item has survived the time step. For instance, suppose that a memory item acquired at time t' survived until time t; if v_k,t < v_k,t', the item will survive independently of the valence relationships in the other dimensions. The second Markov process has exactly n steps, and the state after the last step determines whether the memory survived this time step or not. For example, the state `0' indicates that all valences of the original item are smaller than those of the currently acquired one, and the item is erased at this time step if it was in memory at the previous step. §.§ Retention curve derivation Since the retention function does not depend on the distribution of valences, we can assume that the distribution is uniform between 0 and 1. We first consider the probability for a memory with valence u to be retained after T time steps, and then average this probability across all realizations of u. Let

P(u; t) = ( [ P_0(u; t); P_1(u; t) ] ),

where the components correspond to the states `0' (erased) and `1' (retained) and u is the n-dimensional valence of the memory. Immediately after presentation the memory is retained, therefore P(u; 0) = (0, 1).
In order to compute this probability vector we need to analyze what happens during a single time step. Consider the case when the memory is retained at the beginning of a time step. The process of forgetting within a single time step can itself be described as a Markov process: the memory can be retained at a time step due to an arbitrary single dimension d in which the valence of the newly presented item is smaller than u_d. Therefore, to describe this Markov process we need to track the probability that the memory is retained or potentially erased after each dimension. Let

𝒫(u; d) = ( [ 𝒫_0(u; d); 𝒫_1(u; d) ] ).

To write the transition matrix due to one dimension, note that if the memory is retained then it remains retained; however, if the memory is potentially erased it becomes retained when the valence of the newly incoming memory is smaller than u_d, which happens with probability u_d. Therefore, the transition to the next dimension is

𝒫(u; d) = ℰ(u; d) 𝒫(u; d-1),
ℰ(u; d) = ( [ 1-u_d 0; u_d 1 ] ),

where initially the memory is potentially erased, 𝒫(u; 0) = (1, 0), and the probability that the memory is retained after dimension n is 𝒫(u; n). One can observe that

ℰ(u; d) = ( [ 0 -1; 1 1 ] ) ( [ 1 0; 0 1-u_d ] ) ( [ 1 1; -1 0 ] ),

and therefore the probability that the memory survives the time step, given that it was in memory, is

𝒫(u; n) = ℰ(u; n) ℰ(u; n-1) ... ℰ(u; 1) ( [ 1; 0 ] ) = ( [ 0 -1; 1 1 ] ) ( [ 1 0; 0 ∏_{d=1}^n (1-u_d) ] ) ( [ 1 1; -1 0 ] ) ( [ 1; 0 ] ) = ( [ ∏_{d=1}^n (1-u_d); 1 - ∏_{d=1}^n (1-u_d) ] ).

Returning to the first Markov process, one can observe that if the memory was erased before the current time step it stays erased, and if the memory was retained before the current step it will be forgotten with probability ∏_{d=1}^n (1-u_d). Therefore, the forgetting process can be written as a Markov process

P(u; t+1) = Er(u) P(u; t),
Er(u) = [ ( [ 1; 0 ] )  𝒫(u; n) ] = ( [ 1 ∏_{d=1}^n (1-u_d); 0 1 - ∏_{d=1}^n (1-u_d) ] ),

where Er is the transition matrix for one time step. Now we can compute the retention process in time:

P(u; T) = Er(u) P(u; T-1) = Er(u)^T P(u; 0) = ( [ 1 -1; 0 1 ] ) ( [ 1 0; 0 1 - ∏_{d=1}^n (1-u_d) ] )^T ( [ 1 1; 0 1 ] ) ( [ 0; 1 ] ) = ( [ 1 -1; 0 1 ] ) ( [ 1 0; 0 (1 - ∏_{d=1}^n (1-u_d))^T ] ) ( [ 1 1; 0 1 ] ) ( [ 0; 1 ] ) = ( [ 1 - (1 - ∏_{d=1}^n (1-u_d))^T; (1 - ∏_{d=1}^n (1-u_d))^T ] ).

From here,

RC_n(T) = ∫_0^1 du_1 ... ∫_0^1 du_n (1 - ∏_{d=1}^n (1-u_d))^T = ∫_0^1 du_1 ... ∫_0^1 du_n ∑_{m=0}^T (-1)^m \binom{T}{m} ∏_{d=1}^n (1-u_d)^m = ∑_{m=0}^T \binom{T}{m} (-1)^m/(m+1)^n.
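As a sanity check (a sketch), the closed-form sum can be evaluated exactly and compared against the Monte Carlo estimate from the simulation above; rational arithmetic is used because the alternating binomial sum suffers catastrophic cancellation in floating point for large T:

from fractions import Fraction
from math import comb

def retention_curve(T, n):
    # RC_n(T) = sum_{m=0}^{T} C(T, m) (-1)^m / (m + 1)^n, exactly.
    s = sum(Fraction(comb(T, m) * (-1) ** m, (m + 1) ** n)
            for m in range(T + 1))
    return float(s)

print(retention_curve(100, 2))  # compare with the simulated mean above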
§.§ Retention curve for 2 memories We start the process with only one memory, with an n-dimensional valence vector u. After t_1 time steps another memory with valence w is presented. After a further t time steps we represent the state of the two memories as

P(t; u, w) = ( [ P_00(t; u, w); P_01(t; u, w); P_10(t; u, w); P_11(t; u, w) ] ),

where the first index indicates the state of the first memory and the second index that of the second memory (`0' - erased, `1' - retained). We consider the forgetting process as a Markov process, P(t+1; u, w) = T_t(u,w) P(t; u, w), since each step is independent and the state of the system depends only on the previous step. T_t(u,w) is the transition matrix for a single time step, which depends on the valences of both memories. At t = 0 the second memory is acquired; since the first memory may have been erased during the preceding t_1 time steps, there are two non-zero components, P_01 and P_11, with probabilities computed in the previous section combined with the effect of the presentation of the second memory:

P(0; u, w) = ( [ 0; 1 - (1 - ∏_{i=1}^n (1-u_i))^{t_1} (1 - ∏_{i=1}^n Θ(w_i - u_i)); 0; (1 - ∏_{i=1}^n (1-u_i))^{t_1} (1 - ∏_{i=1}^n Θ(w_i - u_i)) ] ),

where Θ(x) is the Heaviside theta function. The transition matrix is

T_t(u,w) = ( [ 1 ∏_{i=1}^n (1-w_i) ∏_{i=1}^n (1-u_i) 𝒫_00(u, w; n); 0 1-∏_{i=1}^n (1-w_i) 0 𝒫_01(u, w; n); 0 0 1-∏_{i=1}^n (1-u_i) 𝒫_10(u, w; n); 0 0 0 𝒫_11(u, w; n) ] ),

where 𝒫(u, w; d) = ( [ 𝒫_00(u, w; d); 𝒫_01(u, w; d); 𝒫_10(u, w; d); 𝒫_11(u, w; d) ] ), indexed analogously. The first column represents the fact that erased memories stay erased, the second and third columns deal with the processes where only one memory is retained, and the last column represents the forgetting step when both memories are still present. To compute 𝒫(u, w; d) we consider another Markov process, whose initial state is `both memories present' and potentially erased by the new stimulus. In each dimension the valence of the new stimulus is either smaller than that of one or both items, in which case the corresponding memory (or memories) will survive this time step, or greater, in which case the corresponding memory can still potentially survive due to the next dimension. In particular, 𝒫_00(u, w; n) represents the probability that both memories are erased in a single step. This Markov process can be described as

𝒫(u, w; d) = ℰ(u, w; d) 𝒫(u, w; d-1),
ℰ(u, w; d) = ( [ 1-max(u_d, w_d) 0 0 0; (w_d-u_d) Θ(w_d-u_d) 1-u_d 0 0; (u_d-w_d) Θ(u_d-w_d) 0 1-w_d 0; min(u_d, w_d) u_d w_d 1 ] ),
𝒫(u, w; 0) = ( [ 1; 0; 0; 0 ] ).

The last column states that an item that survived the previous dimensions will survive this dimension. The previous two columns describe the processes where one memory has survived but the second one can still potentially be erased, and the first column describes the survival process when both memories can potentially be erased. To simplify the analysis we consider two cases: the first memory has a larger valence than the second memory in dimension d (w_d < u_d), and the opposite case. First memory has larger valence. In this case

ℰ(u, w; d | u_d > w_d) = ( [ 1-u_d 0 0 0; 0 1-u_d 0 0; u_d-w_d 0 1-w_d 0; u_d u_d w_d 1 ] ).

The eigenvalues are 1, 1-u_d, 1-u_d, 1-w_d, with corresponding eigenvectors (as column vectors)

U_1 = ( [ 0 0 -1 0; 0 -1 0 0; 0 0 1 -1; 1 1 0 1 ] ).

Second memory has larger valence. In this case

ℰ(u, w; d | u_d < w_d) = ( [ 1-w_d 0 0 0; w_d-u_d 1-u_d 0 0; 0 0 1-w_d 0; w_d u_d w_d 1 ] ).

The eigenvalues are 1, 1-u_d, 1-w_d, 1-w_d, with corresponding eigenvectors (as column vectors)

U_2 = ( [ 0 0 -1 0; 0 -1 0 0; 0 0 1 -1; 1 1 0 1 ] ).

It is clear that the effect of each dimension can be applied in any order with the same result. It can also be checked by direct calculation that ℰ(u, w; d_1 | u_{d_1} > w_{d_1}) ℰ(u, w; d_2 | u_{d_2} < w_{d_2}) = ℰ(u, w; d_2 | u_{d_2} < w_{d_2}) ℰ(u, w; d_1 | u_{d_1} > w_{d_1}).
Therefore we can apply ℰ(u, w; d | u_d > w_d) for all dimensions where u_d > w_d and then apply ℰ(u, w; d | u_d < w_d) for all dimensions where u_d < w_d:

𝒫(u, w; n) = U_2 diag( 1, ∏_{d|w_d>u_d} (1-u_d), ∏_{d|w_d>u_d} (1-w_d), ∏_{d|w_d>u_d} (1-w_d) ) U_2^{-1} × U_1 diag( 1, ∏_{d|w_d<u_d} (1-u_d), ∏_{d|w_d<u_d} (1-u_d), ∏_{d|w_d<u_d} (1-w_d) ) U_1^{-1} ( [ 1; 0; 0; 0 ] ).

Let F(x) = ∏_{d|w_d<u_d} (1-x_d) and G(x) = ∏_{d|w_d>u_d} (1-x_d); then (<ref>) can be written as

𝒫(u, w; n) = U_2 diag(1, G(u), G(w), G(w)) U_2^{-1} × U_1 diag(1, F(u), F(u), F(w)) U_1^{-1} ( [ 1; 0; 0; 0 ] ).

Using

U_1^{-1} ( [ 1; 0; 0; 0 ] ) = ( [ 1; 0; -1; -1 ] ),
U_1 diag(1, F(u), F(u), F(w)) ( [ 1; 0; -1; -1 ] ) = ( [ F(u); 0; F(w)-F(u); F(w) ] ),

one obtains

𝒫(u, w; n) = ( [ F(u)G(w); F(u)(G(u)-G(w)); G(w)(F(w)-F(u)); 1 - F(w)G(w) + F(u)G(w) - F(u)G(u) ] ).

§.§ Simplifying Markov process over time The transition matrix (<ref>) can now be rewritten explicitly; note that

∏_{i=1}^n (1-w_i) = F(w)G(w),  ∏_{i=1}^n (1-u_i) = F(u)G(u),

and, abbreviating H_xy = F(x)G(y),

T_t(u,w) = ( [ 1 H_ww H_uu H_uw; 0 1-H_ww 0 H_uu-H_uw; 0 0 1-H_uu H_ww-H_uw; 0 0 0 1-H_ww+H_uw-H_uu ] ).

This can be represented as

T_t(u,w) = U_t diag(1, 1-H_uu, 1-H_ww, 1-H_ww+H_uw-H_uu) U_t^{-1},
U_t = ( [ 1 -1 -1 1; 0 0 1 -1; 0 1 0 -1; 0 0 0 1 ] ).

Therefore,

P(t, t_1; u, w) = T_t(u,w) P(t-1, t_1; u, w) = T_t(u,w)^t P(0, t_1; u, w)
= U_t diag(1, (1-H_uu)^t, (1-H_ww)^t, (1-H_ww+H_uw-H_uu)^t) U_t^{-1} ( [ 0; 1-(1-H_uu)^{t_1}; 0; (1-H_uu)^{t_1} ] )
= U_t ( [ 1; (1-H_uu)^{t+t_1}; (1-H_ww)^t; (1-H_uu)^{t_1} (1-H_ww+H_uw-H_uu)^t ] ).

§.§ Probability of events

P(t, t_1) = ∫_{u, w ∈ [0,1]^n} du dw P(t, t_1; u, w) = U_t ( [ 1; ∑_{m=0}^{t+t_1} \binom{t+t_1}{m} (-1)^m/(m+1)^n; ∑_{m=0}^{t} \binom{t}{m} (-1)^m/(m+1)^n; ∫ du dw (1-H_uu)^{t_1} (1-H_ww+H_uw-H_uu)^t ] ).

The last integral requires closer attention:

P_11(t, t_1) = ∫ du dw (1-H_uu)^{t_1} (1-H_ww+H_uw-H_uu)^t.

Expanding both brackets into finite series and then collecting terms,

P_11(t, t_1) = ∫ du dw ( ∑_{m_1=0}^{t_1} \binom{t_1}{m_1} (-1)^{m_1} H_uu^{m_1} ) ( ∑_{m_2+m_3+m_4+m_5=t} t!/(m_2! m_3! m_4! m_5!) (-H_ww)^{m_3} H_uw^{m_4} (-H_uu)^{m_5} )
= ∑_{m_2+m_3+m_4+m_5=t} ∑_{m_1=0}^{t_1} \binom{t_1}{m_1} t!/(m_2! m_3! m_4! m_5!) (-1)^{m_3+m_1+m_5} ∫ du dw H_ww^{m_3} H_uw^{m_4} H_uu^{m_1+m_5}.

The integral inside the sum is a product of integrals over each dimension, so we can compute the integral for each dimension separately. It can be computed as a sum of two integrals: one evaluated over the triangle where u_d > w_d and the other where u_d < w_d. Case 1: u_d > w_d.

a_1(m) = ∫_0^1 dw_d (1-w_d)^{m_3} ∫_{w_d}^1 du_d (1-u_d)^{m_1+m_4+m_5} = 1/(m_1+m_4+m_5+1) ∫_0^1 dw_d (1-w_d)^{m_3} (1-w_d)^{m_1+m_4+m_5+1} = 1/((m_1+m_4+m_5+1)(m_1+m_3+m_4+m_5+2)).

Case 2: u_d < w_d.

a_2(m) = ∫_0^1 du_d (1-u_d)^{m_1+m_5} ∫_{u_d}^1 dw_d (1-w_d)^{m_3+m_4} = 1/(m_3+m_4+1) ∫_0^1 du_d (1-u_d)^{m_1+m_5} (1-u_d)^{m_3+m_4+1} = 1/((m_3+m_4+1)(m_1+m_3+m_4+m_5+2)).

Finally, note that in the case where all w_d > u_d the first memory is erased immediately after the presentation of the second item, so the corresponding term a_2^n(m) needs to be subtracted from the computed integral:

P_11(t, t_1) = ∑_{m_2+m_3+m_4+m_5=t} ∑_{m_1=0}^{t_1} \binom{t_1}{m_1} t!/(m_2! m_3! m_4! m_5!) (-1)^{m_3+m_1+m_5} [ (a_1(m)+a_2(m))^n - a_2(m)^n ],
a_1(m) = 1/((m_1+m_4+m_5+1)(m_1+m_3+m_4+m_5+2)),
a_2(m) = 1/((m_3+m_4+1)(m_1+m_3+m_4+m_5+2)).

A similar correction should be added to all P_xy(t, t_1). The sum of the two terms simplifies to

a_1(m)+a_2(m) = (m_1+m_3+2m_4+m_5+2)/((m_3+m_4+1)(m_1+m_4+m_5+1)(m_1+m_3+m_4+m_5+2)).
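The final expression is directly computable; the following sketch evaluates P_11(t, t_1) from the quadruple sum, again in exact rational arithmetic to sidestep cancellation in the alternating terms:

from fractions import Fraction
from math import comb, factorial

def p11(t, t1, n):
    # P_11(t, t_1): probability that both items are retained, evaluated
    # from the sum derived above (including the a_2^n correction).
    total = Fraction(0)
    for m2 in range(t + 1):
        for m3 in range(t - m2 + 1):
            for m4 in range(t - m2 - m3 + 1):
                m5 = t - m2 - m3 - m4
                multi = factorial(t) // (factorial(m2) * factorial(m3)
                                         * factorial(m4) * factorial(m5))
                for m1 in range(t1 + 1):
                    a1 = Fraction(1, (m1 + m4 + m5 + 1)
                                  * (m1 + m3 + m4 + m5 + 2))
                    a2 = Fraction(1, (m3 + m4 + 1)
                                  * (m1 + m3 + m4 + m5 + 2))
                    sign = (-1) ** (m1 + m3 + m5)
                    total += (comb(t1, m1) * multi * sign
                              * ((a1 + a2) ** n - a2 ** n))
    return float(total)

print(p11(5, 3, 2))  # small (t, t_1) values keep the sum cheap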
We have presented a construction for computing the probability that two items presented at different times are both retained in memory. The interaction of more memories can be considered similarly. The number of states grows exponentially with the number of memories (2^M), more terms will appear in integrals similar to Eq. (<ref>), and the number of cases in computing an integral will grow as M!; therefore a computer program can be used to compute the probabilities for a larger number of memories. | http://arxiv.org/abs/2312.16086v1 | {
"authors": [
"Mikhail Katkov"
],
"categories": [
"q-bio.NC"
],
"primary_category": "q-bio.NC",
"published": "20231226151741",
"title": "Notes on Retroactive Interference Model of Forgetting"
} |
./images/references.bib | http://arxiv.org/abs/2312.16546v1 | {
"authors": [
"Ari Pakman"
],
"categories": [
"stat.CO"
],
"primary_category": "stat.CO",
"published": "20231227121737",
"title": "Super-Efficient Exact Hamiltonian Monte Carlo for the von Mises Distribution"
} |
Department of Energy Science and Engineering, IIT Bombay, Powai, Mumbai 400076, India
Department of Physics, Khalifa University of Science and Technology, Abu Dhabi 127788, United Arab Emirates
Department of Physics, Khalifa University of Science and Technology, Abu Dhabi 127788, United Arab Emirates ([email protected])
Department of Energy Science and Engineering, IIT Bombay, Powai, Mumbai 400076, India ([email protected])
Department of Physics, Indian Institute of Technology, Bombay, Powai, Mumbai 400076, India

We report a comprehensive first–principles study of the relative stability of the various possible crystal structures, and the electronic and optical properties, of ternary alkali metal chalcogenides ACuX (A= Na/K and X= S/Se/Te) through density functional theory (DFT) calculations. The energetics and phonon spectra of more than 700 structures were compared, and seven possible stabilized structures of the six ACuX compounds were identified using the fixed composition evolutionary search method. Our electronic band structure simulations confirm that all the ternary ACuX compounds are direct band gap semiconductors, with band gaps lying between 0.83 eV and 2.88 eV. These compounds exhibit directly allowed electronic transitions from the valence band to the conduction band, which leads to a significant optical transition probability. This yields a sharp rise in the optical absorption spectra (ranging between 10^4 and 10^5 cm^-1) near the energy gap. The estimated spectroscopic limited maximum efficiency (SLME) is about 18% for an 8 μm thick NaCuTe film. For the other ACuX compounds, the SLME ranges between 10% and 13%. In addition, we also explored the feasibility of these ternary ACuX compounds for photocatalytic water splitting applications and found that they can be promising candidates as photocathodes for hydrogen evolution reactions. With a large spread in the band gap and an interesting band topology near the Fermi level, these chalcogenides can also be quite fertile for other energy applications such as thermoelectrics, LEDs, etc.

Ternary Alkali Metal Copper Chalcogenides ACuX (A= Na, K and X= S, Se, Te): Promising Candidate for Solar Harvesting Applications
Aftab Alam
January 14, 2024
=================================================================================================================================

§ INTRODUCTION In recent years, multinary chalcogenide semiconductors have witnessed extensive applications in the development of thin-film solar photovoltaic (PV) devices. For example, CuIn_{1-x}Ga_x(S,Se)_2 (CIGS), Cu_2ZnSnS_4, CdTe, SnS, Sb_2S_3, etc. have shown immense promise in thin film solar cells <cit.>. Among them, CdTe and CIGS are most prevalent due to their direct band gaps and high light absorption capabilities, enabling them to achieve notable solar cell efficiencies <cit.>. It is, however, also known that CdTe and CIGS come with certain drawbacks. They contain toxic elements, namely Cd and Te, as well as elements that are relatively scarce in the earth's crust, such as Te, In, and Ga, which is a major concern that could limit market feasibility for sustainable, cost-effective, and large-scale manufacturing. Thus, identifying efficient light absorber materials for thin film-based PV devices with band gaps in the visible range is a crucial challenge.
Apart from this, there are a few other major issues with existing solar absorbers employed in PV devices, e.g., organic moieties that cause degradation and short carrier lifetimes (as in organic/inorganic halide perovskites <cit.>), and the presence of defect states causing high recombination rates <cit.>. As such, finding alternative materials is an urgent need of the hour. The material should satisfy a few prerequisite criteria, such as low cost (elemental abundance), high carrier lifetime, high absorption, and defect tolerance. Recently, the alkali metal ternary copper tellurides ACuTe (A=Na, K) have been proposed as efficient light absorbers for thin film solar cells, satisfying most of the above criteria <cit.>. They are reported to be direct band gap semiconductors with band gap values of 1.43 eV and 1.63 eV for NaCuTe and KCuTe, respectively. The high absorption coefficient (∼ 10^4 cm^-1) in the visible range and a reasonably high carrier lifetime due to few deep-level defects suggest their potential as efficient solar absorbers. The high abundance of the constituent elements is an added advantage. This has motivated us to further investigate the feasibility of the other chalcogenide (S and Se) based ternary compounds as solar absorbers. The ternary alkali metal copper chalcogenides ACuX (A=Na, K and X=S, Se, Te) were experimentally synthesized by Savelsberg and Schäfer in 1978 <cit.>. They found that KCuS crystallizes in an orthorhombic structure (space group Pna2_1), while KCuSe/Te has a hexagonal structure with space group P6_3/mmc. Similarly, NaCuSe/Te is reported to crystallize in tetragonal structures with space group P4/nmm. For a long time thereafter, no further experimental or theoretical studies were reported on this class of compounds. Recently, Vaitheeswaran et al. <cit.> studied the structural, electronic, and optical properties of ternary KCuSe and KCuTe using ab-initio calculations. They reported these compounds to be semiconductors with suitable band gaps that could be promising as solar absorbers in PV, photodetector, and other optoelectronic device applications. On a similar line, Boualleg et al. <cit.> reported the phase transition and thermal properties of KCuSe and KCuTe. They reported these two compounds to stabilize in orthorhombic and tetragonal phases, respectively, in contrast to the experimental hexagonal structure. The dynamical stability study also confirmed the existence of other possible crystal structures for the KCuSe and KCuTe compounds. For the other ternary chalcogenides, i.e., NaCuSe, NaCuTe, and KCuS, no further studies (either experimental or theoretical) have been reported reassessing their crystal structures. NaCuS has never been reported earlier, and hence its crystal structure is not known. The conflict between experimentally and theoretically predicted crystal structures, and the prediction of the most stable structural phases, needs to be addressed. In addition, a thorough study investigating the potential of all these ternary compounds for various solar harnessing applications can be extremely useful. In this work, we aim to present a detailed insight into the structural stability and the electronic and optical properties of the ternary ACuX compounds, assessing their potential as efficient light absorbers, using first–principles Density Functional Theory (DFT) calculations. In particular, we utilized a fixed composition evolutionary search method to accurately predict the correct crystal structures of all six compounds.
The method generates more than 700 structures (depending on the system) and scrutinizes the most stable among them based on certain criteria. This has proven to be a powerful method for predicting the ground state structure of a given system. Interestingly, though the crystal structures of a few systems match those predicted experimentally, there are other systems where the most stable structure is a few meV/atom lower in energy as compared to the experimental one. Lattice dynamics calculations are also performed in parallel to evaluate the dynamically stable structures for each compound. We further simulated the electronic and optical properties of all six ACuX (A=Na/K, X=S/Se/Te) compounds in their most stable phases. All six ACuX compounds are found to be direct band gap semiconductors with band gap values ranging between 0.83 eV and 2.88 eV. These band gaps are simulated using the most accurate hybrid exchange-correlation functional. The optical simulations confirm the anisotropic nature of the absorption spectra, having different in-plane and out-of-plane values, with absorption coefficients of ∼10^4 cm^-1 in the visible range. The theoretically predicted spectroscopic limited maximum efficiency (SLME) is found to lie in the range 10% to 18% for a thickness of 8 to 10 μm under 1 sun illumination (AM1.5G). Further, we evaluated the potential of all the stable ACuX compounds as photocathodes for water splitting applications based on their band edge positions with respect to the water redox potentials.

§ COMPUTATIONAL DETAILS

The fixed composition evolutionary search method is utilized for predicting the crystal structures, as implemented in the USPEX package<cit.>. An initial population of 20 structures is generated randomly<cit.>, while the subsequent generations are produced by variation operators, which include heredity (50%), random symmetric structure generation (30%), and soft mutation (20%). Subsequently, a comprehensive exploration is carried out by generating more than 700 crystal structures, depending on the compound. Our DFT-based enthalpy calculations identify the optimal structures from each generation according to the selection criterion. The local optimization is executed in a five-step process utilizing the generalized gradient approximation (GGA)<cit.> with the Perdew-Burke-Ernzerhof (PBE) functional<cit.>, as implemented in the Vienna Ab–initio Simulation Package (VASP) <cit.>. For a more accurate calculation of the electronic structure (band gap), the screened hybrid HSE06 functional <cit.> was employed. The kinetic energy cut–off for the plane wave basis set was set to 520 eV. The Brillouin zone (BZ) integration was done using a Γ–centered scheme with 12×12×6 k–meshes for ionic relaxations and 16×16×8 for self-consistent-field calculations. The density of states (DOS) was calculated using the tetrahedron method with Blöchl corrections<cit.>. The electronic band structure within HSE06 was simulated using an 8×8×4 k–mesh. All the atoms in the unit cell are fully relaxed using the conjugate gradient method until the force (energy) converges below 0.001 eV/Å (10^-7 eV). The phonon spectra are calculated using the supercell approach as implemented in the phonopy package <cit.>.
The effect of spin–orbit coupling (SOC) was included for all the compounds. The formation energies (Δ E_F) of ACuX (A = Na/K and X = S/Se/Te) were calculated using the following expression,

Δ E_F = [E_Tot - (nE_A + lE_Cu + mE_X)]/N

where "E_Tot" is the total energy of the target compound, "n", "l", and "m" are the numbers of Na/K, Cu, and S/Se/Te atoms present in the compound, while N is the total number of atoms present in the cell. E_A, E_Cu, and E_X are the total energies per atom of the constituent elements in their respective bulk equilibrium structures. To simulate the optical properties, the frequency–dependent dielectric constants were calculated within the independent particle approximation (IPA) <cit.>, as implemented in VASP. To estimate the theoretical maximum possible efficiency, we used an improved version of the Shockley-Queisser (SQ) efficiency limit known as the spectroscopic limited maximum efficiency (SLME), proposed by Yu et al.<cit.>, using the SL3ME code<cit.>. More details about the theoretical formulation of the optical calculations and the SLME can be found in Sec. I of the supplementary information (SI)<cit.>.

§ RESULTS AND DISCUSSION

§.§ Energetics and Dynamical Stability of ACuX

Out of all the structures (more than 700) generated by the USPEX package <cit.>, we selected the nine structures with the lowest formation energies for each ternary ACuX (A= Na, K and X=S, Se, Te) compound. The space groups (SG) of these nine structures are Pnma, Pna2_1, P6_3/mmc, P4/nmm, Cmcm, P2_1/m, F4̅3m, P6_3mc, and C2/c, respectively. The prototype crystal structures of these space groups are shown in Fig. S1 of the SI <cit.>, and their optimized lattice parameters and corresponding formation energies obtained from the DFT calculations for all six ACuX compounds are listed in Tables S1 to S6 of the SI <cit.>. Table <ref> displays the space groups and formation energies (Δ E_F) of the experimentally reported crystal structures and of the energetically most stable simulated crystal structures, together with their energy difference (Δ). It is observed from Table <ref> that NaCuS (never studied before) crystallizes in the Pna2_1 space group. However, energetically, NaCu(Se/Te) should stabilize in the hexagonal structure (SG: P6_3/mmc), whose Δ E_F is ∼17-20 meV/atom lower than that of the experimentally reported tetragonal structure (space group P4/nmm). KCuS was found to stabilize in the orthorhombic structure having space group Pna2_1, which agrees with the experimental prediction<cit.>. In contrast, KCuSe is found to stabilize in the orthorhombic crystal structure, which is 12 meV/atom lower in energy than the experimentally reported hexagonal structure<cit.>. KCuTe stabilizes in the hexagonal structure (SG: P6_3/mmc), which matches the experimental findings<cit.>. To further assess the phase stability, especially for the NaCuSe, NaCuTe, and KCuSe compounds where the experimental structure is higher in energy (by 12 to 20 meV/atom) as compared to the theoretically optimized one, we simulated their phonon dispersion, which helps to evaluate the dynamical stability. The phonon dispersions for all the experimental and theoretically optimized ACuX compounds in the different space groups (as shown in Table <ref>) are shown in Fig. S2 of the SI<cit.>. For NaCuS (SG: Pna2_1), NaCuSe/Te (SG: P6_3/mmc), and all the KCuX compounds in their respective space groups, the phonon frequencies are found to be positive, confirming the dynamical stability of all these compounds.
A very small imaginary frequency appears at/around the Γ point in the orthorhombic structure of KCuSe (SG: Pnma), which may be due to the limited supercell size used in this calculation. Interestingly, the phonon spectra of the experimentally reported NaCuSe and NaCuTe (SG: P4/nmm) show appreciable imaginary frequencies (see Fig. S2(b and d) of the SI<cit.>), indicating their instability in the tetragonal phase. Most likely, they crystallize in the hexagonal crystal structure, i.e., the P6_3/mmc space group, which is favored by both the formation energy and the phonon dispersion data. We believe the structural characterization of these two systems should be revisited. In summary, based on the formation energies and phonon spectra, we have chosen the seven most stable ternary ACuX structures, i.e., orthorhombic NaCuS and KCuS (SG: Pna2_1), hexagonal NaCuSe/Te and KCuSe/Te (SG: P6_3/mmc), and additionally the orthorhombic phase of KCuSe (SG: Pnma), to further explore their optoelectronic properties.

§.§ Electronic Structure

All the ternary ACuX compounds were theoretically optimized in their respective stable structures to calculate the electronic properties. The prototype crystal structures are shown in Fig. <ref>, and the crystallographic parameters, such as lattice constants and bond lengths, are tabulated in Table <ref>. The optimized structural details are in fair agreement with the available experimentally <cit.> as well as theoretically <cit.> reported data. For ACuSe/Te, which stabilizes in the hexagonal structure, the Cu and Se/Te ions form honeycomb (hexagonal) layers along the c-direction, and between each hexagonal bilayer there exists a triangular lattice of Na/K atoms (see Fig. <ref>(a)). The Na/K atoms are situated on top of the center of the honeycomb lattice, while the Cu/Se/Te atoms are positioned on top of the center of the triangular lattice. In contrast, the lattice arrangement in the orthorhombic structures for both space groups (Pna2_1 and Pnma) is slightly different, as shown in Fig. <ref>(b and c). In both cases, the X-Cu-X units form a linear chain with an angle of 180^∘. The comparative band structure plots with and without spin–orbit coupling (SOC) at the PBE level are shown in Fig. S3 of the SI<cit.>. As evident from Fig. S3, there is no change in either the band gap or the band dispersion for the S- and Se-based compounds. However, for Te, a heavy element, SOC slightly changes the band gap, apart from causing minor band splittings. For instance, E_g for NaCuTe changes from 0.49 eV (w/o SOC) to 0.36 eV (with SOC), and that for KCuTe changes from 0.52 eV (w/o SOC) to 0.41 eV (with SOC) in the P6_3/mmc structure. Our PBE-SOC results for KCuSe and KCuTe corroborate well with other theoretical reports<cit.>. As the SOC effect is not too significant, while the band gap is conventionally underestimated within the PBE functional, we further computed the electronic structures of all the compounds using the HSE06 functional without the SOC effect for a more accurate band gap estimation. Figure <ref> shows the electronic band structures and the optical transition probabilities of all the ACuX compounds using the hybrid (HSE06) functional. All the compounds are direct band gap semiconductors in which the valence band maximum (VBM) and conduction band minimum (CBM) are located at the Γ–point. Table <ref> displays the band gap (E_g) values simulated using both the PBE and HSE06 functionals. We found that E_g lies in the range 0.83 eV to 2.88 eV for the different compounds.
Clearly, the E_g values for NaCuTe (1.21 eV), KCuSe (1.28 eV), and KCuTe (1.62 eV) are the most suitable for photovoltaic applications. It is also observed that the VBM at the Γ-point is doubly degenerate for the crystal structures belonging to hexagonal symmetry, i.e., P6_3/mmc of ACu(Se/Te) (Fig. <ref>(b,c,e,g)). A flat valence band edge is also observed in all the compounds, which can be very helpful for promising carrier transport due to the high effective masses. Figure <ref> shows the orbital projected partial density of states (PDOS) for all the ACuX compounds in the respective structures shown in Fig. <ref>. The PDOS reveals that the VBM mainly consists of the p- and d-orbitals of Cu and the p-orbital of the S/Se/Te atoms at/near E_f for all cases. The CBM is mainly contributed by the s-orbital of Na/K, the p- and d-orbitals of Cu, and the s- and p-orbitals of S/Se/Te, respectively. It is also observed that the electronic states of the alkali metals (Na or K) near E_f show a minimal contribution at the VBM. The contributions of the s-orbitals of the alkali metals decrease near E_f as we move from ACuS to ACuSe and ACuTe. Unlike several other chalcogenide based compounds, the E_g value in the present case first decreases as we go from Na/KCuS to Na/KCuSe, and then increases for Na/KCuTe. For example, E_g for NaCuS, NaCuSe, and NaCuTe are 2.42 eV, 0.83 eV, and 1.21 eV, respectively, and a similar trend is obtained for the KCuX systems as well. This is mainly attributed to the nature of the hybridization between the Cu and chalcogen atoms. The electronegativity values of the different atoms are χ_Na= 0.93, χ_K= 0.82, χ_Cu= 1.9, and χ_S/Se/Te= 2.58/2.55/2.1, respectively. As the chalcogen atoms are more electronegative than the alkali metals, there is always an electronic charge transfer from the Na/K atoms to the S/Se/Te atoms. Similarly, charge is also transferred from the Cu atoms to the S/Se atoms. However, a negligibly small charge transfer between the Cu and Te atoms can be expected because their electronegativities are almost similar (the difference is 0.2). This is also reflected in their orbital PDOS plots. In order to better understand the role of hybridization in dictating the E_g trend, we show a zoomed-in view of the orbital projected PDOS above the CBM for NaCuS, NaCuSe, and NaCuTe, respectively, in Fig. S4(a,b,c) of the SI<cit.>. A careful inspection of these plots clearly differentiates the orbital hybridization between the Cu and S/Se/Te atoms at the CBM side. The electronic states of the p- and d-orbitals of Cu and the p-orbitals of the S atoms mainly contribute to the CBM side of NaCuS. Similarly, the CBM of NaCuSe consists of the s- and d-orbitals of Cu and the s-orbital of Se. However, the nature of the orbital hybridization in NaCuTe is quite different as compared to NaCuS and NaCuSe. Due to the minimal charge transfer between Cu and Te, the orbital contributions from Cu–p, Cu–d, and Te–s are diminished near E_f, indicating only a strong hybridization between Na–s, Cu–s, and Te–p at the CBM side of NaCuTe, causing an increase in the band gap. A similar band gap trend and orbital hybridization between the Cu and Te atoms are also observed in the KCuX compounds (see Fig. S4(d,e,f) of the SI <cit.>) and can be explained on similar grounds. An analogous band gap trend is also observed in other chalcogenide based compounds like CdS, CdSe, and CdTe, as well as ZnS, ZnSe, and ZnTe<cit.>.

§.§ Optical Properties

The band structure calculations established that all the ACuX compounds are direct band gap semiconductors, with the band gap falling in the range 0.83 eV to 2.88 eV.
This motivates us to further study the optical properties of these systems and evaluate their efficacy as solar absorbers. Along with the absorption coefficient, we have calculated another screening parameter, i.e., the spectroscopic limited maximum efficiency (SLME) proposed by Yu et al.<cit.>. The SLME gives an upper bound on the solar efficiency by incorporating the nature/magnitude of the band gap and the absorption coefficient of a particular compound. It is an improved version of the Shockley–Queisser (SQ) efficiency limit. The simulation of the SLME also requires information about the possibility of an optical transition from the VBM to the CBM for all the ACuX compounds. To obtain this, we have calculated the transition probability (p^2) as the square of the transition dipole matrix elements. The transition probabilities for all the ACuX systems are shown in Fig. <ref> (below each band structure plot). Clearly, the direct optical transition from the VBM to the CBM is allowed for all the hexagonal structures of ACuSe/Te due to the finite p^2 values at the high–symmetry point Γ. The optical transitions are allowed due to the same-parity transitions of the electron states from the p–states of the S/Se/Te atoms at the VBM side to the s– and p–states of Cu at the CBM side, and from the d–states of Cu at the VBM to the s–states of S/Se/Te at the CBM. In contrast, the p^2 value is found to be zero at the Γ–point for NaCuS, KCuS, and KCuSe (orthorhombic structures) in Fig. <ref>(a,d and f), indicating optically forbidden transitions from the VBM to the CBM. The absorption coefficient (α) of a given material is a quantifiable descriptor which dictates the penetration extent of a photon (with a particular wavelength) into the material before it gets absorbed. For a suitable solar absorber, a sharp rise in the absorption coefficient is obtained once the incident photon energy gets close to its band gap. α is related to the dielectric function and can be calculated using the following expression:

α(E) = √(2)ω/c [√(ε_1^2(ω)+ε_2^2(ω)) - ε_1(ω)]^1/2

where E is the incident photon energy, ω is the angular frequency related to the photon energy via E=ħω (ħ is the reduced Planck's constant), and c is the speed of light in vacuum. ε_1 and ε_2 are the real and imaginary parts of the dielectric function. Figure <ref>(a,b) shows the absorption spectra of all the ACuX compounds along the x– (left) and z–polarization (right) directions (arising out of the structural anisotropy in the hexagonal and orthorhombic structures). For the orthorhombic systems (NaCuS, KCuS, KCuSe), the absorption spectra along the y–polarization direction are shown in Fig. S5 of the SI<cit.>. The absorption onset for all the compounds is shifted to the band gap obtained from the hybrid HSE06 calculations. Clearly, the absorption coefficients are not the same along the three polarization directions because of the associated structural anisotropy. For example, among the orthorhombic systems, the absorption coefficient of NaCuS and KCuS is higher (∼ 0.6 to 0.9×10^5 cm^-1) along the z–polarization direction than along the x– and y–directions. However, the absorption coefficient of orthorhombic KCuSe is high along the y–polarization direction with respect to the x– and z–directions (see Fig. S5). Similarly, for the hexagonal structures, i.e., KCuSe, KCuTe, NaCuSe, and NaCuTe, the simulated absorption coefficients fall in the range 0.2×10^5 to 0.6×10^5 cm^-1 along the x– and y–polarization directions, with a relatively small contribution arising from the z–direction, as shown in Fig. <ref>(b).
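As a minimal numerical sketch of the expression above (not the VASP post-processing used in this work), the snippet below evaluates α(E) in cm^-1 from tabulated ε_1 and ε_2 on an energy grid; the dielectric function here is a toy placeholder, and the constants HBAR_EVS and C_CM_S are just unit conversions.

```python
import numpy as np

HBAR_EVS = 6.582119569e-16  # reduced Planck constant in eV*s
C_CM_S = 2.99792458e10      # speed of light in cm/s

def absorption_coefficient(energy_ev, eps1, eps2):
    """alpha(E) in cm^-1 from the real (eps1) and imaginary (eps2)
    parts of the dielectric function sampled on an energy grid (eV)."""
    omega = np.asarray(energy_ev) / HBAR_EVS          # angular frequency in s^-1
    modulus = np.sqrt(eps1**2 + eps2**2)              # |eps(omega)|
    # clip tiny negative values caused by numerical noise before the sqrt
    return (np.sqrt(2.0) * omega / C_CM_S) * np.sqrt(np.maximum(modulus - eps1, 0.0))

# Toy dielectric function peaked near 2 eV (placeholder, not DFT output)
E = np.linspace(0.1, 5.0, 500)
eps2 = 4.0 * np.exp(-((E - 2.0) / 0.4) ** 2)
eps1 = 1.0 + 2.0 * (2.0 - E) / ((2.0 - E) ** 2 + 0.16)
alpha = absorption_coefficient(E, eps1, eps2)
print(f"alpha at 2.5 eV ~ {alpha[np.argmin(np.abs(E - 2.5))]:.3e} cm^-1")
```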
Next, we calculated the thickness dependence of the SLME of the ACuX compounds, considering their application in thin film solar cells. We have chosen four compounds, i.e., hexagonal ACu(Se/Te) (A=Na, K), because of their allowed optical transitions and high absorption coefficients in the visible region. The thickness dependence of the SLME is shown in Fig. <ref>(a). Clearly, with increasing thickness, the SLME increases and then saturates beyond a certain thickness. The maximum SLME is found to be 18% for a 10 μm thick NaCuTe film. Similarly, for the rest of the ACuX systems, the SLME lies between ∼10% and 13%. Based on the optical and SLME descriptors, it is reasonable to consider that all the selenide and telluride based ACuX systems can be potential candidates as light absorbers in solar PV devices.

§.§ Potential Application in Photoelectrochemical (PEC) Water Splitting

We further investigate the feasibility of the ternary ACuX compounds for photocatalytic water splitting applications. We calculated the band edge positions with respect to the water redox levels<cit.> using the empirical model proposed by Butler and Ginley<cit.>. According to their model, the valence and conduction band edge positions can be obtained from the following expression,

E_VBE/CBE = E_0 + (χ_A χ_Cu χ_X)^1/3 ± E_g/2

where E_0 is the difference between the normal hydrogen electrode (NHE) and vacuum, whose value is -4.5 eV, the χ_i are the electronegativities of the constituent elements on the Mulliken scale, and E_g is the band gap. Figure <ref>(b) shows the band edge positions for all the stable ACuX compounds. For a compound to be promising for PEC water splitting, the CBM must be located at a more negative potential than the redox potential of H^+/H_2 (0 V vs. NHE), and the VBM must lie at a more positive potential than the redox potential of O_2/H_2O (1.23 V vs. NHE) at ambient conditions<cit.>. As evident from Fig. <ref>(b), all the ternary copper chalcogenides have a well positioned conduction band edge to be used as photocathodes for the hydrogen evolution reaction (HER) in a PEC cell.
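A minimal sketch of this band-edge estimate follows; the Mulliken electronegativity values and the example band gap are illustrative assumptions, not the inputs actually used for Fig. <ref>(b).

```python
# Butler-Ginley band-edge estimate following the expression above.
E0 = -4.5  # difference between the NHE and vacuum levels (eV)

def band_edges(chi_a, chi_cu, chi_x, e_gap):
    """Return (E_VBE, E_CBE) per the expression above: geometric mean of the
    Mulliken electronegativities, shifted by E0 and split by half the gap."""
    chi_geo = (chi_a * chi_cu * chi_x) ** (1.0 / 3.0)
    return E0 + chi_geo + e_gap / 2.0, E0 + chi_geo - e_gap / 2.0

# Assumed Mulliken electronegativities (eV) for Na, Cu, Te and the NaCuTe gap
e_vbe, e_cbe = band_edges(chi_a=2.85, chi_cu=4.48, chi_x=5.49, e_gap=1.21)
print(f"E_VBE = {e_vbe:+.2f} V, E_CBE = {e_cbe:+.2f} V vs. NHE")
# A CBE below 0 V vs. NHE sits above the H+/H2 level: photocathode-like alignment
print("photocathode-like CBE" if e_cbe < 0.0 else "CBE below the H+/H2 level")
```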
§ CONCLUSION

In conclusion, we have systematically investigated the chemical/dynamical stability and the optoelectronic properties of the ternary ACuX (A=Na, K; X=S, Se, Te) chalcogenides using first–principles calculations. The structural stability metrics ensure that NaCuS, KCuS, and KCuSe crystallize in the orthorhombic structure, while NaCuSe/Te and KCuTe stabilize in the hexagonal phase. Interestingly, a few of these compounds show intriguing energetics/phonon trends, which suggest that they stabilize in structures other than those predicted experimentally (one report only). This calls for a revisit of the crystal structure prediction. All the compounds are direct band gap semiconductors having band gaps lying between 0.83 eV and 2.88 eV. The direct optical transitions are forbidden for the orthorhombic systems (NaCuS, KCuS, and KCuSe), while they are allowed for the remaining systems (hexagonal phase). The absorption coefficients of the optically allowed ACuX compounds lie in the range ∼(0.2 to 0.6)×10^5 cm^-1 in the visible region. The highest simulated efficiency, for NaCuTe, is determined to be 18% for an 8 μm thick film. In addition, the favorable straddling of the band edges with respect to the normal hydrogen electrode confirms the suitability of all these compounds as photocathodes for the hydrogen evolution reaction (HER) in the photoelectrochemical (PEC) process. The present study suggests that the ternary alkali metal-based copper chalcogenides can be suitable candidates as light absorbers in PV cells as well as photocathodes for the HER in photocatalytic water splitting applications, warranting further experimental investigation.

§ ACKNOWLEDGEMENTS

G.B. would like to thank the Council of Scientific and Industrial Research (CSIR), India, for providing a senior research fellowship. N.S. acknowledges the financial support from the Khalifa University of Science and Technology under the Emerging Science & Innovation Grant ESIG-2023-004 and the contribution of Khalifa University's high-performance computing and research computing facilities to the results of this research. | http://arxiv.org/abs/2312.16063v1 | {
"authors": [
"Gurudayal Behera",
"Surabhi Suresh Nair",
"Nirpendra Singh",
"K. R. Balasubramaniam",
"Aftab Alam"
],
"categories": [
"cond-mat.mtrl-sci"
],
"primary_category": "cond-mat.mtrl-sci",
"published": "20231226142727",
"title": "Ternary Alkali Metal Copper Chalcogenides ACuX (A= Na, K and X= S, Se, Te): Promising Candidate for Solar Harvesting Applications"
} |
Yalin E. Sagduyu and Tugba Erpek
Virginia Tech, Arlington, VA, USA. Email: {ysagduyu, terpek}@vt.edu

Adversarial Attacks on LoRa Device Identification and Rogue Signal Detection with Deep Learning
January 14, 2024
===============================================================================================

Low-Power Wide-Area Network (LPWAN) technologies, such as LoRa, have gained significant attention for their ability to enable long-range, low-power communication for Internet of Things (IoT) applications. However, the security of LoRa networks remains a major concern, particularly in scenarios where device identification and classification of legitimate and spoofed signals are crucial. This paper studies a deep learning framework to address these challenges, considering LoRa device identification and legitimate vs. rogue LoRa device classification tasks. A deep neural network (DNN), either a convolutional neural network (CNN) or feedforward neural network (FNN), is trained for each task by utilizing real experimental I/Q data for LoRa signals, while rogue signals are generated by using kernel density estimation (KDE) of the signals received from rogue devices. Fast Gradient Sign Method (FGSM)-based adversarial attacks are considered for the LoRa signal classification tasks using deep learning models. The impact of these attacks is assessed on the performance of two tasks, namely device identification and legitimate vs. rogue device classification, by utilizing separate or common perturbations against these signal classification tasks. The results presented in this paper quantify the level of transferability of adversarial attacks across different LoRa signal classification tasks as a major vulnerability and highlight the need to make IoT applications robust to adversarial attacks.

LoRa, IoT, deep learning, wireless signal classification, device identification, rogue signal detection, adversarial attacks, adversarial machine learning.

§ INTRODUCTION

LoRa communication technology offers long-range connectivity, low power consumption, extended battery life, and scalability, making it a cost-effective solution for diverse Internet of Things (IoT) applications based on Low Power Wide Area Networks (LPWAN), such as asset tracking, industrial automation, surveillance, smart city, smart home, and supply chain management <cit.>. While LoRa offers several advantages, it also poses various security challenges. LoRa networks may be vulnerable to unauthorized access such that adversaries can potentially gain access to the network, intercept or manipulate data, and disrupt the communication between devices. LoRa signals can be intercepted and eavesdropped over the air. In replay attacks, adversaries may capture and retransmit legitimate LoRa signals to deceive the network, or they may spoof or impersonate legitimate devices of a LoRa network to gain unauthorized access, inject malicious data, or disrupt network operations. To that end, it is essential to characterize the attack surface for LoRa. Machine learning plays a pivotal role in wireless signal classification applications within LoRa networks. The unique advantage of deep learning lies in its ability to automatically extract high-level features from raw signal data, enabling accurate and efficient classification <cit.>.
In the context of LoRa, deep neural network (DNN) models such as convolutional neural networks (CNNs) and feedforward neural networks (FNNs) can effectively differentiate between different types of signals by learning intricate patterns and representations, given the complexity and variability of wireless signals. By training on real-world in-phase and quadrature (I/Q) data for LoRa signals, DNNs, fostered by recent computational advances, can capture nuanced signal characteristics, leading to improved classification accuracy <cit.>. Wireless signals can be analyzed for multiple tasks at a receiver <cit.>. In this paper, we consider two signal classification tasks for LoRa networks, namely the task of distinguishing between devices through RF fingerprinting of the received signals, and the task of detecting spoofed signals that mimic the LoRa signal characteristics. Synthetic signal generation for spoofed communications by adversaries serves the purpose of deceiving spectrum monitors and potentially bypassing authentication mechanisms <cit.>. We consider generating these synthetic signals with the Kernel Density Estimation (KDE) method. KDE estimates the probability density function (PDF) of the signals received from rogue devices. By modeling the distribution of legitimate LoRa signals, KDE generates synthetic signals that closely mimic the statistical properties of the original signals. This way, adversaries can potentially infiltrate the authentication process and compromise the overall security of the LoRa network. The complex decision space of wireless signal classification is sensitive to variations in the test input samples and is therefore vulnerable to adversarial attacks (evasion attacks) <cit.>. Adversarial attacks craft small, malicious perturbations in the input signals at test (inference) time to deceive the DNN models, resulting in incorrect device identification or the misclassification of legitimate and rogue signals, and potentially leading to the infiltration of user authentication mechanisms based on RF fingerprinting. Addressing the challenges posed by adversarial attacks is crucial to ensure the integrity and security of LoRa networks, enabling trustworthy and dependable wireless communications in various IoT applications. In this paper, we study adversarial attacks on wireless signal classification tasks for LoRa. We consider untargeted attacks that aim to disrupt the overall model performance without specifying a target class for each task. We use the Fast Gradient Sign Method (FGSM) to generate adversarial inputs by calculating the gradient of the loss function with respect to the input data and perturbing the input in the direction of the gradient sign to maximize the model's loss <cit.>. Fig. <ref> shows the system model. There are two LoRa devices transmitting and potentially two rogue devices mimicking these transmissions. The LoRa receiver performs two tasks, namely, distinguishing between legitimate transmissions from the two LoRa devices, and distinguishing between legitimate and rogue LoRa device transmissions. There is also an adversary that transmits perturbation signals as part of the adversarial attack. We assess the impact of the adversarial attack on the performance of these two tasks. To that end, we analyze the transferability of adversarial attacks, in the sense that adversarial examples crafted to deceive one task's model can also mislead other task models, even if those models were trained on different datasets.
In other words, the adversarial perturbations generated for one model tend to generalize and remain effective across multiple models, as the underlying vulnerabilities exploited by adversarial attacks are not unique to a specific model but can be present in multiple models due to similar characteristics in their decision boundaries or loss landscapes. The effects of wireless channels and surrogate models on transferability have been studied in <cit.> for a single task. In this paper, we analyze the extent to which the adversarial attack on one task transfers to the other task that operates on the same LoRa signal data. We show that the attack performance drops significantly when there is a mismatch between the model under attack and the model for which the perturbation was derived. As a remedy, we utilize a hybrid approach that generates a common perturbation by utilizing the gradients of the loss functions of multiple classifiers. We show that this attack is highly effective regardless of the classifier model and the type of DNN under attack. The rest of the paper is organized as follows. Section <ref> describes LoRa device identification. Section <ref> studies the detection of rogue LoRa signals. Section <ref> presents adversarial attacks on the LoRa signal classification tasks. Section <ref> concludes the paper.

§ DEVICE IDENTIFICATION FROM LORA SIGNALS

We use the I/Q data reported in <cit.>. This data is collected from real LoRa devices, namely Pycom IoT devices, as transmitters and software-defined radios (SDRs), namely USRP B210 receivers, as receivers. The LoRa devices operate in an outdoor environment at a center frequency of 915MHz, used for recording the received signals sampled at 1MS/s. The LoRa configuration includes the Raw-LoRa mode, a channel bandwidth of 125kHz, a spreading factor of 7, a preamble of 8, a TX power of 20dBm, and a coding rate of 4/5. The outdoor data in this dataset is collected over 5 consecutive days with 10 transmissions, each of length 20 sec., for each IoT transmitter. The distance between the transmitter and receiver is set to 5m for each experiment. One task we consider with the use of deep learning is to distinguish between transmissions from the two LoRa devices. Each data sample as input to the DNN is of size (2,32), corresponding to 32 I/Q (wireless signal) samples. 5000 samples are generated. 80% of the samples are used for training and 20% of them are used for testing. We consider both CNN and FNN classifiers for this task. Their architectures are given in Table <ref>. The FNN has 6,522 parameters and the CNN has 70,074 parameters. In the training of each DNN, we use categorical cross-entropy as the loss to be minimized and Adam <cit.> as the optimizer. Table <ref> shows the accuracy of device identification (classification of Device 1 vs. Device 2) using legitimate LoRa device transmissions. In all cases, the accuracy is high. We note that the CNN improves performance compared to the FNN in terms of the average accuracy as well as the accuracy of detecting Device 1 or Device 2. One plausible instantiation of this training setup is sketched below.
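The exact layer dimensions behind Table <ref> are not spelled out in the text, so the layer sizes below are hypothetical (the parameter count will not match Table <ref>); the (2,32) input shape, two output classes, categorical cross-entropy loss, Adam optimizer, and 80/20 split follow the description above. Random data stands in for the captured I/Q samples.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(num_classes=2):
    """A small CNN over (2, 32) I/Q inputs; layer sizes are illustrative."""
    model = models.Sequential([
        layers.Input(shape=(2, 32, 1)),               # I/Q sample as a 2x32 "image"
        layers.Conv2D(16, kernel_size=(1, 3), activation="relu"),
        layers.Conv2D(16, kernel_size=(2, 3), activation="relu"),
        layers.Flatten(),
        layers.Dropout(0.1),
        layers.Dense(32, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Dummy stand-in data mirroring the 5000-sample, 80/20 split described above
x = np.random.randn(5000, 2, 32, 1).astype("float32")
y = tf.keras.utils.to_categorical(np.random.randint(0, 2, size=5000), 2)
model = build_cnn()
model.fit(x[:4000], y[:4000], validation_data=(x[4000:], y[4000:]),
          epochs=1, batch_size=64, verbose=0)
```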
§ SIGNAL SPOOFING BY ROGUE DEVICES

For signal spoofing, the adversary uses KDE as a non-parametric technique to estimate the PDF of a random variable based on observed data. For each observed data point, a kernel function is centered at that point, and its contribution to the PDF estimate is calculated based on the chosen kernel and bandwidth. The kernel function determines the shape of the kernel. The bandwidth determines the width of the kernel, affects the smoothness, and balances between capturing fine details and avoiding oversmoothing of the estimated PDF. The kernel contribution is a scaled version of the kernel function evaluated at the given data point. Then, the individual kernel contributions are summed up to obtain the final estimated PDF. This summation process ensures that the estimated PDF is smooth and continuous. By applying KDE to observed data, the resulting estimate provides a representation of the underlying probability distribution. KDE allows for the generation of synthetic samples that follow the statistical properties of the observed data, making it a useful tool for generating synthetic signals that closely resemble legitimate LoRa signals. Given a set of observed data points (x_1, x_2, …, x_n), the KDE estimate f(x) of the underlying PDF at a point x is calculated as

f(x) = 1/nh ∑_i=1^n K((x - x_i)/h),

where K is the kernel function, h is the bandwidth, x_i represents each observed data point, and n is the total number of observed data points. The KDE estimate f(x) at a particular point x is obtained by summing up the scaled contributions of the kernel function evaluated at each observed data point x_i. The scaling factor 1/(nh) ensures that the estimated PDF integrates to 1 over the entire domain. To generate the spoofed signals, we use the Gaussian distribution as the kernel function and 10^-3 as the bandwidth. Rogue Device 1 (which mimics legitimate Device 1) transmits with more than 2dB difference from legitimate Device 1, while rogue Device 2 (which mimics legitimate Device 2) transmits with less than 1dB difference from legitimate Device 2. Each rogue device has up to a π/30 phase difference from the respective legitimate device. Fig. <ref> shows the constellations of the legitimate and rogue devices 1 and 2. We use the Jensen-Shannon divergence (JSD) to evaluate the fidelity of the synthetic data generated by KDE. The JSD quantifies the similarity between two probability distributions by calculating the average of the Kullback-Leibler (KL) divergences between each distribution and the average distribution. Given the probability distributions P_i and P̂_i for legitimate device and rogue device i=1,2, respectively, the JSD is computed as

JSD(P_i, P̂_i) = 1/2 (KL(P_i || M_i) + KL(P̂_i || M_i)),

where KL(P||Q) = ∑_x ∈𝒳 P(x) log(P(x)/Q(x)) represents the KL divergence between discrete probability distributions P and Q defined on the same sample space 𝒳, and M_i = 1/2 (P_i+P̂_i). The JSD ranges between 0 and 1, with 0 indicating that the two distributions are identical, and 1 indicating that the distributions are completely dissimilar. Therefore, a lower JSD value signifies a higher fidelity of the synthetic data generated by KDE with respect to the original observed data. For the test data, JSD(P_1, P̂_1) = 0.0092 for Device 1 and JSD(P_2, P̂_2) = 0.0099 for Device 2, such that the average JSD is 0.0096 over both devices. This low JSD suggests that the spoofed signals closely resemble the original LoRa signals in terms of their statistical characteristics, indicating a high fidelity of LoRa signal spoofing by rogue devices. A minimal sketch of this spoofing-and-fidelity-check pipeline is given below.
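The sketch assumes a synthetic stand-in for the captured I/Q samples; the Gaussian kernel and the 10^-3 bandwidth follow the text, while the histogram-based JSD estimator is an illustrative choice.

```python
import numpy as np
from sklearn.neighbors import KernelDensity
from scipy.stats import entropy  # entropy(p, q) computes KL(p || q)

rng = np.random.default_rng(0)

# Stand-in for captured legitimate I/Q samples (N x 2 array of I and Q values)
legit_iq = 0.05 * rng.standard_normal((5000, 2)) + np.array([0.5, 0.5])

# Fit a Gaussian KDE with the bandwidth used above and draw spoofed samples
kde = KernelDensity(kernel="gaussian", bandwidth=1e-3).fit(legit_iq)
spoofed_iq = kde.sample(5000, random_state=0)

def jsd(samples_p, samples_q, bins=50):
    """Jensen-Shannon divergence between 2-D histogram estimates of two clouds."""
    lo = np.minimum(samples_p.min(axis=0), samples_q.min(axis=0))
    hi = np.maximum(samples_p.max(axis=0), samples_q.max(axis=0))
    edges = [np.linspace(lo[d], hi[d], bins + 1) for d in range(2)]
    p, _ = np.histogramdd(samples_p, bins=edges)
    q, _ = np.histogramdd(samples_q, bins=edges)
    p = (p / p.sum()).ravel() + 1e-12   # smooth empty bins
    q = (q / q.sum()).ravel() + 1e-12
    m = 0.5 * (p + q)
    return 0.5 * entropy(p, m) + 0.5 * entropy(q, m)

print(f"JSD(legit, spoofed) = {jsd(legit_iq, spoofed_iq):.4f}")  # ~0 => high fidelity
```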
The DNN architectures shown in Table <ref> are also used for training classifiers to detect the spoofed signals. In Section <ref>, we defined the task of classifying Device 1 vs. Device 2 using legitimate LoRa device transmissions. In this section, we consider three more tasks: the task of classifying legitimate vs. rogue devices using Device 1 and Device 2 transmissions, the task of classifying Device 1 vs. Device 2 from legitimate and rogue device transmissions, and the task of classifying Device 1 vs. Device 2 using rogue device transmissions only. Tables <ref> and <ref> show the accuracy when the CNN and FNN are used as the DNN, respectively. For each task, the accuracy is high, both on average and for the case of detecting devices of individual labels (legitimate device vs. rogue device, or Device 1 vs. Device 2). Overall, the accuracy is higher when devices are classified as legitimate vs. rogue compared to the case when devices are classified as Device 1 vs. Device 2. For the latter case of device classification, the accuracy drops when both legitimate and rogue device transmissions are used instead of using either legitimate or rogue device transmissions separately.

§ ADVERSARIAL ATTACK

FGSM is an effective technique for crafting adversarial examples against deep learning models. It is a one-step attack that generates adversarial perturbations by leveraging the gradient information of the loss function with respect to the input data. The FGSM attack aims to maximize the loss of the model by perturbing the input data in the direction of the gradient sign. The perturbation of the FGSM attack is generated by selecting an input sample and calculating the gradient of the loss function with respect to the input data by backpropagating through the model. The gradient information determines the direction in which the input needs to be perturbed. This direction is computed as the sign of the gradient. Then, the gradient sign is scaled by a small magnitude (ϵ) to control the strength of the perturbation. The value of ϵ determines the trade-off between the strength of the attack and the perceptibility of the perturbation. To launch the attack, the scaled perturbation is added to the original input sample. This is achieved by element-wise addition, while ensuring that the perturbed input remains within the permissible range of values controlled by the maximum transmit power of the adversary. Given the DNN model with parameters θ, an input sample x, and its true label y, the objective of the untargeted adversarial attack is to find a perturbation δ that maximizes the loss function while satisfying certain constraints. This optimization is written as

max_δ ℒ(x+δ, y, θ)

subject to
* ‖δ‖_p ≤ ϵ_max: the magnitude of the perturbation δ is upper-bounded by ϵ_max.
* x+δ remains within the valid input range depending on the transmit power and phase shift of the device.

As the non-convex optimization problem in (<ref>) is hard to solve, FGSM solves the optimization problem for an untargeted adversarial attack by linearizing the loss function with respect to the input perturbation. This linear approximation allows for an efficient computation of the perturbation that maximizes the loss. Specifically, the FGSM attack generates an adversarial example x_adv by perturbing the input in the direction that maximizes the loss function with respect to the true label. The loss function L(x, y, θ) is computed for the input x with respect to the true label y. The gradient of the loss function is computed with respect to the input as ∇_x L(x, y, θ). This gradient is normalized by taking the sign of its elements as sign(∇_x L(x, y, θ)). The sign of the gradient is scaled by a small value ϵ, typically referred to as the step size or perturbation magnitude. Then, the perturbation for the untargeted attack is computed as δ = ϵ sign(∇_x L(x, y, θ)), where the sign function extracts the sign of the gradient. The adversarial example x_adv is generated by adding the perturbation to the original input such that x_adv = x + ϵ sign(∇_x L(x, y, θ)). This perturbed signal x_adv is transmitted to fool the DNN model into a misclassification to any label. A minimal sketch of this computation is given below.
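The model below is a toy stand-in (not the classifiers of Table <ref>), used only to make the gradient-sign step concrete.

```python
import torch
import torch.nn as nn

# Toy stand-in for a trained classifier over flattened (2, 32) I/Q inputs
model = nn.Sequential(nn.Flatten(), nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

def fgsm_perturbation(x, y, epsilon):
    """Untargeted FGSM: epsilon times the sign of the input gradient of the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    return epsilon * x.grad.sign()

x = torch.randn(8, 2, 32)          # batch of I/Q samples
y = torch.randint(0, 2, (8,))      # true labels
x_adv = x + fgsm_perturbation(x, y, epsilon=0.05)   # transmitted adversarial signal
print((model(x_adv).argmax(1) != y).float().mean()) # fraction of flipped predictions
```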
We define the classifier that classifies received signals as coming from a legitimate or rogue LoRa device as 'Classifier 1', and the classifier that classifies received (legitimate or rogue) signals as coming from LoRa Device 1 or Device 2 as 'Classifier 2'. The attack success probability is the probability of misclassifying input signals without specifying any target label. We evaluate the attack success probability as a function of the perturbation-to-signal ratio (PSR) that determines the upper bound on the perturbation magnitude, ϵ. Fig. <ref> and Fig. <ref> show the attack success probability for the untargeted attack on the CNN-based Classifier 1 and Classifier 2, respectively. Fig. <ref> and Fig. <ref> show the attack success probability for the untargeted attack on the FNN-based Classifiers 1 and 2, respectively. We consider three cases of perturbation:
* Perturbation for Classifier 1: the perturbation is determined to maximize the loss of Classifier 1 according to (<ref>).
* Perturbation for Classifier 2: the perturbation is determined to maximize the loss of Classifier 2 according to (<ref>).
* Perturbation for Classifier 1+2: the perturbation is determined to maximize the weighted loss of Classifiers 1 and 2 according to

δ = ϵ sign(∑_i=1^2 w_i ∇_x L_i(x, y_i, θ_i)),

where w_i is the weight for Classifier i = 1,2 (such that 0 ≤ w_i ≤ 1 and w_1+w_2 = 1) and L_i is the loss function of Classifier i with parameters θ_i. For the numerical results, we set w_1=w_2=0.5 (a minimal sketch of this hybrid perturbation is given at the end of this section).

We also evaluate the effectiveness of using Gaussian noise as the perturbation signal as a benchmark. In this case, random perturbations are added to the signal independently of the input samples. When launching an untargeted attack on a classifier using a perturbation designed specifically for that same classifier, the attack success probability is high for both Classifier 1 and Classifier 2. The best attack performance is achieved when the attack is launched on the CNN-based Classifier 1 and the perturbation is determined specifically for that classifier. However, if there is a mismatch between the attacked model and the model for which the perturbation was generated, the attack success probability diminishes. The extent of this decrease varies depending on the classifier under attack (with a substantial drop for Classifier 1 and a smaller drop for Classifier 2). With a hybrid attack (on 'Classifier 1+2'), the effects of this mismatch are balanced in terms of attack transferability, enabling the adversary to employ a common perturbation that is broadcast (with a single transmission) to achieve a high attack success against each classifier. In all cases, the use of Gaussian noise as the perturbation signal for brute-force jamming is ineffective in reducing the classifier accuracy significantly. When we compare the performance of the CNN classifier vs. the FNN classifier, we observe that the attack success probability is slightly higher for attacks designed against the CNN classifier.
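As referenced above, a minimal sketch of the hybrid 'Classifier 1+2' perturbation follows, again with toy stand-in models rather than the trained classifiers of Table <ref>.

```python
import torch
import torch.nn as nn

# Toy stand-ins for Classifier 1 (legitimate vs. rogue) and Classifier 2 (Device 1 vs. 2)
def make_clf():
    return nn.Sequential(nn.Flatten(), nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))

clf1, clf2 = make_clf(), make_clf()
loss_fn = nn.CrossEntropyLoss()

def hybrid_fgsm(x, y1, y2, epsilon, w1=0.5, w2=0.5):
    """Common perturbation from the weighted sum of both classifiers' loss gradients."""
    x = x.clone().detach().requires_grad_(True)
    loss = w1 * loss_fn(clf1(x), y1) + w2 * loss_fn(clf2(x), y2)
    loss.backward()
    return epsilon * x.grad.sign()

x = torch.randn(8, 2, 32)
y1 = torch.randint(0, 2, (8,))   # legitimate/rogue labels for Classifier 1
y2 = torch.randint(0, 2, (8,))   # device labels for Classifier 2
x_adv = x + hybrid_fgsm(x, y1, y2, epsilon=0.05)   # single broadcast perturbation
```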
§ CONCLUSION

We addressed the security concerns associated with LoRa networks, which provide long-range and low-power communication capabilities for IoT applications. We developed a deep learning framework for the two tasks of device identification and classification of legitimate vs. spoofed signals in LoRa networks. By employing a CNN or FNN as the DNN for these tasks and using real experimental I/Q data for LoRa signals, along with KDE for generating spoofed signals by rogue devices, we studied the effectiveness of FGSM-based untargeted adversarial attacks on these LoRa signal classification tasks. We showed that these attacks are highly effective in reducing the classifier accuracy, especially when the perturbation is determined for the particular classifier under attack. In addition, we found that a common perturbation can be effectively crafted by the adversary to achieve a high attack success against each classifier simultaneously. Our results provided insights into the transferability of adversarial attacks, emphasized the vulnerability of LoRa networks to such attacks in IoT systems, and highlighted the need to defend against these attacks. | http://arxiv.org/abs/2312.16715v1 | {
"authors": [
"Yalin E. Sagduyu",
"Tugba Erpek"
],
"categories": [
"cs.CR",
"cs.AI",
"cs.LG",
"cs.NI",
"eess.SP"
],
"primary_category": "cs.CR",
"published": "20231227204928",
"title": "Adversarial Attacks on LoRa Device Identification and Rogue Signal Detection with Deep Learning"
} |
Computing Gerber-Shiu function in the classical risk model with interest using collocation method
Zan Yu, Lianzeng Zhang
School of Finance, Nankai University, Tianjin 300350, China
===================================================================================================================

The effectiveness of advertising in e-commerce largely depends on the ability of merchants to bid on and win impressions for their targeted users. The bidding procedure is highly complex due to various factors such as market competition, user behavior, and the diverse objectives of advertisers. In this paper we consider the problem at the level of user timelines instead of individual bid requests, manipulating full policies (i.e. pre-defined bidding strategies) and not bid values. In order to optimally allocate policies to users, typical multiple treatment allocation methods solve knapsack-like problems which aim at maximizing an expected value under constraints. In industrial contexts such as online advertising, we argue that optimizing for the probability of success is a more suitable objective than expected value maximization, and we introduce the algorithm that aims at finding the policy allocation which is the most likely to outperform a fixed reference policy. Finally, we conduct comprehensive experiments both on synthetic and real-world data to evaluate its performance. The results demonstrate that our proposed algorithm outperforms conventional expected-value maximization algorithms in terms of success rate.

§ INTRODUCTION

Optimizing marketing effectiveness relies on using individualized bidding policies, exploiting the fact that each user responds differently. A policy may include a set of rules or actions over an extended period of time, e.g., cash bonuses, promotions, and display ads shown to consumers on online platforms. Without loss of generality, we take the narrow view of bidding for display advertising in order to ground our research in a real-life application. In this context, the task at hand is to specify a full bidding strategy (the policy) on the future advertisement opportunities for each given user during a given time period. In practice, it is typical to have a fixed budget allocated to a campaign. From an advertising perspective, a bidding strategy must maximize the total expected revenue while ensuring that the expected total cost does not exceed a specified budget. Usually, this problem is modeled as a multiple choice knapsack problem <cit.> with the objective to select at most one item (bid policy) for each user such that the sum of the weights (expected costs) of the selected items does not exceed the capacity (budget) while the total reward (expected revenue) is maximized. This problem is known to be NP-hard, although it can be tackled with mixed integer linear programming or through Lagrangian relaxation <cit.>. From a causal perspective, it is classical to consider every individual ad as a treatment, and the optimization problem's goal is to maximize the total causal effect of these treatments by correctly assigning treatments to users. There exist various approaches for individual treatment assignment that differ by the objective function they optimize: learning models to predict either outcomes, causal effects, or directly the optimal treatment assignment. <cit.> compare these approaches analytically and show that the assignment learners optimize the bias-variance tradeoff with respect to decision-making errors.
Optimization at the opportunity – or bid – level, which we refer to as bid by bid optimization, requires attributing each observed reward to the action that actually caused it, e.g. each conversion must be attributed to a shown ad. This attribution problem is very complex as there are usually several ads displayed in the few hours preceding each conversion <cit.>. It causes fundamental problems in the estimation of the causal effects and makes bid by bid optimization extremely difficult in practice. Furthermore, display advertising campaigns, like many other online systems, are operated under several business and technical constraints. In particular, it is typical for an advertising campaign to have a budget constraint. Several algorithms allow adapting bid by bid optimization techniques to such constraints <cit.>. While these algorithms have their merits and are largely deployed in practice, they are, however, poorly suited for causal bid by bid methods. This is because (a) typical causal methods inherently suppose the absence of causal interaction between the treatment units; such an assumption is in general violated when mixing causal methods for bid by bid optimization with budget pacing; and (b) the overall methodology needs to trade off marginal value and marginal future total cost <cit.>, which is arguably intractable at the bid level. Our first idea is to reformulate the problem at the user timeline level (i.e. considering all the bid requests and subsequent events relative to a user along a given time period), which implies considering entire policies instead of individual bids. With this new formulation, the optimal policy allocation search is framed as a multiple treatment allocation problem, and the causal effects (cost and value) of policies are much easier to estimate than those of individual bids. Our approach is not to be understood as in competition with the usual bid by bid design approaches <cit.> but rather as complementary. Indeed, any bid by bid design approach could be included as one of the candidate policies we wish to choose from when allocating policies to users with our methodology. If a bid by bid design policy happens to be globally optimal, our method will simply conclude that the optimal policy allocation consists in assigning this policy to every user. However, we claim that searching for the policy allocation function which maximizes an expected value under an expected cost constraint (which is typically done in treatment allocation problems) is not always the best objective. In a large organization, it is often necessary to have guidelines that allow for consistent decision-making regarding product design and improvements. Without such guidelines, individuals cannot handle trade-offs between different quantities (for example, quality and volume) consistently across the whole organization. One may think about designing medication (which should be efficient but also avoid negative side-effects) or electrical batteries (which should have a big enough capacity while not relying too much on rare materials). This leads to the definition of a success across organizations; e.g., in online advertising, it corresponds to increasing the generated value without increasing the cost with respect to a reference outcome. Taking this as a premise, the (constrained) maximization of a single quantity, such as revenue, is no longer the right criterion, as it does not account for the uncertainty underlying the phenomenon at play, nor does it account for what will be considered a success.
This motivates the focus on finding the policy allocation resulting in the highest probability of success. While every metric has its pros and cons, we believe a focus on success probability, with a very flexible notion of success, is of particular operational interest; see Fig. <ref> for an illustrative example (we refer to Section <ref> for a detailed description). In summary, this work presents the following contributions:
* We formally propose the idea of framing the optimization problem at the policy level instead of focusing on bid by bid design, and mathematically formalize both the expected value maximization and the success probability maximization problems.
* We develop a novel customized solution to address the specificities of the success probability optimization problem.
* Finally, we present a series of numerical experiments which were conducted on both synthetic and real-world data, showing that our approach outperforms traditional value maximization methods in terms of success rate guarantees.

§ PROBLEM FORMULATION

Preliminary considerations. Throughout this section, we will implicitly refer to a given time period τ of length Δ t, i.e. τ = [t_0, t_0 + Δ t]. We consider a given advertiser Adv who has a fixed budget C to spend over period τ.

Set of candidate policies. We assume given a set of K candidate policies Π = {π_0, π_1, …, π_K-1}, each encapsulating a bidding strategy that may be applied by Adv to each user consistently throughout the period τ. The reference policy π_0 is the default bidding strategy used by Adv (typically corresponding to the strategy which is already rolled out in production for this advertiser). This set of policies Π can be thought of as a collection of potential treatments in a multiple treatment allocation problem. Note that we do not consider treatments at the level of bidding opportunities here, but at the level of an extended time period, during which we apply policies, or bidding strategies, which each have an integrated way to decide how to bid on each user for all the opportunities that will arise during period τ.

Random variables and potential outcomes. Considering the above setup, and with respect to any given user u targetable by Adv, we define the following random variables:
* 𝐗∈𝒳⊂ℝ^d contains a snapshot of the features of u captured at time t_0,
* 𝐘 = (Y^v, Y^c) ∈𝒴⊂ℝ_+^2 contains respectively the value generated by u in favor of Adv during period τ and the cost Adv spent to advertise to u.

For any π∈Π, we denote 𝐘(π) = (Y^v(π), Y^c(π)) the potential outcomes <cit.> we would have observed had π been applied to u during τ. In what follows, we consider that the tuple (𝐗, 𝐘(π_0), …, 𝐘(π_K-1)) has an underlying probability distribution ℙ, which we will overload for simplicity to also designate its marginals and conditionals. All expectation notations 𝔼 will refer implicitly to ℙ.

Factuals and counterfactuals. Assuming that we apply policy π_u∈Π to a given user u during τ, we denote 𝐲_u = (y^v_u, y^c_u) the corresponding realization of the outcome variable 𝐘, and {𝐲_u(π)}_π∈Π the corresponding realizations of the potential outcome variables {𝐘(π)}_π∈Π. In that case, 𝐲_u = 𝐲_u(π_u) is called the observed factual outcome and the {𝐲_u(π)}_π∈Π∖{π_u} are the un-observed counterfactual outcomes.

Population random variables. Let 𝒰 = {1, …, N} be the set of users who are targetable by Adv during period τ.
We have access to a randomized controlled trial (RCT) – in this case also called an online controlled experiment or A/B test – on population 𝒰 during this period, randomly assigning to each of those N users the K potential policies in Π. Formally, let {K_u}_u∈𝒰 be N i.i.d. uniform categorical variables with values in {0, …, K-1}. Each u∈𝒰 is assigned to the policy π_K_u during τ, resulting in the definition of the collection {(𝐗_u, 𝐘_u)}_u∈𝒰, where (𝐗_u, 𝐘_u) = (𝐗_u, 𝐘_u(π_K_u)). We will denote ℙ (we drop the reference to 𝒰 in ℙ for simplicity) the probability distribution of {(𝐗_u, 𝐘_u)}_u∈𝒰. Lastly, we assume that, in expectation, Adv exactly spends their advertising cost budget C during τ had they assigned the default policy π_0 to every user, i.e.

𝔼[∑_u∈𝒰 Y^c_u(π_0)] = C.

§.§ Expected value maximization problem

§.§.§ At the user level

In the setup we introduced, one aim may be to find an optimal policy allocation, i.e. a mapping from 𝒳 to policies from Π, so that the expected total value generated in favor of Adv is maximized while respecting (in expectation) the total budget constraint. Formally, we are looking for a solution ϕ^*:𝒳→Π to the problem:

max_ϕ∈Π^𝒳 𝔼[∑_u∈𝒰 Y_u^v(ϕ(𝐗_u))] s.t. 𝔼[∑_u∈𝒰 Y_u^c(ϕ(𝐗_u))] ≤ C.

In practice, Π^𝒳 is very large and hard to explore efficiently, making (<ref>) a difficult problem, especially since it involves the estimation of K potential outcomes in parallel. A crucial observation is that the problem may be simplified by reducing it to a partition of the space 𝒳, which leads to a reparametrization of (<ref>), as explained in the next subsection.

§.§.§ Assuming a given partitioning of the user space

We consider given a partition function γ:𝒳→𝒢 where 𝒢 = {1, …, M} contains the indexes of the partition components (or buckets). Reasoning at the bucket level instead of the user level is practical in causal estimation setups since it enables to circumvent the fundamental problem of causal inference <cit.> and is more compliant with privacy restrictions <cit.>. A reasonable partitioning can be chosen based on domain knowledge or learned recursively with causal trees through heterogeneous treatment effect estimation <cit.>. Given the partition function γ, we propose to simplify problem (<ref>): instead of searching through all ϕ's in Π^𝒳, we restrict our search to the allocations of the form ψ∘γ where ψ∈Π^𝒢. In short, we look for allocation functions that assign all users belonging to the same bucket g∈𝒢 to the same policy π∈Π. Formally, this leads to the reparametrized problem, where we are looking for a solution ψ^* : 𝒢→Π to the problem:

max_ψ∈Π^𝒢 𝔼[∑_u∈𝒰 Y_u^v(ψ(G_u))] s.t. 𝔼[∑_u∈𝒰 Y_u^c(ψ(G_u))] ≤ C,

where G_u=γ(𝐗_u) for all u∈𝒰.

§.§.§ Solving the expected value maximization problem

The value expectation maximization problem formalized in (<ref>) may be solved using mixed integer linear programming or Lagrangian relaxation approaches, which make the problem tractable in practice despite being NP-hard <cit.>. Nevertheless, the knapsack formulation remains a proxy to the marketing problem and its solution does not always align with the business goal. A minimal solver sketch for this reparametrized problem is given below.
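The sketch uses SciPy's MILP interface on synthetic value/cost estimates (placeholders, not estimates from real campaign data) with hard, one-policy-per-bucket allocations; column 0 plays the role of the reference policy π_0.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

rng = np.random.default_rng(0)
M, K = 20, 4                                  # buckets x candidate policies
value = rng.uniform(1.0, 2.0, (M, K))         # estimated expected value per (bucket, policy)
cost = rng.uniform(0.5, 1.5, (M, K))          # estimated expected cost per (bucket, policy)
C = cost[:, 0].sum()                          # budget: expected spend of the reference policy

# Binary variable z[g, k] = 1 iff bucket g is assigned policy k
one_policy_per_bucket = LinearConstraint(np.kron(np.eye(M), np.ones(K)), 1, 1)
budget = LinearConstraint(cost.ravel()[None, :], -np.inf, C)

res = milp(c=-value.ravel(),                  # milp minimizes, so negate the value
           constraints=[one_policy_per_bucket, budget],
           integrality=np.ones(M * K),
           bounds=Bounds(0, 1))
psi = res.x.reshape(M, K).round().argmax(axis=1)   # chosen policy index per bucket
print(f"expected value: {-res.fun:.2f}, expected cost: {cost.ravel() @ res.x:.2f} (budget {C:.2f})")
```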
Remark 1 – mix of A and B rollout If the number of buckets is M = 1, we assign all users to the same policy. This corresponds to a typical rollout decision in an online advertising company: we are A/B testing multiple policies, then depending on the results choosing which one should be rolled out. Our setup allows for a rollout of a mix of the tested policies, given by the function ψ.

Remark 2 – relaxing the allocation space We can relax problems (<ref>) and (<ref>) by allowing for soft allocations, i.e. mappings from 𝒳 (resp. 𝒢) to Δ = Δ(Π), where Δ denotes all categorical distributions with values in Π: Δ := {(p(k))_k∈{0,…,K-1}∈ [0,1]^K s.t. ∑_k p(k) = 1}. For ψ∈Δ^𝒢 and g∈𝒢, and for convenience of notation, we will refer to the kth component of ψ(g) – i.e. the probability for ψ to assign a user in bucket g to policy π_k – as ψ(g, k).

§.§ Success probability maximization problem
In this section, we will focus on the case where we are given a partitioning of the user space and consider the more general soft allocation setup presented in Remark 2 at the end of the previous section. Instead of searching for the allocation that maximizes the expected value under constraint as in (<ref>) and (<ref>), one can also be interested in maximizing the success probability, especially in cases where the variance of the variables at play is high. For instance, a policy ψ^* that satisfies (<ref>) might deliver very bad values occasionally. As explained in the introduction, the risk aversion of industrial players often motivates them to prefer reliable small increments to uncertain substantial ones. Instead, we suppose there is an agreement beforehand on the definition of the success of a given policy allocation function ψ: 𝒢→Δ through the characterization of a convex region 𝒮⊂𝒴 such that “ψ is successful on the set of users 𝒰” is equivalent to ∑_u∈𝒰𝐘_u(ψ) ∈𝒮, where for any ψ∈Δ^𝒢 we denote for simplicity 𝐘(ψ) := 𝐘(ψ(γ(𝐗))). The success probability maximizing policy ψ^* is therefore a solution to max_ψ∈Δ^𝒢 ℙ(∑_u∈𝒰𝐘_u(ψ) ∈𝒮) = max_ψ∈Δ^𝒢 𝔼[𝕀_𝒮(∑_u∈𝒰𝐘_u(ψ))], where 𝕀_𝒮 is the indicator function of the success set 𝒮.

Example Our problem is defined with respect to any convex success region 𝒮⊂𝒴. In practice, we will consider success regions relative to a fixed 𝐲_0 = (y^v_0, y^c_0), of the form 𝒮_𝐲_0 = {(y^v, y^c) ∈𝒴 s.t. y^v > y^v_0 and y^c ≤ y^c_0}, where 𝐲_0 should be interpreted as a reference outcome, for example the outcome we observe if we assign the reference policy to every user, ∑_u∈𝒰𝐘_u(π_0). The success region 𝒮_𝐲_0 corresponds to all outcomes with an increased value and decreased cost with respect to the reference value and cost 𝐲_0. In Figure <ref>, 𝒮_𝐲_0 is displayed in green and its complementary 𝒮̅_𝐲_0 in red. We represent the distributions of the outcome 𝐘 for the respective allocations output by (i) the possible solution to (<ref>) (maximization of the success probability) in orange and (ii) the possible solution to (<ref>) (maximization of 𝔼[Y^v] under the condition 𝔼[Y^c] ≤ y^c_0) in blue. The orange outcome has a very high probability to be in 𝒮_𝐲_0, even if it generates a bit less value on average than the blue one, which presents a high risk of being outside of the success region (for example by breaking the cost constraint).
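This value–risk trade-off is easy to reproduce numerically. The sketch below, with purely hypothetical Gaussian parameters for two competing allocations, estimates the success probability ℙ(𝐘∈𝒮_𝐲_0) by Monte Carlo; the "risky" allocation has the larger mean value yet the smaller probability of landing in the green region.

import numpy as np

rng = np.random.default_rng(1)

def success_probability(mean, cov, y0, n_samples=100_000):
    """Monte-Carlo estimate of P(Y in S_{y0}) for Y ~ N(mean, cov),
    where S_{y0} = {value > y0_v and cost <= y0_c}."""
    samples = rng.multivariate_normal(mean, cov, size=n_samples)
    return np.mean((samples[:, 0] > y0[0]) & (samples[:, 1] <= y0[1]))

y0 = (0.0, 0.0)
# "risky" allocation: higher mean value, high variance (blue in Figure <ref>)
print(success_probability([0.5, -0.2], [[4.0, 0.0], [0.0, 1.0]], y0))
# "safe" allocation: slightly lower mean value, low variance (orange)
print(success_probability([0.3, -0.2], [[0.2, 0.0], [0.0, 0.1]], y0))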
§ THE ALGORITHM
In this section, we present solutions for the problems (<ref>) and (<ref>). We will focus on the bucket-level versions of these problems, and therefore assume given a fixed partitioning γ:𝒳→𝒢={1, …, M} of the feature space. This function could have been given by an expert or learned by a machine learning algorithm, but it is not the focus of this work.

§.§ Gaussian parametrization of the problem
In this subsection, we introduce a novel method to solve the success probability maximization problem. This optimization problem, stated in (<ref>), presents several non-trivial difficulties: (a) the indicator function whose expectation we are maximizing is not continuous on Δ^𝒢, and (b) the criterion we wish to maximize is non-concave. We use 𝐘_g,k = ∑_u∈𝒰𝐘_u(π_k)𝕀(γ(𝐗_u) = g) as a compact notation for the total outcome from users in bucket g, had they been allocated to policy π_k. Assuming that the buckets in 𝒢 are approximately balanced in size (each containing ≈ N/M data points), we observe around N/(MK) i.i.d. realizations to estimate each 𝐘_g,k. Let μ_g,k and Σ_g,k be the mean and covariance matrix of the potential outcomes, which contain value and cost. For any soft allocation ψ: 𝒢→Δ – which maps all buckets in 𝒢 to a stochastic mix of policies in Π – the total outcome under allocation ψ is 𝐘(ψ) = ∑_k ∑_g ψ(g, k) 𝐘_g,k, with ∑_k ψ(g, k) = 1. The variables ψ(g, k) 𝐘_g,k are independent; therefore, we can use the Lyapunov central limit theorem and approximate the total outcome by a Gaussian distribution 𝐘(ψ) ∼𝒩(μ(ψ), Σ(ψ)), where μ(ψ) = ∑_k∑_g ψ(g, k) μ_g,k and Σ(ψ) = Var[∑_k∑_g ψ(g, k) 𝐘_g,k]. Depending on assumptions, Σ(ψ) can be a linear or quadratic function of ψ(g, k) due to different sources of randomness which lead to different variances. We assume that Σ(ψ) = ∑_k∑_g ψ(g, k) Σ_g,k (more details in the Supplementary).

§.§ Parameters estimation
When we do not have direct access to the parameters μ_g,k and Σ_g,k, we need to estimate them. In practice, the parameters are estimated on a randomized controlled trial (RCT) dataset 𝒟 = {(𝐱_u, 𝐲_u)}_u∈𝒰 – a realization of the collection {(𝐗_u, 𝐘_u)}_u∈𝒰 introduced in the last section. More precisely, (𝐱_u, 𝐲_u) = (𝐱_u, 𝐲_u(π_k_u)) are i.i.d. realizations of (𝐗, 𝐘(π_k_u)), where {k_u}_u∈𝒰 are i.i.d. realizations of a uniform categorical variable on {0, …, K-1}. For k∈{0,…,K-1} and g∈𝒢, we will refer to the restriction of 𝒟 to points u∈𝒰 for which γ(𝐱_u)=g and k_u = k as 𝒟_g,k. To estimate the parameters, we choose mean and variance estimation methods (e.g. bootstrapping <cit.>) which take as input a dataset 𝒟_g,k containing realizations of 𝐘 for a given bucket g and policy k and return respectively its mean estimate μ̂_g,k and variance estimate Σ̂_g,k.

§.§ Gradient computation
In the following, for ψ∈Δ^𝒢, we will denote for clarity purposes 𝒞(ψ) = ℙ(∑_u∈𝒰𝐘_u(ψ)∈𝒮) = 𝔼[𝕀_𝒮(∑_u∈𝒰𝐘_u(ψ))] the criterion we wish to optimize. The indicator function is discontinuous on the border of 𝒮. It prevents us from directly using a stochastic gradient method <cit.>. The next lemma [The proof of Lemma <ref> uses classical arguments from the policy learning literature <cit.> and further relies on the chain rule with a few relations for multivariate Gaussian variables. We defer the proof to the Supplementary.] provides an explicit expression for the gradient of the criterion. The gradient of 𝒞 at ψ satisfies [∇𝒞(ψ)]_g,k = 𝔼[𝕀_𝒮(𝐘) ( (𝐘-μ(ψ))^T Σ(ψ)^-1·μ_g,k - 1/2(Σ(ψ)-(𝐘-μ(ψ))(𝐘-μ(ψ))^T)·Σ(ψ)^-1Σ_g,kΣ(ψ)^-1 )].

§.§ Optimization
Here, we present our optimization algorithm (Algorithm <ref>) to solve (<ref>), which takes as input (a) a success region 𝒮⊂𝒴 to define the criterion 𝒞 introduced in (<ref>). We typically consider success regions relative to a reference outcome (y_0^v, y_0^c): it might be defined as all outcomes corresponding to increased value and decreased cost with respect to the reference value and cost; (b) estimated mean and variance values {μ̂_g,k} and {Σ̂_g,k} for all pairs of buckets g and candidate policies k.
There is a particular case when the exact values of the means and variances are known and do not require estimation; (c) some hyperparameters, such as an initial policy allocation function ψ_0∈Δ^𝒢, a number of steps n_st and a learning rate η. The algorithm performs a gradient ascent with an estimated gradient ∇̂𝒞(ψ), which can be computed using the formula from Lemma <ref> and a numerical integration method for computing the expectation 𝔼_ψ, e.g. a Monte-Carlo approach. The updated allocation is then projected onto the space of metapolicies Δ^𝒢 to produce a solution candidate for (<ref>), using a method from <cit.>. We provide several possible improvements of Algorithm <ref> in the Supplementary.

Remark As the computation of the gradient through the closed-form expression requires a matrix inversion, it is not always the best option computationally. This is the case for the success region proposed in subsection <ref>, for which we observe that the criterion rewrites as 𝔼[𝕀_𝒮(∑_u∈𝒰𝐘_u(ψ))] = cdf_Y^c(y^c_0) - cdf_𝐘(𝐲_0), where cdf_Y^c(y^c_0) is the (univariate) c.d.f. of Y^c at y^c_0 and cdf_𝐘(𝐲_0) is the (bivariate) c.d.f. of 𝐘 at 𝐲_0 (see appendix for the definition of the bivariate c.d.f.). To speed up the algorithm, we rely on an approximation of the bivariate c.d.f. based on the error function <cit.> to estimate cdf_𝐘(𝐲_0), and then implement it in JAX – this way we can directly use the automatic differentiation in JAX to numerically approximate the gradient of cdf_Y^c(y^c_0) - cdf_𝐘(𝐲_0).

§ EXPERIMENTAL RESULTS
For all experiments below we use the JAX framework <cit.> for the numerical estimation of the criterion's gradient, utilizing automatic differentiation within JAX instead of explicitly calculating the gradient and integrating it over an outcome. Hyperparameters used for the methods are provided in the Supplementary material, and source code[https://github.com/criteo-research/success-proba-max] is published to reproduce all the empirical results.

§.§ Datasets
Besides the synthetic setups, which will be described below, we test the algorithm on two large-scale, real-world datasets. * CRITEO-UPLIFT v2 <cit.> is provided by the AdTech company Criteo. The data contains 13.9 million samples which are collected from several incremental A/B tests. It includes 12 features, 1 binary treatment and 2 binary outcome labels ("visit" and "conversion"). Following <cit.>, we use the "visit" label as a proxy of the cost and "conversion" as the value. For the buckets, we used quantile bins of the "f0" feature. Finally, we randomly partitioned the dataset into two equal parts for train and test. Preprocessing details are in the Supplementary.* The private dataset is constructed from a large-scale real-time bidding RCT. One feature was chosen based on expert knowledge, and buckets were then created as quantile-based projections of the feature. The dataset is aggregated over 70 days and consists of 9 buckets, 3 bidding policies (including the reference) and 100 bootstraps of values and associated costs for each pair (bucket, policy). Remaining details, along with aggregated datasets for the one- and two-dimensional outcome cases, are available in the Supplementary material.

§.§ One-dimensional outcome
Here we assume an outcome 𝒴∈ℝ. The problem is parameterized by a difficulty level r so that 𝒮=(r,+∞). We present here results for synthetic data. Private data results are in the Supplementary.
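To make the pipeline concrete, here is a minimal, self-contained JAX sketch of the one-dimensional method: the Gaussian criterion 1 - cdf_Y(ψ)(r) under the linear variance approximation, its gradient via automatic differentiation, and projected gradient ascent with a row-wise Euclidean projection onto the simplex (in the spirit of <cit.>). All names and parameter values here are illustrative assumptions, not the production implementation.

import jax
import jax.numpy as jnp
from jax.scipy.stats import norm

def criterion(psi, mu, sigma, r):
    # psi: (M, K) soft allocation; mu, sigma: (M, K) bucket-level means/variances
    m = jnp.sum(psi * mu)
    v = jnp.sum(psi * sigma)              # linear variance approximation
    return 1.0 - norm.cdf(r, loc=m, scale=jnp.sqrt(v))

def project_rows_to_simplex(psi):
    # Euclidean projection of each row onto the probability simplex
    def proj(row):
        u = jnp.sort(row)[::-1]
        css = jnp.cumsum(u)
        k = jnp.arange(1, row.size + 1)
        rho = jnp.max(jnp.where(u + (1.0 - css) / k > 0, k, 0))
        tau = (css[rho - 1] - 1.0) / rho
        return jnp.maximum(row - tau, 0.0)
    return jax.vmap(proj)(psi)

grad_C = jax.grad(criterion)

def ascend(psi0, mu, sigma, r, eta=1e-2, n_st=1_000):
    psi = psi0
    for _ in range(n_st):
        psi = project_rows_to_simplex(psi + eta * grad_C(psi, mu, sigma, r))
    return psi

A call like ascend(jnp.full((M, K), 1.0 / K), mu, sigma, r) reproduces the "start from the uniform allocation ψ_0^unif" setting used below.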
Baselines Our algorithm is compared to several baselines searching for the optimal policy allocation: * Bruteforce({μ_g,k}, {Σ_g,k}, 𝒮): a method that compares all possible hard allocations and, for a given difficulty level, returns the allocation that maximises the criterion; * Greedy({μ_g,k}): an algorithm that returns the policy with the maximum mean value per bucket.

§.§.§ Synthetic data generation
We generate Gaussian distributions for two cases: (i) "large variance" and (ii) "small variance", the same setup but where the relative difference between the variances is much smaller – we expect the latter problem to be harder than the former for the algorithms that take the variance into account. See Table <ref> for the precise parameters of the distributions (data construction details and an illustration of the policy distributions per bucket are provided in the Supplementary).

Results We firstly fix μ_g,k, Σ_g,k and use them directly in the algorithm to avoid the source of randomness arising from parameter estimation; we provide the results below. Then we generate normal distributions with parameters μ_g,k, Σ_g,k and use the estimations μ̂_g,k, Σ̂_g,k in the algorithm – the corresponding results are presented in the Supplementary. In Fig. <ref> (left) we show how our method performs on the easier problem with large variance, with a varying difficulty level r. Our algorithm starts from the uniform allocation ψ_0^unif and performs the same as Bruteforce. The key reasons why our algorithm beats Greedy are that we i) directly optimize the metric of interest and ii) effectively incorporate the variance into the optimization, while Greedy only operates with means. Fig. <ref> (right) shows the results for the small variance case. Firstly, note that the gain over Greedy (in the region r ∈ [5,5.5]) is drastically smaller than in the previous case. Then, at difficulty level r > 6 the performance of our algorithm drops. This is because the criterion value 𝒞(ψ_0^unif) becomes 0 and the gradient is not updated. To overcome the problem, we can either "warm-start" from a baseline policy (e.g. from the Greedy one) or explore, by estimating the criterion for several random initial allocations.

§.§ Two-dimensional outcome
In this case we consider an outcome 𝒴∈ℝ^2, so 𝐘 = (Y^v, Y^c). The problem is parameterized by a two-dimensional difficulty level 𝐫 = (r_v, r_c) so that 𝒮=(r_v,+∞)×(-∞, r_c].

Baselines In addition to Bruteforce, our algorithm is compared with two other baselines that search for the optimal policy allocation: * LP({μ_g,k}, r_c): an algorithm (linear programming) that solves the fractional knapsack problem and returns a policy (soft allocation) with the maximum mean value per bucket; * MILP({μ_g,k}, r_c): an algorithm (mixed-integer linear programming) that solves the 0/1 knapsack problem and returns a policy (hard allocation) with the maximum mean value per bucket.

§.§.§ Synthetic data generation
We generate bivariate Gaussian distributions for cases (i) and (ii); see Table <ref> for the precise parameters of the distributions (an illustration of the policy distributions is provided in the Supplementary). We firstly fix μ^v_g,k, Σ^v_g,k and μ^c_g,k, Σ^c_g,k, and use them directly in the algorithm to avoid the source of randomness arising from parameter estimation (results for the case with parameter estimation are presented in the Supplementary).

Two-dimensional outcome: results For the experiment, we fix r_v = 0 and vary r_c only. Fig. <ref> shows that, for both cases, our algorithm started from the uniform allocation ψ_0^unif reaches the same performance as Bruteforce.

§.§.§ Private dataset
In our first experiment, we fix r_c = 0 and vary r_v only – so we check if we can increase the total value while having the same total cost as the reference.
We then repeat the computations, but now we fix r_v = 0 and vary r_c only – in this case we wonder how often we can reach at least the total value of the reference policy while changing the total cost (this case is described in the Supplementary).

Results Fig. <ref> describes the results on the private dataset with two-dimensional outcome for a range of Gains r_v while r_c = 0, for the train (left) and test (right) splits. Our algorithm, initialized with ψ_0 from exploration, reaches the Gain of 0.01 in value (1% over the reference) with probability 0.7 for train and 0.4 for test, while for the best baseline the respective probabilities are 0.35 and 0.1. Note that Bruteforce might not be the best here, as some soft allocations may outperform hard ones for particular (r_v, r_c).

§.§.§ CRITEO-UPLIFT v2
The data contains 2 policies including the reference ("control"), so the value can be increased only by increasing the cost. Thus, we now vary both r_v and r_c from 0 to 0.2, and a trade-off between value and cost is expected.

Results Fig. <ref> depicts the differences in 𝒞(ψ) between our algorithm and the best baseline (absolute values are provided in the Supplementary). Firstly, there is indeed a trade-off – for an increase in cost by x%, the value increases by roughly 2x%. In addition, our algorithm reaches higher 𝒞(ψ) in several regions (e.g. where r_c ∈ [0.03, 0.04] and r_v ∈ [0.04,0.08], or where r_c ∈ [0.08, 0.1] and r_v ∈ [0.1,0.16]). The results correspond to the sketch provided in Fig. <ref>, which represents the distributions of the outcome 𝐘 for the respective allocations output by (i) our algorithm in orange and (ii) the value-maximizing baseline in blue. Our algorithm outputs a solution for which the outcome has a very high probability to be in 𝒮_𝐲_0 with 𝐲_0 = (y_0^v, y_0^c) = (r_v, r_c), even if it generates a bit less value on average than the baseline, which presents a high risk of being outside of the success region (for example by breaking the cost constraint).

§ RELATED WORK
Some recent papers address the multiple treatment allocation problem under budget constraints from different perspectives. The standard two-stage method firstly estimates treatment effects to predict the value and cost for each user, then solves a knapsack problem <cit.>. Nevertheless, the goals of two-stage approaches and real-world scenarios do not perfectly align. <cit.> proposes a two-stage method with an additional regularizer on the knapsack problem loss to address a business goal. However, the regularizer requires a mathematically well-defined function (such as an expected outcome metric) and estimation of its gradients. Applying the decision-focused framework to marketing problems under budget constraints, <cit.> propose a rank method that compares learned ratios between values and costs for the aggregated targeted treatment effect, for the user retention problem. However, <cit.> show that the suggested loss function cannot converge to a stable extreme point in theory, and improve the framework. The authors limit the treatments to different levels of one treatment, e.g. different levels of discount of some products. Further, they develop an algorithm equivalent to the Lagrange dual method (a 'greedy' approach) but based on learning to rank decision factors for multiple-choice knapsack problem solutions. In our current context, our focus is solely on the top-ranked action, rather than the complete ranking itself. Moreover, as we discussed earlier, the knapsack formulation remains a proxy to our problem, so efficiently finding the best decision factors is still not equivalent to finding the best solution to the final business goal.
Closest to our work, <cit.> suggest reformulating the treatment allocation problem as a stochastic optimization task, assuming normally distributed outcomes of the bucket-level objective and constraints; however, the final problem remains in the knapsack form.

§ CONCLUSION AND FUTURE WORKS
We suggested a new formulation of the policy allocation problem that is better adapted to some downstream tasks when the success region is clearly identified. Compared to greedy approaches, our algorithm directly optimizes the metric of interest and effectively utilizes the variance in the optimization, while greedy ones only operate with means. Moreover, the proposed method can be efficiently applied to improve a given baseline policy. Further works include a theoretical analysis of the algorithm, in particular of how it behaves numerically when the dimension of the outcome increases. Also, it is important to understand the relationship between the means and variances of the potential outcomes that makes the proposed method outperform the greedy approaches. In addition to the several suggested improvements of Algorithm <ref>, a promising direction would be to couple the choice of the user partitioning and the policy allocation problem into one master problem. Last, given that outcomes on different user segments may correlate, adapting the framework for Bayesian learning seems a pragmatic avenue for further research.

§ ACKNOWLEDGMENTS
We would like to thank David Rohde and Eustache Diemert for their feedback and ideas during the project.

[Ai et al.(2022)Ai, Li, Gong, Yu, Xue, Zhang, Zhang, and Jiang]ai2022lbcf Ai, M.; Li, B.; Gong, H.; Yu, Q.; Xue, S.; Zhang, Y.; Zhang, Y.; and Jiang, P. 2022. LBCF: A Large-Scale Budget-Constrained Causal Forest Algorithm. In ACM Web Conference.[Albert and Goldenberg(2022)]albert2022commerce Albert, J.; and Goldenberg, D. 2022. E-Commerce Promotions Personalization via Online Multiple-Choice Knapsack with Uplift Modeling. In ACM International Conference on Information & Knowledge Management.[Athey and Imbens(2016)]athey2016recursive Athey, S.; and Imbens, G. 2016. Recursive partitioning for heterogeneous causal effects. National Academy of Sciences, 113(27): 7353–7360.[Betlei et al.(2021)Betlei, Gregoir, Rahier, Bissuel, Diemert, and Amini]betlei2021differentially Betlei, A.; Gregoir, T.; Rahier, T.; Bissuel, A.; Diemert, E.; and Amini, M.-R. 2021. Differentially Private Individual Treatment Effect Estimation from Aggregated Data. PPML Workshop.[Bompaire, Désir, and Heymann(2021)]bompaire2021robust Bompaire, M.; Désir, A.; and Heymann, B. 2021. Robust label attribution for real-time bidding. arXiv preprint arXiv:2012.01767.[Bompaire, Gilotte, and Heymann(2021)]10.1145/3447548.3467280 Bompaire, M.; Gilotte, A.; and Heymann, B. 2021. Causal Models for Real Time Bidding with Repeated User Interactions. In ACM SIGKDD Conference on Knowledge Discovery & Data Mining.[Bradbury et al.(2018)Bradbury, Frostig, Hawkins, Johnson, Leary, Maclaurin, Necula, Paszke, VanderPlas, Wanderman-Milne, and Zhang]jax2018github Bradbury, J.; Frostig, R.; Hawkins, P.; Johnson, M. J.; Leary, C.; Maclaurin, D.; Necula, G.; Paszke, A.; VanderPlas, J.; Wanderman-Milne, S.; and Zhang, Q. 2018. JAX: composable transformations of Python+NumPy programs.[Castiglioni et al.(2022)Castiglioni, Celli, Marchesi, Romano, and Gatti]castiglioni2022unifying Castiglioni, M.; Celli, A.; Marchesi, A.; Romano, G.; and Gatti, N. 2022. A Unifying Framework for Online Optimization with Long-Term Constraints.
arXiv preprint arXiv:2209.07454.[Conitzer et al.(2022)Conitzer, Kroer, Sodomka, and Stier-Moses]doi:10.1287/opre.2021.2167 Conitzer, V.; Kroer, C.; Sodomka, E.; and Stier-Moses, N. E. 2022. Multiplicative Pacing Equilibria in Auction Markets. Operations Research, 70(2): 963–989.[Dalessandro et al.(2012)Dalessandro, Perlich, Stitelman, and Provost]10.1145/2351356.2351363 Dalessandro, B.; Perlich, C.; Stitelman, O.; and Provost, F. 2012. Causally Motivated Attribution for Online Advertising. In International Workshop on Data Mining for Online Advertising and Internet Economy (AdKDD).[Demirović et al.(2019)Demirović, Stuckey, Bailey, Chan, Leckie, Ramamohanarao, and Guns]demirovic2019investigation Demirović, E.; Stuckey, P. J.; Bailey, J.; Chan, J.; Leckie, C.; Ramamohanarao, K.; and Guns, T. 2019. An investigation into prediction+ optimisation for the knapsack problem. Integration of Constraint Programming, Artificial Intelligence, and Operations Research.[Diamond and Boyd(2016)]diamond2016cvxpy Diamond, S.; and Boyd, S. 2016. CVXPY: A Python-embedded modeling language for convex optimization. Journal of Machine Learning Research, 17(83): 1–5.[Diemert et al.(2021)Diemert, Betlei, Renaudin, Amini, Gregoir, and Rahier]diemert2021large Diemert, E.; Betlei, A.; Renaudin, C.; Amini, M.-R.; Gregoir, T.; and Rahier, T. 2021. A large scale benchmark for individual treatment effect prediction and uplift modeling. arXiv preprint arXiv:2111.10106.[Du, Lee, and Ghaffarizadeh(2019)]du2019improve Du, S.; Lee, J.; and Ghaffarizadeh, F. 2019. Improve User Retention with Causal Learning. In ACM SIGKDD Workshop on Causal Discovery.[Duchi et al.(2008)Duchi, Shalev-Shwartz, Singer, and Chandra]duchi2008efficient Duchi, J.; Shalev-Shwartz, S.; Singer, Y.; and Chandra, T. 2008. Efficient projections onto the l1-ball for learning in high dimensions. In International Conference on Machine Learning.[Efron(1979)]10.1214/aos/1176344552 Efron, B. 1979. Bootstrap Methods: Another Look at the Jackknife. The Annals of Statistics, 7(1): 1 – 26.[Fernández-Loría et al.(2022)Fernández-Loría, Provost, Anderton, Carterette, and Chandar]fernandez2022comparison Fernández-Loría, C.; Provost, F.; Anderton, J.; Carterette, B.; and Chandar, P. 2022. A comparison of methods for treatment assignment with an application to playlist generation. Information Systems Research.[Ji and Wang(2017)]Ji_Wang_2017 Ji, W.; and Wang, X. 2017. Additional Multi-Touch Attribution for Online Advertising. AAAI Conference on Artificial Intelligence.[Kleber(2019)]kleber2019turtledove Kleber, M. 2019. Turtledove.[Levine et al.(2020)Levine, Kumar, Tucker, and Fu]levine2020offline Levine, S.; Kumar, A.; Tucker, G.; and Fu, J. 2020. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. arXiv preprint arXiv:2005.01643.[Moriwaki et al.(2021)Moriwaki, Hayakawa, Matsui, Saito, Munemasa, and Shibata]moriwaki2021real Moriwaki, D.; Hayakawa, Y.; Matsui, A.; Saito, Y.; Munemasa, I.; and Shibata, M. 2021. A Real-World Implementation of Unbiased Lift-based Bidding System. In IEEE International Conference on Big Data.[Rubin(1974)]rubin1974estimating Rubin, D. B. 1974. Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology, 66(5): 688.[Schulman et al.(2015)Schulman, Levine, Abbeel, Jordan, and Moritz]schulman2015trust Schulman, J.; Levine, S.; Abbeel, P.; Jordan, M.; and Moritz, P. 2015. Trust region policy optimization. In International conference on machine learning, 1889–1897. 
PMLR.[Shapiro, Dentcheva, and Ruszczynski(2021)]shapiro2021lectures Shapiro, A.; Dentcheva, D.; and Ruszczynski, A. 2021. Lectures on stochastic programming: modeling and theory. Society for Industrial and Applied Mathematics.[Sinha and Zoltners(1979)]sinha1979multiple Sinha, P.; and Zoltners, A. A. 1979. The Multiple-Choice Knapsack Problem. Operations Research, 27(3): 503–515.[Sutton and Barto(2018)]sutton2018reinforcement Sutton, R. S.; and Barto, A. G. 2018. Reinforcement learning: An introduction. MIT press.[Tsay, Ke et al.(2011)]tsay2011simple Tsay, W.-J.; Ke, P.-H.; et al. 2011. A simple approximation for bivariate normal integral based on error function and its application on probit model with binary endogenous regressor. Technical report, Institute of Economics, Academia Sinica, Taipei, Taiwan.[Tu et al.(2021)Tu, Basu, DiCiccio, Bansal, Nandy, Jaikumar, and Chatterjee]tu2021personalized Tu, Y.; Basu, K.; DiCiccio, C.; Bansal, R.; Nandy, P.; Jaikumar, P.; and Chatterjee, S. 2021. Personalized treatment selection using causal heterogeneity. In ACM Web Conference.[Wager and Athey(2018)]wager2018estimation Wager, S.; and Athey, S. 2018. Estimation and inference of heterogeneous treatment effects using random forests. Journal of the American Statistical Association, 113(523): 1228–1242.[Williams(1992)]williams1992simple Williams, R. J. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Reinforcement Learning, 5–32.[Yan et al.(2023)Yan, Wang, Zhou, Lin, and Jiang]yan2023end Yan, Z.; Wang, S.; Zhou, G.; Lin, J.; and Jiang, P. 2023. An End-to-End Framework for Marketing Effectiveness Optimization under Budget Constraint. arXiv preprint arXiv:2302.04477.[Zhao et al.(2019)Zhao, Hua, Yan, Zhang, Xu, and Yang]zhao2019unified Zhao, K.; Hua, J.; Yan, L.; Zhang, Q.; Xu, H.; and Yang, C. 2019. A unified framework for marketing budget allocation. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 1820–1830.[Zhou et al.(2023)Zhou, Li, Jiang, Zheng, and Wang]zhou2023direct Zhou, H.; Li, S.; Jiang, G.; Zheng, J.; and Wang, D. 2023. Direct Heterogeneous Causal Learning for Resource Allocation Problems in Marketing. AAAI Conference on Artificial Intelligence.

§ DISCUSSION ON DIFFERENT SOURCES OF VARIANCE
There are different sources of randomness: * estimation variability: how close the estimator 𝐘̂_g,k is to the true value 𝐘_g,k (when we do not have direct access to it); * allocation variability: when ψ is a soft allocation, the allocation of a user u∈g to a policy k∈𝒦 can be described by the categorical random variable P^ψ(g)∈𝒦 such that ℙ(P^ψ(g) = k) = ψ(g, k). We call 𝒫 the filtration containing all the randomness from the variables P^ψ(g) for all g. * system stochasticity: for fixed g and k, the quantity 𝐘_g,k is itself a random variable which contains the randomness from the behaviour of the users in g. We call 𝒴 the filtration containing all randomness from the variables 𝐘_g,k for all g, k.

In our work, we do not consider the problem of estimation variability, but focus on the derivation of the variance of 𝐘 under a given soft allocation ψ. This variance Var[𝐘(ψ)] can be decomposed according to the variance with respect to the two sources of stochasticity respectively contained in 𝒴 and 𝒫: Var[𝐘(ψ)] = 𝔼_𝒫[Var_𝒴[𝐘(ψ)|𝒫]] + Var_𝒫[𝔼_𝒴[𝐘(ψ)|𝒫]] = 𝔼_𝒴[Var_𝒫[𝐘(ψ)|𝒴]] + Var_𝒴[𝔼_𝒫[𝐘(ψ)|𝒴]]. Depending on assumptions, we can have different approximations of Var[𝐘(ψ)] as a linear or quadratic function of ψ. For example, consider the decomposition of the variance with respect to the allocation randomness in (<ref>): 𝔼_𝒫[Var_𝒴[𝐘(ψ)|𝒫]] = ∑_k∑_g ψ(g,k) Σ_g,k and Var_𝒫[𝔼_𝒴[𝐘(ψ)|𝒫]] = ∑_g∑_k ψ(g,k)(μ_g,k - ∑_k'ψ(g,k') μ_g,k')^2. If we assume that there is minimal allocation variability, then Var_𝒫[𝔼_𝒴[𝐘(ψ)|𝒫]] ≪ 𝔼_𝒫[Var_𝒴[𝐘(ψ)|𝒫]] and the variance of 𝐘 depends linearly on ψ, since Var[𝐘(ψ)] ≈ 𝔼_𝒫[Var_𝒴[𝐘(ψ)|𝒫]] = ∑_k∑_g ψ(g,k) Σ_g,k. In practice, we assume that the allocation variability is small enough so that this variance approximation holds, i.e. we are capable of assigning a given ratio of the population to a given policy. Indeed, we observe that for all g, k there are enough users in g such that we have an (approximately) fixed proportion ψ(g,k) of users from g which are allocated to policy k. The quadratic dependence on the allocation is harder to solve, due to its non-concavity even in the one-dimensional outcome case.

Non-concavity of the quadratic approximation If we consider Var[𝐘(ψ)] = ∑_k∑_g ψ(g,k)^2 Σ_g,k, then our problem does not fall into the class of convex optimization problems, because the criterion that we maximize might not be concave. For instance, one can consider the situation where 𝒳={1} contains only one element, 𝒴⊂ℝ, M = 1 and K=2; that is, one bucket, one-dimensional outcome, two policies. Suppose we have μ_1,1=-5, μ_1,2=-1, Σ_1,1=8 and Σ_1,2=1, and 𝒮=ℝ^+. Then an allocation ψ is fully determined by ψ_1,1. The problem consists in maximizing ψ_1,1 ↦ 1-cdf_ψ_1,1(0), where cdf_ψ_1,1 is the cumulative distribution function of a Gaussian distribution of mean -ψ_1,1·5 - (1-ψ_1,1)·1 = -4ψ_1,1 - 1 and variance ψ_1,1^2·8 + (1-ψ_1,1)^2·1 = 1 - 2ψ_1,1 + 9ψ_1,1^2. One can easily check that this function is not concave.
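A quick numerical check of this non-concavity, as a minimal sketch with the toy parameters above (concavity would require 𝒞(0.5) ≥ (𝒞(0)+𝒞(1))/2, which fails here):

import numpy as np
from scipy.stats import norm

def C(p):
    """Criterion C(psi) = P(Y > 0) for the one-bucket, two-policy example,
    with p = psi_{1,1}, the weight put on the first policy."""
    mean = -4.0 * p - 1.0
    std = np.sqrt(9.0 * p**2 - 2.0 * p + 1.0)
    return 1.0 - norm.cdf(0.0, loc=mean, scale=std)

print(C(0.5))                    # ~0.02
print((C(0.0) + C(1.0)) / 2.0)   # ~0.10, strictly larger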
§ GRADIENT DERIVATION (PROOF OF LEMMA 1)
* We have ∇𝒞(ψ) = 𝔼(𝕀_𝒮(𝐘) ∇_ψ(ln ℓ(ψ, 𝐘))). Indeed, ∇𝒞(ψ) = ∇∫_𝒴 𝕀_𝒮(𝐲) ℓ(ψ, 𝐲) d𝐲 = ∫_𝒴 𝕀_𝒮(𝐲) ∇_ψ ℓ(ψ, 𝐲) d𝐲 = ∫_𝒴 𝕀_𝒮(𝐲) (∇_ψℓ(ψ, 𝐲)/ℓ(ψ, 𝐲)) ℓ(ψ, 𝐲) d𝐲 = ∫_𝒴 𝕀_𝒮(𝐲) ∇_ψ(ln ℓ(ψ, 𝐲)) ℓ(ψ, 𝐲) d𝐲 = 𝔼(𝕀_𝒮(𝐘) ∇_ψ(ln ℓ(ψ, 𝐘))). We need the derivatives of the Gaussian log-likelihood of 𝐲 with respect to its parameters.
* Let p[μ,Σ] be the probability density function of 𝒩(μ,Σ); then ∇_Σ^-1(ln p[μ,Σ](𝐲)) = 1/2 Σ - 1/2 (𝐲-μ)(𝐲-μ)^T and ∇_μ(ln p[μ,Σ](𝐲)) = (𝐲-μ)^T Σ^-1. The proof is based on the rules of differentiation with respect to a vector and to an inverse matrix.
* If f:t→A(t) is a map from ℝ to the set of non-singular matrices of dimension d, then the derivative of g:t→A^-1(t) is -g f' g.
* Let ℓ(ψ, 𝐲) = p[μ(ψ),Σ(ψ)](𝐲); then (∇_ψ(ln ℓ(ψ, 𝐲)))_g,k = (𝐲-μ(ψ))^T Σ(ψ)^-1·μ_g,k - 1/2(Σ(ψ)-(𝐲-μ(ψ))(𝐲-μ(ψ))^T)·Σ(ψ)^-1Σ_g,kΣ(ψ)^-1. Using the fact that μ(ψ)=∑_g,k ψ(g,k) μ_g,k and Σ(ψ)=∑_g,k ψ(g,k) Σ_g,k, it is clear that ∂μ(ψ)/∂ψ(g,k) = μ_g,k and ∂Σ(ψ)/∂ψ(g,k) = Σ_g,k. For ℓ(ψ, 𝐲) = p[μ(ψ),Σ(ψ)](𝐲) and an index (g,k), the chain rule and the previous steps lead to ∂(ln ℓ(ψ,𝐲))/∂ψ(g,k) = ∂(ln p[μ(ψ),Σ(ψ)](𝐲))/∂μ(ψ) · ∂μ(ψ)/∂ψ(g,k) + ∂(ln p[μ(ψ),Σ(ψ)](𝐲))/∂Σ(ψ)^-1 · ∂Σ(ψ)^-1/∂ψ(g,k), where ∂(ln p[μ(ψ),Σ(ψ)](𝐲))/∂μ(ψ) · ∂μ(ψ)/∂ψ(g,k) = (𝐲-μ(ψ))^T Σ(ψ)^-1 μ_g,k and ∂(ln p[μ(ψ),Σ(ψ)](𝐲))/∂Σ(ψ)^-1 · ∂Σ(ψ)^-1/∂ψ(g,k) = -1/2(Σ(ψ)-(𝐲-μ(ψ))(𝐲-μ(ψ))^T)·(Σ(ψ)^-1Σ_g,kΣ(ψ)^-1), the sign coming from ∂Σ(ψ)^-1/∂ψ(g,k) = -Σ(ψ)^-1Σ_g,kΣ(ψ)^-1.
* The gradient at index (g,k) is the following: [∇𝒞(ψ)]_g,k = 𝒞(ψ)·𝔼_ψ( (𝐘-μ(ψ))^T Σ(ψ)^-1·μ_g,k - 1/2(Σ(ψ)-(𝐘-μ(ψ))(𝐘-μ(ψ))^T)·(Σ(ψ)^-1Σ_g,kΣ(ψ)^-1) | 𝐘∈𝒮 ). Let M=1 and d=1; we get ∂_k𝒞(ψ) = 𝔼_ψ[ 𝕀_𝒮(y)( μ_k(y-μ(ψ))/Σ(ψ) - Σ_k/2 · (Σ(ψ)-(y-μ(ψ))^2)/Σ(ψ)^2 ) ].

§ IMPROVEMENTS OF THE ALGORITHM
We identify several directions in which our algorithm can be improved.
Firstly, the gradient step may be accelerated, either i) by using second-order methods like Newton's method, or ii) by applying a line search to adapt the step size. Secondly, we observe that the algorithm may get stuck in "flat" regions, e.g. if the criterion value of the initial policy allocation equals 0 – this problem often appears in policy gradient methods in reinforcement learning <cit.>. Currently, we explore several random allocations to begin the optimization from (akin to epsilon-greedy exploration in reinforcement learning) or "warm-start" from a baseline policy, but there are more options to avoid this behaviour, e.g. forcing exploration by regularization.

§ EXAMPLE OF ALIGNMENT
Here, we provide an example of a problem for which our algorithm and Greedy give the same solution. Consider two policies π_0 and π_1 and users from 𝒰 with a population of size N. Let the potential outcome Y_u(π_k) of user u follow a Bernoulli distribution ℬ(p_k), where k∈{0, 1}. Our goal is to maximize the success probability ℙ(∑_u Y_u(π_k_u) = r), i.e. the probability of getting exactly r successes in N independent Bernoulli trials with parameters p_0 or p_1, depending on which policy π_k_u∈{π_0, π_1} is assigned to user u∈𝒰. If the parameters p_0 and p_1 are not known, we need to estimate them from the data. Let N_0 users be assigned policy π_0 and N_1 = N - N_0 users be assigned policy π_1. For each policy, we observe y(π_k) = ∑_u=1^N_k y_u(π_k), a realization of Y(π_k) ∼ Binom(N_k, p_k), where the y_u(π_k) are sampled from ℬ(p_k). We estimate the Bernoulli probability for each policy as p̂_k = 1/N_k · y(π_k). If we assume that there is no variance due to estimation variability, i.e. |p̂_k - p_k| ≈ 0, the total variance of Y after observing y = y(π_0) + y(π_1) is due to the system stochasticity that consists of the variance coming from the binomial distribution: Var[Y] = Var[Y(π_0)] + Var[Y(π_1)] = N_0 p̂_0(1 - p̂_0) + N_1 p̂_1(1 - p̂_1). We notice that the variance of the total outcome Var[Y] is monotone with respect to N_0 and N_1. In this example, our algorithm will search for a trade-off between the policy with the minimum estimated Bernoulli variance argmin_k p̂_k(1 - p̂_k) and the one with the maximum Bernoulli mean argmax_k p̂_k. Since argmin_k p̂_k(1 - p̂_k) = argmax_k p̂_k in this example (which holds, for instance, when both p̂_k ≥ 1/2), we obtain that our algorithm returns the same solution as Greedy.

§ DATASETS
Here we describe the two real datasets used in the paper.

§.§ Private Dataset
The data was created from a large-scale real-time bidding randomized control trial (RCT) over 70 days and consists of 3 labels. The main label is the value – originally a binary variable of some user action. For each value we collected an associated cost. We used the value and cost for the two-dimensional experiments. Along with this, we approximated a revenue as a function of value and cost, which is used for the one-dimensional experiments. The data consists of 3 randomly (respecting the RCT procedure) assigned policies, {π_0, π_1, π_2}, where π_0 is the reference bidding policy used in production, and π_1 and π_2 are candidate bidding policies. In order to separate the user-level feature space, one feature was chosen based on expert knowledge, and 9 buckets were then created as quantile-based projections of the feature. We aggregated the labels by summarising them across the triplets (day, bucket, policy). Along with the sums, we computed 100 bootstraps of the aggregated value, cost and revenue, which are used for the mean and (co-)variance estimations. In order to make a balanced yet realistic train/test split, we summed the labels for odd (train) and even (test) calendar days; hence both the train and test data are aggregated over 35 days. To maintain data confidentiality, we computed a relative difference of the labels with respect to the reference policy – for each pair (bucket, policy) we subtracted the value of the reference policy from the original one and divided it by the total reference value (a sum over buckets); we did the same for the cost and the revenue. Finally, we used the resulting bootstraps to estimate μ_g,k and Σ_g,k.
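For concreteness, a minimal sketch of this bootstrap step for a single (bucket, policy) cell, on hypothetical toy data (the real pipeline operates on the aggregated private logs described above):

import numpy as np

rng = np.random.default_rng(2)

def bootstrap_totals(samples, n_boot=100):
    """Bootstrap replicates of the total (value, cost) outcome of one
    (bucket, policy) cell; their mean and covariance give the estimates
    mu_{g,k} and Sigma_{g,k} used by the algorithm."""
    n = samples.shape[0]
    totals = np.stack([samples[rng.integers(0, n, size=n)].sum(axis=0)
                       for _ in range(n_boot)])
    return totals.mean(axis=0), np.cov(totals, rowvar=False)

samples = rng.normal([1.0, 0.8], [0.5, 0.2], size=(1000, 2))  # toy (value, cost) rows
mu_gk, Sigma_gk = bootstrap_totals(samples)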
§.§ CRITEO-UPLIFT v2
The dataset is provided by the AdTech company Criteo. The data contains 13.9 million samples which are collected from several incremental A/B tests. It includes 12 features, 1 binary treatment and 2 binary outcome labels ("visit" and "conversion"). Following <cit.>, we use the "visit" label as a proxy of the cost and "conversion" as the value. We randomly partitioned the dataset into two equal parts for train and test. For the buckets, we used quantile bins of the "f0" feature, resulting in 8 buckets.

§ ONE-DIMENSIONAL OUTCOME

§.§ Criterion
Here we assume an outcome 𝒴∈ℝ. The problem is parameterized by a difficulty level r so that 𝒮=(r,+∞). The criterion is then defined as maximizing the following probability: ℙ(Y(ψ) > r) = 𝔼[𝕀_𝒮(∑_u∈𝒰 Y_u(ψ))] = 1 - cdf_Y(ψ)(r), where Y(ψ) = ∑_k∑_g ψ(g,k) Y_g,k ∼ 𝒩(μ(ψ), Σ(ψ)), μ(ψ) = ∑_k∑_g ψ(g,k) μ_g,k, Σ(ψ) = ∑_k∑_g ψ(g,k) Σ_g,k, and cdf_Y(ψ)(r) is the cumulative distribution function (c.d.f.) of Y(ψ) at r. Instead of computing the gradient as an integration over Y(ψ) to obtain the expected value (see Lemma 1), we use automatic differentiation in JAX to numerically approximate the gradient of 1 - cdf_Y(ψ)(r).

§.§ Baselines: Bruteforce
This method compares all possible hard allocations and, for a given difficulty level, returns the allocation that maximises the criterion, thus resulting in a complexity of O(K^M), where K is the number of policies and M is the number of buckets – for large K and M, Bruteforce is not an option because of its non-polynomial complexity.

§.§ Synthetic setup
We simulate a toy yet sufficient setup in order to illustrate typical situations in which our algorithm can gain an advantage. Specifically, we generate parameters of Gaussian distributions for the setting of M=3 buckets and K=3 policies. We consider two cases: "large variance" and "small variance". The difference is that for "small variance", we scale the variances by a factor of 0.01. See Table <ref> for the precise parameters of the distributions, and Fig. <ref> and <ref> for an illustration of the policy distributions per bucket. To better describe the intuition, let us focus on the first bucket in both cases (the first plots of Figures <ref> and <ref> respectively). The "large variance" case represents the situation when μ_1,0=2, μ_1,1=1.9, Σ_1,0=9, Σ_1,1=1, so the difference in means μ_1,0 - μ_1,1 is much smaller than the difference between the variances Σ_1,0 - Σ_1,1. If we now assume r=0, it becomes clear that policy π_1 is the one that maximizes the success probability; however, the "greedy" approach will choose π_0 because of the higher mean. In the "small variance" case, the relative difference between the variances is now much smaller, meaning that the effective range of r where our algorithm can outperform "greedy" drastically decreases.
To illustrate this, we plot in Fig. <ref> the difference of the criterion values 𝒞(π_1) - 𝒞(π_0) = ℙ(Y(π_1) > r) - ℙ(Y(π_0) > r) for a range of r. We define r_max as the point where cdf_π_0(r_max) = cdf_π_1(r_max). We can see that 𝒞(π_1) - 𝒞(π_0) ≥ 0 for r ∈ (-∞, r_max]. The intuition is that as r grows, the left tail of the π_0 distribution gets outside of 𝒮; then we reach the point r_max where the two criterion values are equal. Finally, 𝒞(π_1) - 𝒞(π_0) ≤ 0 for all r > r_max, due to the bigger variance of the π_0 distribution. Comparing the "large" and "small" variance cases, one can clearly see i) a gap in the size of the potential "winning region" and ii) a difference in the maximum value of 𝒞(π_1) - 𝒞(π_0).

§.§ Synthetic setup results
We use μ_0,k = [2,1.9,0], Σ_0,k = [9,1,9], r = 0 to show the convergence of our algorithm in Fig. <ref>. Note that our algorithm found an optimal allocation [0,1,0], which differs from the Greedy one, [1,0,0]. To check the algorithm's performance when the source of randomness arising from parameter estimation is present, we generate normal distributions with parameters μ_g,k, Σ_g,k of sizes N∈{1000, 10000} and use the estimations μ̂_g,k, Σ̂_g,k in the algorithm. To test the noise coming from the parameter estimation, for both the "large" and "small" variance cases we randomly split the generated data into train/test parts, estimate μ̂_g,k, Σ̂_g,k on train, and check the resulting allocations on both train and test. We repeat the procedure 100 times to build proper confidence intervals. The results for the "large variance" case are presented in Figures <ref> (for N=1000) and <ref> (for N=10000). The performance on the train and test splits is very similar – this is reasonable as the splits contain data from the same distribution. As one can see, the precision of the μ̂_g,k, Σ̂_g,k estimation is higher for the N=10000 case, which is reflected in smaller confidence intervals for each method. The results for the "small variance" case are provided in Figures <ref> (for N=1000) and <ref> (for N=10000). Due to the smaller original variance, the confidence intervals are even smaller than in the previous case.

§.§ Private dataset
For the real data cases, it is reasonable to have a direct interpretation of the difficulty level r for the RCT success probability. Thus, here we interpret r as a relative gain in the outcome over the reference policy (or simply “Gain" hereafter) that we want to reach.

§.§ Private dataset results
Fig. <ref> describes the results on the private dataset with one-dimensional outcome for a range of gains r, for the train (left) and test (right) splits. Notice that the only possible region to improve 𝒞(ψ) in both cases is [0.02, 0.03]. For instance, like Bruteforce, our algorithm initialized with ψ_0 from exploration reaches the Gain of 0.029 (2.9% over the reference) with probability almost 1, while for Greedy it is around 0.7.

§ TWO-DIMENSIONAL OUTCOME

§.§ Criterion
In this case we consider an outcome 𝒴∈ℝ^2, so 𝐘 = (Y^v, Y^c). The problem is parameterized by a two-dimensional difficulty level 𝐫 = (r_v, r_c) so that 𝒮=(r_v,+∞)×(-∞, r_c]. The criterion is then defined as maximizing the following probability: 𝔼[𝕀_𝒮(∑_u∈𝒰𝐘_u(ψ))] = cdf_Y^c(r_c) - cdf_𝐘(𝐫), where 𝐘(ψ) ∼𝒩(μ(ψ), Σ(ψ)), μ(ψ) = ∑_k∑_g ψ(g,k) μ_g,k, Σ(ψ) = Var[∑_k∑_g ψ(g,k) 𝐘_g,k], cdf_Y^c(r_c) is the (univariate) c.d.f. of Y^c at r_c, and cdf_𝐘(𝐫) is the (bivariate) c.d.f. of 𝐘 at 𝐫: cdf_𝐘(𝐫) = ∫_-∞^r_v∫_-∞^r_c f(x_v,x_c) dx_v dx_c, where f(x_v,x_c) is the p.d.f. of the bivariate normal distribution.
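A direct way to evaluate this criterion for a candidate allocation is sketched below, using SciPy's exact bivariate normal c.d.f.; for gradient-based optimization the paper instead relies on a differentiable error-function approximation implemented in JAX. The Gaussian parameters here are hypothetical.

import numpy as np
from scipy.stats import norm, multivariate_normal

def criterion_2d(mean, cov, r_v, r_c):
    """C(psi) = P(Y^v > r_v, Y^c <= r_c) = cdf_{Y^c}(r_c) - cdf_Y(r_v, r_c)
    for Y = (Y^v, Y^c) ~ N(mean, cov)."""
    cdf_cost = norm.cdf(r_c, loc=mean[1], scale=np.sqrt(cov[1][1]))
    cdf_joint = multivariate_normal(mean, cov).cdf(np.array([r_v, r_c]))
    return cdf_cost - cdf_joint

# hypothetical Gaussian parameters of Y(psi) for some allocation psi
print(criterion_2d([0.5, -0.1], [[1.0, 0.2], [0.2, 0.5]], 0.0, 0.0))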
§.§ Baselines: LP and MILP
The LP algorithm (linear programming) solves the fractional knapsack problem and returns a policy (soft allocation) with the maximum mean value per bucket under the defined cost constraint. Along with the linear programming approach, we also implement MILP (mixed-integer linear programming), which solves the 0/1 knapsack problem and returns a hard allocation. The main drawback of both algorithms for the success probability maximization problem is that only the means of the value and cost are used for the optimization, lacking information about the variance. For both methods, the CVXPY Python library <cit.> was used for the implementation.

§.§ Synthetic setup
We generate parameters of bivariate Gaussian distributions for the setting of M = 2 buckets and K = 3 policies. We consider two specific examples; see Table <ref> for the precise parameters of the distributions and Fig. <ref> for an illustration of the policy distributions. In general, we keep the idea from the one-dimensional setup and create two examples. In example (i), π_1 has a bigger mean cost and a larger mean value; however, both variances are smaller than for π_0. For 𝐫 = (r_v, r_c) = (0, 3), an optimal policy is π_1; however, the greedy approach again will choose π_0 due to the larger μ^v_1,0 at the cost constraint μ^c_1,0. In example (ii), there is a positive difference in means μ^v_1,0 - μ^v_1,1, but it is much smaller than the difference between the variances Σ^v_1,0 - Σ^v_1,1. At the same time, μ^c_1,1 is smaller than μ^c_1,0 and Σ^c_1,0 = Σ^c_1,1. If we fix 𝐫 = (r_v, r_c) = (0, 1), an optimal policy is π_1; however, the greedy approach will choose π_0 due to the larger μ^v_1,0 at the cost constraint μ^c_1,0.

§.§ Synthetic setup results
To check the algorithm's performance when estimation variability is present, we repeat the same procedure as for the one-dimensional setup, generating bivariate normal distributions of size N=1000 with the defined parameters. The results for cases (i) and (ii) are presented in Figures <ref> and <ref> respectively.

§.§ Private data results
Fig. <ref> describes the results on the private dataset with two-dimensional outcome for a range of gains r_c while r_v = 0, for the train (left) and test (right) splits. For instance, our algorithm reaches the Gain of -0.02 in cost (-2% over the reference) with probability 0.97 for train and 1 for test, while for the best baseline the respective probabilities are 0.86 and 0.89.

§.§ CRITEO-UPLIFT v2 results
Absolute criterion values for the train and test data splits are presented for our algorithm, Bruteforce, LP and MILP in Figures <ref>, <ref>, <ref> and <ref> respectively. As we can see, among all the methods, our algorithm is the most efficient and stable at the same time.

§ HYPERPARAMETERS
There are no hyperparameters for the baselines. Our algorithm includes three hyperparameters: the initial allocation ψ_0, the learning rate η and the number of steps n_st. For ψ_0, we define three options: ψ_0 ∈{ψ^unif_0, ψ^baseline_0, ψ^expl_0}, where ψ^unif_0 represents a uniform allocation, ψ^baseline_0 is a baseline allocation (from the corresponding 1D or 2D baseline), and ψ^expl_0 is an allocation from exploration, where we generate 50000 random allocations and pick the one with the maximum criterion value. We consider the following possible sets of η and n_st: η∈{10^-1, 10^-2, 10^-3, 10^-4}, n_st∈{10^4, 10^5, 10^6, 5·10^6}. For each experiment, we did a grid search over the hyperparameter sets aiming to maximize 𝒞(ψ). The resulting hyperparameters for each experiment are presented in Table <ref>.

§ HARDWARE
Experiments were performed on a Linux machine with 8 CPUs (Intel(R) Xeon(R) Silver 4108 CPU @ 1.80GHz) and 16 GB of RAM. | http://arxiv.org/abs/2312.16267v1 | {
"authors": [
"Artem Betlei",
"Mariia Vladimirova",
"Mehdi Sebbar",
"Nicolas Urien",
"Thibaud Rahier",
"Benjamin Heymann"
],
"categories": [
"cs.IR",
"cs.GT",
"cs.LG",
"stat.ML"
],
"primary_category": "cs.IR",
"published": "20231226105533",
"title": "Maximizing the Success Probability of Policy Allocations in Online Systems"
} |
Departamento de Matemática, Universidad Técnica Federico Santa María, Valparaíso, Chile. [email protected] Instituto de Matemática, Pontificia Universidad Católica de Valparaíso, Valparaíso, Chile. [email protected] The second author was supported by Proyecto FONDECYT Postdoctorado 3220009, ANID, Chile. 2010 Mathematics Subject Classification: Primary 37A05, 37D25. In this paper we prove that the homotopy class of non-homothety linear endomorphisms on 𝕋^2 with determinant greater than 2 contains a C^1 open set of non-uniformly hyperbolic endomorphisms. Furthermore, we prove that the homotopy class of non-hyperbolic elements (having either 1 or -1 as an eigenvalue) whose degree is large enough contains non-uniformly hyperbolic endomorphisms that are also C^2 stably ergodic. These results provide partial answers to certain questions posed in <cit.>. Non-uniform hyperbolicity of maps on 𝕋^2 Kendry J. Vivas January 14, 2024 ======================================== § INTRODUCTION AND STATEMENTS OF THE RESULTS Lyapunov exponents have proved in recent years to be a powerful tool for studying chaos in dynamical systems. These quantities give the exponential rate of expansion or contraction of vectors along the orbits of a system. This theory turned into an active research field thanks to the works of Furstenberg, Kesten, Kingman, Ledrappier, Oseledets and others between the sixties and the eighties, and it is present, for instance, in the study of random walks on groups and of Schrödinger operators. In this work, we study the particular interaction of Lyapunov exponents with smooth dynamics. Throughout this paper, we consider smooth conservative endomorphisms f:𝕋^2→𝕋^2, i.e., non-invertible mappings preserving the Lebesgue (Haar) measure μ. For a smooth map f and a pair (x,u)∈ T𝕋^2, the Lyapunov exponent of f at (x,u) is given by χ(x,u) := lim sup_n→∞ 1/n log‖Df^n(x)u‖. According to Oseledets' theorem <cit.>, there is a full area subset M of 𝕋^2 such that the above limit exists for every x∈M and every u≠0. Moreover, there is a measurable bundle E^- and measurable functions χ^- and χ^+ defined on M such that χ(x,u)=lim_n→∞ 1/n log(m(Df^n(x)))=χ^-(x), ∀u∈E^-(x), where m(A) denotes the minimum norm of a linear mapping, and χ(x,u)=lim_n→∞ 1/n log‖Df^n(x)‖=χ^+(x), ∀u∈ℝ^2∖E^-(x). It is easy to check that χ^+(x)≥χ^-(x) almost everywhere and ∫[χ^+(x)+χ^-(x)]dμ(x)=∫log|det Df(x)|dμ(x)>0, so that χ^+(x) is positive almost everywhere. In this way, we recall the definition of a non-uniformly hyperbolic system f given in <cit.>: The map f is non-uniformly hyperbolic (NUH for short) if χ^-(x)<0<χ^+(x) almost everywhere. The notion of NUH was introduced by Y. Pesin in <cit.> in order to generalize the classical hyperbolic theory. Trivial examples of NUH maps are the Anosov diffeomorphisms. The first example of a NUH map that is not Anosov was exhibited by A. Katok in <cit.>. It consists of a “slowdown” of a linear Anosov map near the origin by a local perturbation. Moreover, it is shown that any surface supports diffeomorphisms satisfying this property. Later, this result was generalized in <cit.>, showing that, in fact, any closed manifold supports this kind of map. More recently, Berger and Carrasco <cit.> constructed a volume-preserving partially hyperbolic diffeomorphism in 𝕋^4 whose two-dimensional central direction has the NUH property and has no dominated splitting.
Furthermore, this construction is C^2-robust among volume-preserving diffeomorphisms. The Bochi-Mañé theorem <cit.> shows that NUH area-preserving diffeomorphisms on surfaces are fragile in the following sense: unless a conservative diffeomorphism f is Anosov, it can be approximated in the C^1 topology by another diffeomorphism with zero exponents. Moreover, there exists a generic set of linear cocycles A:M→SL(2,ℝ) which exhibit two distinct behaviors: they either exhibit uniform hyperbolicity or have both Lyapunov exponents equal to zero. However, M. Viana and J. Yang in <cit.> proved that the above result does not hold in the non-invertible case, by exhibiting a C^0-open set of SL(2,ℝ)-cocycles whose Lyapunov exponents are bounded away from zero. Now, it is well known that any map f:𝕋^2→𝕋^2 is homotopic to a linear endomorphism induced by an integer matrix E, which we denote by the same letter. So, in what follows we consider homotopy classes associated to linear endomorphisms E such that |det E|≥ 2. These classes of maps consist of non-invertible local diffeomorphisms. In <cit.>, M. Andersson, P. Carrasco and R. Saghin showed that there exists a C^1-open set of NUH maps that intersects every homotopy class of linear endomorphisms E which are not homotheties and whose degree is bigger than 5. In order to state this result, we consider the set End_μ^1(𝕋^2) of C^1 local diffeomorphisms of 𝕋^2 preserving the Lebesgue measure μ. For f∈End_μ^1(𝕋^2), define the number C_χ(f):=sup_n∈ℕ inf_(x,u)∈T^1𝕋^2 1/n I(x,u;f^n), where I(x,u;f^n)=∑_y∈f^-n(x) log‖(Df^n(y))^-1u‖/|det(Df^n(y))|. A similar expression for I(x,u;f) was considered in <cit.> to define a weak*-convergent sequence to an invariant measure μ^- for a non-invertible smooth map f, called the inverse SRB measure. Furthermore, this measure is supported on a hyperbolic repeller and it satisfies a Pesin-type formula involving the negative Lyapunov exponents of μ^-. Let us consider 𝒰:={f∈End_μ^1(𝕋^2) : C_χ(f)>0}. By the definition of C_χ(f), we see that 𝒰 is C^1-open. For f∈End_μ^1(𝕋^2), denote by [f] the class of C^1 smoothly homotopic maps of f, i.e., [f]={g:𝕋^2→𝕋^2 : there is a C^1 homotopy between f and g}. Any f∈𝒰 is non-uniformly hyperbolic. Moreover, if E=(e_ij)∈M_2×2(ℤ) is not a homothety and |det E|/gcd(e_ij)>4, then the intersection [E]∩𝒰 is non-empty, and in fact contains maps that are real analytically homotopic to E. The above result shows that, unlike <cit.>, the rigidity phenomenon is not present in the context of endomorphisms. Moreover, the NUH maps constructed in <cit.> have no dominated splitting in a robust way. In light of Theorem <ref>, the following question was posed in <cit.>: Question 1: Is it true that 𝒰 intersects all the homotopy classes of endomorphisms on 𝕋^2? Recently, V. Janeiro in <cit.> extended Theorem <ref> to homotheties whose degree is bigger than 5^2 and to some small degree cases. In this paper, we will prove the following result: If E∈M_2×2(ℤ) is a non-homothety with |det E|>2, then the intersection [E]∩𝒰 is non-empty, and in fact contains maps that are real analytically homotopic to E. In this way, by combining Theorem A with <cit.> we obtain a partial answer to Question 1 for the non-homothety case. The set 𝒰 intersects every homotopy class of endomorphisms in 𝕋^2 associated to non-homotheties E with |det E|>2. It should be noted that Theorem A is equivalent to <cit.>, but our result includes some cases that the author of the mentioned reference did not consider.
Besides, by taking a look at the proof of <cit.>, we observe that this result can be extended to the cases studied here. Now, following an analogous definition in <cit.>, we say that an endomorphism f is C^2-stably ergodic for μ if there is a neighborhood 𝒰' of f in End_μ^2(𝕋^2) such that every g∈𝒰' is also ergodic. It should be noted that in this definition we use a C^2-neighborhood instead of a C^1-neighborhood, since the argument depends on the control of the bounds of the C^2-norms in a neighborhood of f. The theory of stable ergodicity was initiated in the pioneering work of M. Grayson, C. Pugh, and M. Shub <cit.>, which exhibits the time-one map of the geodesic flow on the unit tangent bundle of a surface with negative constant curvature as a first example. This theory has been widely studied in the context of diffeomorphisms. Indeed, in <cit.> it was shown that C^2 volume-preserving partially hyperbolic diffeomorphisms satisfying certain conditions (stable dynamical coherence and stable accessibility) are stably ergodic. In <cit.>, a characterization of stable ergodicity and the denseness of this property were obtained for skew products. Examples of stably ergodic diffeomorphisms which are not partially hyperbolic were given in <cit.> and <cit.>, respectively. In the context of endomorphisms, the following result was proved in <cit.>: For any linear endomorphism E as in Theorem <ref>, if ±1 is not an eigenvalue of E, then [E]∩𝒰 contains C^1 stably ergodic endomorphisms, i.e., there is a neighborhood 𝒰' of f in End_μ^1(𝕋^2) so that every g∈𝒰' of class C^2 is also ergodic. So, according to the above result, the following question was posed in <cit.>: Question 2: Are there stably ergodic non-uniformly hyperbolic endomorphisms in every homotopy class of endomorphisms on 𝕋^2? It should be noted that the argument of Theorem <ref> relies on the classical Hopf argument and a result due to M. Andersson regarding the transitivity of area-preserving endomorphisms on 𝕋^2. It should also be noted that this result and the non-domination property of these maps show that Corollary 5.2 of <cit.> does not hold for non-invertible mappings. In this paper, we provide a partial answer to Question 2 by proving C^2-stable ergodicity for linear maps E having either 1 or -1 as an eigenvalue and whose degree is large enough. For linear endomorphisms E as in Theorem A, if |det E|≥ m_0, where m_0∈ℕ is large enough, then [E]∩𝒰 contains non-uniformly hyperbolic endomorphisms which are C^2-stably ergodic.

§ PROOF OF THEOREM A

§.§ Shears and their induced dynamics
Before proving Theorem <ref>, some preliminary results are needed. Let us consider ℋ_NH(𝕋^2)={E∈M_2×2(ℤ) : det E≠0, E≠k·Id, k∈ℝ}. For E=(e_ij)∈ℋ_NH(𝕋^2), define τ_1=τ_1(E) and τ_2=τ_2(E) as τ_1=gcd(e_ij) and τ_2=|det E|/τ_1. By definition both numbers are integers, τ_1 divides τ_2, and d=τ_1·τ_2, where d denotes the degree of E. These numbers are called the elementary divisors of E. For matrices E as in Theorem <ref>, the pair (τ_1,τ_2) satisfies τ_2≥5. On the other hand, <cit.> covers all these pairs, and it includes the pairs (τ_1,τ_2)=(3,3), (4,4).
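These elementary divisors are straightforward to compute; a small sketch, using only the definition above with an illustrative matrix:

from math import gcd

def elementary_divisors(e11, e12, e21, e22):
    """tau_1 = gcd of the entries; tau_2 = |det E| / tau_1, so the degree is
    d = tau_1 * tau_2."""
    t1 = gcd(gcd(abs(e11), abs(e12)), gcd(abs(e21), abs(e22)))
    det = e11 * e22 - e12 * e21
    assert det != 0 and abs(det) % t1 == 0
    return t1, abs(det) // t1

print(elementary_divisors(2, 1, 0, 2))   # (1, 4): tau_1 = 1, tau_2 = 4, d = 4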
By <cit.>, up to a linear change of coordinates, we can assume that E = ( [ e_11 e_12; e_21 e_22 ] ), where the e_ij∈ℤ depend on (τ_1,τ_2) and e_11e_22-e_12e_21=d. Besides, the vector e_2=(0,1) is not an eigenvector of E (and so e_12≠0), and for every x∈𝕋^2 one has E^-1(x)=y+{(i/τ_2, j/τ_1) : i=0,…,τ_2-1, j=0,…,τ_1-1}, where y is the unique point in ℝ^2 satisfying Ey=x. The main strategy in <cit.> to find non-uniformly hyperbolic maps homotopic to E is to consider a one-parameter family of area-preserving diffeomorphisms h_t, t∈ℝ, called shears. These shears deform the linear endomorphism E in such a way as to obtain an element f_t∈𝒰, which implies that f_t has a negative Lyapunov exponent. For this, the authors studied the dynamics of (Dh_t(y))^-1, where y is a pre-image of an element x∈𝕋^2. A key step in their argument is to exploit the fact that E has large degree (and hence, so does f_t). Indeed, this property and the definition of h_t imply that there are a region 𝒢 containing the pre-images (except at most one point) of every element of 𝕋^2, and a cone field Δ_α (which is invariant on 𝒢) with strong expansion on 𝒢 under (Df_t(·))^-1. In this way, the average of log‖(Df_t^n(y))^-1u‖ over the entire backward n-orbit of any point (x,u)∈T^1𝕋^2 is uniformly positive as n goes to infinity. In <cit.>, this reasoning was extended to homotheties with a degree of at least 5^2 and to some small-degree cases by considering suitable families of shears. However, neither work considered the arrangement of the higher-order pre-images of a point. As a first step in proving our result, we must consider the higher-order pre-images of a point x∈𝕋^2 and determine the best way in which these points should be distributed. First, we will consider a very simple analytic function s:𝕋^1→ℝ that captures the essential dynamics of the map f_t derived from it, and we set f_t=E∘h_t, t∈ℝ, where h_t(x)=(x, y+ts(x)) for all x=(x,y)∈𝕋^2. Note that the definitions of both h_t and f_t are the same as those given in <cit.>. In particular, h_t is inspired by the classical standard map <cit.>, where the map s(x) plays the role of sin(2πx). When s is a smooth map, h_t is an area-preserving diffeomorphism and f_t is C^1 homotopic to E. In coordinates, f_t(x) and f_t^-1(x), for x=(x,y)∈𝕋^2, can be written as f_t(x)=(e_11x+e_12(y+ts(x)), e_21x+e_22(y+ts(x))) and f_t^-1(x)={(ψ_1(x,y,i), ψ_2(x,y,j)-ts(ψ_1(x,y,i))) : i=0,…,τ_2-1, j=0,…,τ_1-1}, where ψ_1(x,y,i)=e_22x-e_12y+i/τ_2 and ψ_2(x,y,j)=e_11y-e_21x+j/τ_1. Take the partition 0<1/(τ_2+1)<…<τ_2/(τ_2+1) of 𝕋^1. We will construct a piecewise linear function within each of the sub-intervals defined by the partition, alternating between positive and negative slopes. More precisely, define ŝ:𝕋^1→ℝ as follows: * Let ŝ be piecewise linear in J_j=[j/(τ_2+1), (j+1)/(τ_2+1)) with slopes a_j∈ℝ, such that a_j>0 if j is odd and a_j<0 otherwise, and |a_j+1|>2|a_j| for every j=0,…,τ_2. In this way, the x_j=j/(τ_2+1), j=0,…,τ_2, are the critical points of ŝ. * ŝ(j/(τ_2+1))=lim_x→1 ŝ(x)=s_0∈ℚ∖ℤ for j=0,…,τ_2. Let f̂_t=E∘ĥ_t, where ĥ_t(x)=(x, y+tŝ(x)) for any x=(x,y)∈𝕋^2. Note that f̂_t satisfies the equations (<ref>) and (<ref>). Define the critical set 𝒞_τ_2 as 𝒞_τ_2=⋃ℓ_j, ℓ_j={x_j}×𝕋^1, j=0,…,τ_2. Having defined the function ŝ, take δ>0 and let I_j=[j/(τ_2+1)-δ, j/(τ_2+1)+δ], j=0,…,τ_2. In a similar way to <cit.>, we define the critical region and the good region as 𝒞=(⋃_j=0^τ_2 I_j)×𝕋^1 and 𝒢=𝒢^-∪𝒢^+, where 𝒢^-=(⋃_j is even J'_j)×𝕋^1 and 𝒢^+=(⋃_j is odd J'_j)×𝕋^1, respectively, with J_j'=J_j∖(I_j∪I_j+1) for j=0,…,τ_2. Then we modify ŝ on ⋃_j I_j, making it analytic there with zero derivative at the points j/(τ_2+1), while keeping s=ŝ on 𝕋^1∖𝒞, so as to obtain an analytic map s:𝕋^1→ℝ and a smooth local diffeomorphism f_t which is C^1 homotopic to E.
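For intuition, here is a purely numerical toy sketch of this construction, with hypothetical choices throughout: E=[[3,1],[1,2]] (so det E=5, τ_1=1, τ_2=5) and sin(2πx) standing in for the piecewise-linear profile ŝ, as in the standard-map analogy above. It crudely estimates the largest Lyapunov exponent of f_t along one forward orbit:

import numpy as np

E = np.array([[3.0, 1.0], [1.0, 2.0]])
t = 10.0
s = lambda x: np.sin(2 * np.pi * x)
ds = lambda x: 2 * np.pi * np.cos(2 * np.pi * x)

def f(p):
    x, y = p
    return (E @ np.array([x, y + t * s(x)])) % 1.0

def Df(p):
    # Jacobian of f_t = E o h_t, with Dh_t(x, y) = [[1, 0], [t s'(x), 1]]
    return E @ np.array([[1.0, 0.0], [t * ds(p[0]), 1.0]])

rng = np.random.default_rng(3)
p, v, chi = rng.random(2), np.array([1.0, 0.0]), 0.0
n = 10_000
for _ in range(n):
    v = Df(p) @ v
    chi += np.log(np.linalg.norm(v))
    v /= np.linalg.norm(v)
    p = f(p)
print(chi / n)   # crude estimate of chi^+ > 0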
Then, we modify the map ŝ to be analytic on ⋃_jI_j, with zero derivative at the points j/(τ_2+1), and set s=ŝ on 𝕋^1∖𝒞, so as to obtain an analytic map s:𝕋^1→ℝ and a smooth local diffeomorphism f_t which is C^1-homotopic to E. The next lemma shows that the analytic map f_t obtained from the above construction, for certain values of t, has a good distribution of its higher-order pre-images. Let E∈ℋ_NH(𝕋^2) with elementary divisors (τ_1,τ_2), τ_2≥ 3. Then, there exists an analytic function s:𝕋^1→ℝ such that the map f_t=E∘ h_t is C^1-homotopic to E and satisfies the following: For every point x∈𝕋^2 the set f_t^{-1}(x) has d elements, of which at least τ_1⌊(τ_2-1)/2⌋ lie inside each one of 𝒢^- and 𝒢^+, and at most τ_1 lie inside 𝒞. Furthermore, at least one pre-image y∈𝒢 satisfies d(y,𝒞)>1/(8(d+1)). In addition, there are infinitely many and arbitrarily large numbers t>0 such that 𝒞∩ f_t^{-1}(𝒞)∩ f_t^{-2}(𝒞)=∅. The first property is a direct consequence of the construction of f_t and of the regions. On the other hand, let δ>0 be such that 1/(τ_2+1)-2δ>1/(2(τ_2+1)). Notice, by the choice of δ, that every connected component J of the good region has size bigger than 1/(2(τ_2+1)), so that a band of size 1/(4(τ_2+1)) in the middle of one of these components J contains one pre-image. Thus, the distance of this point to the boundary is bigger than (1/2)(1/(2(τ_2+1))-1/(4(τ_2+1)))=1/(8(τ_2+1))≥1/(8(d+1)). For the last part, let f̂_t be as in the construction of f_t, and let x=(x,y)∈𝒞_τ_2∩f̂_t^{-1}(𝒞_τ_2). Then, x=j/(τ_2+1) for some j=0,…,τ_2, and by (<ref>), item (b), and the definition of x, a straightforward computation gives x=(j_0/(τ_2+1), (j_1-e_11j_0)/(e_12(τ_2+1))-ts_0), j_0,j_1=0,…,τ_2. Now, notice by (<ref>) that π_1(f̂_t^{-1}(x))=(1/τ_2)(((e_11+e_22)j_0-j_1)/(τ_2+1)+e_12ts_0+i), where π_1:𝕋^2→𝕋^1 is the projection on the first coordinate. Hence, if t>0 satisfies t≠(1/(s_0e_12))(n_1+n_2/(τ_2+1)), n_1,n_2∈ℤ, we have that f̂_t^{-1}(x)⊂𝒢. Moreover, it is possible to choose t in such a way that f̂_t^{-1}(x) stays away from 𝒞_τ_2. This shows that 𝒞_τ_2∩f̂_t^{-1}(𝒞_τ_2)∩f̂_t^{-2}(𝒞_τ_2)=∅. In other words, the above relation says that for any point x∈𝕋^2, no three consecutive pre-images lie in the critical set 𝒞_τ_2. Therefore, the result is obtained by taking δ>0 such that (<ref>) holds for f_t and 𝒞. §.§ Dynamics of Df_t. Now, we are interested in studying the dynamics of the linear map Df_t. For this, we follow <cit.>. Since the expression of C_χ(f) is independent of the choice of the norm on the tangent bundle of 𝕋^2, the authors of the mentioned reference chose the maximum norm for the sake of simplicity in the computations. Recall that if α>0, the horizontal cone Δ_α^h is defined as Δ_α^h={(u_1,u_2)∈ℝ^2 : |u_2|≤α|u_1|}, while the vertical cone Δ_α^v is given by Δ_α^v=ℝ^2∖Δ_α^h. It should be noted that if E∈ℋ_NH(𝕋^2), there is α>1 such that E^{-1}Δ_α^v⊂Int(Δ_α^h)⊂Δ_α^h. In <cit.> it was proved that the following properties hold for (Df_t)^{-1}: * If y∈𝒢, then Δ_α^v is strictly invariant for (Df_t(y))^{-1}. * If u∈Δ_α^v is a unit vector, then ‖(Df_t(y))^{-1}u‖≥m((Df_t(y))^{-1})>e_v(a-α/t)t/α if y∈𝒢, and ≥e_v/α if y∈𝒞, where e_v=inf_{u∈Δ_α^v, ‖u‖=1}‖E^{-1}u‖. * If u∈Δ_α^h, and E^{-1}u=(u_1,u_2), let *(u) be the sign of -u_1/u_2 (with the convention that 0 and ±∞ have both + and - sign). Then, for all y∈𝒢^{*(u)} we have (Df_t(y))^{-1}u∈Δ_α^v.
* If u∈Δ_α^h is a unit vector, then ‖(Df_t(y))^{-1}u‖≥m((Df_t(y))^{-1})>e_h if y∈𝒢^{*(u)}, and ≥e_h/((b+1/t)t) if y∉𝒢^{*(u)}, where e_h=inf_{u∈Δ_α^h, ‖u‖=1}‖E^{-1}u‖, for every t>2α/a, where 0<a<b are the lower and upper bounds, respectively, of the derivative of s on 𝕋^1∖(⋃_{j=0}^{d}I_j). For the maps given here one has a=a_0 and b=a_d. Another useful tool for the proof of Theorem <ref> is the next lemma, whose proof is analogous to that of Lemma 2.2 in <cit.>, taking g=f^k: For f∈End_μ^1(𝕋^2) and any n,k∈ℕ we have I(x,u;f^{kn})=∑_{i=0}^{n-1}∑_{y∈f^{-ki}(x)}(1/|Df^{ki}(y)|)I(y,F^{-ki}(y)u;f^k), where F^{-ki}(y)u=(Df^{ki}(y))^{-1}u/‖(Df^{ki}(y))^{-1}u‖. In other words, (1/(nk))I(x,u;f^{nk}) is a convex combination of the quantities I(y,w;f^k). §.§ Key Lemmas. The next lemmas will be useful for proving Theorem A. Recall that d=τ_1·τ_2. Let E∈ℋ_NH(𝕋^2) whose elementary divisors are (τ_1,τ_2), τ_2≥ 3, and let f=f_t be as in Lemma <ref>. Then, for every x∈𝕋^2 and every unit vector u∈T^1𝕋^2 we have the following: * There are at least τ_1^3[(τ_2-1)^3+(τ_2-1)(3⌊(τ_2-1)/2⌋+1)-⌊(τ_2-1)/2⌋^2] vectors in Δ_α^v for (Df^3)^{-1} if u∈Δ_α^v. * There are at least τ_1^3[(τ_2-1)^3-(τ_2-1-⌊(τ_2-1)/2⌋)^3+(τ_2-1)(3⌊(τ_2-1)/2⌋+1)-⌊(τ_2-1)/2⌋^2] vectors in Δ_α^v for (Df^3)^{-1} if u∈Δ_α^h. Let u∈Δ_α^v. Assume that f^{-1}(x) contains one critical point. In this case, by Lemma <ref> and property (NH1) we have that there are τ_1(τ_2-1) vectors in Δ_α^v and τ_1 vectors in Δ_α^h under the action of (Df(y))^{-1}, where y∈f^{-1}(x). Now, for each of the τ_1(τ_2-1) vertical vectors from the previous step, we again apply Lemma <ref> and (NH1) to obtain (τ_1(τ_2-1))^2 vectors in Δ_α^v and τ_1^2(τ_2-1) vectors in Δ_α^h. By applying Lemma <ref> and property (NH1) once more, we obtain (τ_1(τ_2-1))^3 vectors in Δ_α^v associated to the (τ_1(τ_2-1))^2 vertical vectors. Simultaneously, for the τ_1^2(τ_2-1) horizontal vectors we obtain, by property (NH3), τ_1^3(τ_2-1)⌊(τ_2-1)/2⌋ vectors in Δ_α^v. For the remaining τ_1 horizontal vectors v, we apply Lemma <ref> and property (NH3) to obtain τ_1^2⌊(τ_2-1)/2⌋ vectors in Δ_α^v and τ_1^2(τ_2-1-⌊(τ_2-1)/2⌋) vectors in Δ_α^h, which are associated to points in 𝒢^{*(v)} and 𝒢^{-*(v)} respectively, and τ_1^2 vectors in Δ_α^h. On one hand, for the τ_1^2⌊(τ_2-1)/2⌋ vertical vectors we obtain τ_1^3(τ_2-1)⌊(τ_2-1)/2⌋ vectors in Δ_α^v by Lemma <ref> and property (NH1). For the τ_1^2(τ_2-1-⌊(τ_2-1)/2⌋) horizontal vectors we obtain τ_1^3⌊(τ_2-1)/2⌋(τ_2-1-⌊(τ_2-1)/2⌋) vectors in Δ_α^v by property (NH3). On the other hand, notice that each one of the τ_1^2 horizontal vectors w is associated to points z∈𝒞∩f^{-1}(𝒞). Hence, by (<ref>), one has f^{-1}(z)⊂𝒢. In this case, if w∈T_z𝕋^2 is a unit vector and E^{-1}w∈Δ_α^h, then by (NH1) we get τ_1τ_2 vectors in Δ_α^v for (Df)^{-1}, while if w'=E^{-1}w=±(w,1)∈Δ_α^v and q∈𝒢, we see that (Dh_t(q))^{-1}w'∈Δ_α^h if and only if 1/(a_it+α)≤|w|≤1/(a_it-α) for some i=0,…,τ_2. So, since s=ŝ on 𝒢, we have by item (a) of the construction of ŝ that |w|∉[1/(a_jt+α),1/(a_jt-α)] for every j≠i, which shows that (Dh_t(q))^{-1}w'∈Δ_α^v for every q∈𝒢∖(J'_i×𝕋^1). Thus, there are τ_1(τ_2-1) vectors in Δ_α^v for (Df)^{-1}. Therefore, we obtain at least τ_1^3(τ_2-1) vectors in Δ_α^v. Hence, we get the first item by adding the above quantities. The second item is obtained in a similar way.
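As a quick arithmetic sanity check (ours, not part of the proof), the two counts of the lemma are easy to tabulate in Python; the last column anticipates the asymptotic ratio p(τ_2) defined in the next subsection and already shows that it exceeds 1/2:

def counts(t1, t2):
    # Lower bounds from the lemma above for the number of branches of
    # (Df^3)^{-1} sending a vertical (v1) resp. horizontal (v2) unit vector
    # into the vertical cone; d^3 = (t1*t2)^3 is the total number of branches.
    m = (t2 - 1) // 2
    v1 = t1 ** 3 * ((t2 - 1) ** 3 + (t2 - 1) * (3 * m + 1) - m ** 2)
    v2 = v1 - t1 ** 3 * (t2 - 1 - m) ** 3
    return v1, v2

for t2 in range(3, 12):
    v1, v2 = counts(1, t2)
    p = v2 / (t2 ** 3 - (v1 - v2))          # the ratio the recursion below stabilizes to
    assert 0 < v2 <= v1 <= t2 ** 3 and p > 0.5
    print(t2, v1, v2, round(p, 3))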
For every n∈ℕ and any nonzero tangent vector (x,u)∈T^1𝕋^2 define Df^{-3n}(x,u) = {(y,w)∈T𝕋^2 : f^{3n}(y)=x, Df^{3n}(y)w=u}, 𝒢_n = {(z,w)∈Df^{-3n}(x,u) : w∈Δ_α^v}, ℬ_n = Df^{-3n}(x,u)∖𝒢_n, g_n = #𝒢_n, b_n = #ℬ_n=d^{3n}-g_n. For later reference, we write the numbers given in Lemma <ref> as v_1(d)=τ_1^3v_1(τ_2) and v_2(d)=τ_1^3v_2(τ_2). Notice that g_{n+1}/d^{3(n+1)}≥((v_1(τ_2)-v_2(τ_2))/τ_2^3)·(g_n/d^{3n})+v_2(τ_2)/τ_2^3, ∀n∈ℕ. Let a_n=g_n/d^{3n}, n∈ℕ, c=(v_1(τ_2)-v_2(τ_2))/τ_2^3 and e=v_2(τ_2)/τ_2^3. It should be noted that 0<c=(v_1(τ_2)-v_2(τ_2))/τ_2^3=((τ_2-1-⌊(τ_2-1)/2⌋)/τ_2)^3<1. Hence, by induction on n, a_n≥(e/(1-c))·(1-c^n)=(v_2(τ_2)/(τ_2^3-(v_1(τ_2)-v_2(τ_2))))·(1-c^n), ∀n∈ℕ. Thus, as in <cit.>, define p(τ_2)=lim inf a_n=v_2(τ_2)/(τ_2^3-(v_1(τ_2)-v_2(τ_2)))∈(0,1), which represents the “asymptotic ratio” of vertical vectors among the pre-images of (x,u) under Df. Let E∈ℋ_NH(𝕋^2) whose elementary divisors are (τ_1,τ_2), τ_2≥ 3, and let f=f_t be as in Lemma <ref>. Then, p(τ_2)>1/2. Notice that by Lemma <ref> we have v_2(τ_2)/(τ_2^3-(v_1(τ_2)-v_2(τ_2))) = 1+(v_1(τ_2)-τ_2^3)/(τ_2^3-(v_1(τ_2)-v_2(τ_2)))= 1+N(τ_2)/D(τ_2), where N(τ_2)=-3τ_2^2+3τ_2-1+(τ_2-1)(3⌊(τ_2-1)/2⌋+1)-⌊(τ_2-1)/2⌋^2 and D(τ_2)=τ_2^3-(τ_2-1-⌊(τ_2-1)/2⌋)^3. Now, since τ_2≥3 and τ_2-1>⌊(τ_2-1)/2⌋ we have 0>N(τ_2)>-3τ_2^2+3τ_2-1+(τ_2-1)^2=-2τ_2^2+τ_2>-2τ_2^2 and D(τ_2) ≥ (1+⌊(τ_2-1)/2⌋)(3τ_2^2-3τ_2·((τ_2+1)/2)+(1+⌊(τ_2-1)/2⌋)^2)= (1+⌊(τ_2-1)/2⌋)((1/2)(3τ_2^2-τ_2)+⌊(τ_2-1)/2⌋^2) ≥ (1+⌊(τ_2-1)/2⌋)(2τ_2^2+⌊(τ_2-1)/2⌋^2)≥ 4(τ_2^2+2). Therefore, p(τ_2)=1+N(τ_2)/D(τ_2)>1-2τ_2^2/(4(τ_2^2+2))≥1-1/2=1/2. This proves the result. Notice that for τ_2≥5 one has 2τ_2^2/((1+⌊(τ_2-1)/2⌋)(2τ_2^2+⌊(τ_2-1)/2⌋^2))≤2τ_2^2/(6(τ_2^2+2))<1/3, which shows that p(τ_2)>2/3. Recall that Corollary 3.1 of <cit.> states that I(x,u;f)≥(1-1/τ_2)log t+C_1(t), ∀(x,u)∈T^1𝕋^2 with u∈Δ_α^v, and I(x,u;f)≥-(1-⌊(τ_2-1)/2⌋/τ_2)log t+C_2(t), ∀(x,u)∈T^1𝕋^2 with u∈Δ_α^h, where C_k(t)∈ℝ, k=1,2, depends on a,b,τ_1,τ_2,t. Denote c_1=1-1/τ_2 and c_2=-(1-⌊(τ_2-1)/2⌋/τ_2). In view of the above lemma, denote * v^v(d)=τ_1(τ_2-1), * v^h(d)=τ_1⌊(τ_2-1)/2⌋, * v^v(d^2)=τ_1^2((τ_2-1)^2+⌊(τ_2-1)/2⌋), * v^h(d^2)=τ_1^2(⌊(τ_2-1)/2⌋(2τ_2-1-⌊(τ_2-1)/2⌋)+⌊(τ_2-1)/2⌋). Let E∈ℋ_NH(𝕋^2) whose elementary divisors are (τ_1,τ_2), τ_2≥3, and let f=f_t be as in Lemma <ref>. Then, for every x∈𝕋^2 and every unit vector u∈T^1𝕋^2 we have: For u∈Δ_α^v, I(x,u;f^3)≥I_1(t)log t+A(t), where I_1(t)=c_1+(c_1-c_2)(v^v(d)/d+v^v(d^2)/d^2)+((2d^2-1)/d^2)c_2-1/(d^2τ_2) and A(t)∈ℝ, while for u∈Δ_α^h, I(x,u;f^3)≥I_2(t)log t+B(t), where I_2(t)=c_2+(c_1-c_2)(v^h(d)/d+v^h(d^2)/d^2)+((2d^2-1)/d^2)c_2-1/(d^2τ_2) and B(t)∈ℝ. Let (x,u)∈T^1𝕋^2 be a nonzero unit tangent vector, and let t>2α/a. By Lemma <ref> we have that I(x,u;f^3) = I(x,u;f)+(1/d)∑_{y∈f^{-1}(x)}I(y,F^{-1}(y)u;f)+(1/d^2)∑_{z∈f^{-2}(x)}I(z,F^{-2}(z)u;f). Notice that in the proof of Lemma <ref> we saw that for each of the τ_1^2 horizontal vectors w associated to points z∈𝒞∩f^{-1}(𝒞) one obtains τ_1(τ_2-1) vectors in Δ_α^v. Hence, by property (NH4) one has I(z,w;f)≥-(1/τ_2)log t+(1-1/τ_2)log e_h+(1/τ_2)log(e_h/(b+1/t)). In this way we have that I(x,u;f^3) ≥ c_1log t+(1/d)(v^v(d)c_1+(d-v^v(d))c_2)log t+(1/d^2)(v^v(d^2)c_1+(d^2-1-v^v(d^2))c_2-1/τ_2)log t+A(t)= I_1(t)log t+A(t). The second inequality is obtained in a similar way. Let p=p(τ_2)∈(0,1) be as in (<ref>). Then, J(t)=pI_1(t)+(1-p)I_2(t)>0.
First, note that pI_1(t)+(1-p)I_2(t)=∑_{i=1}^3E_i(τ_2)+3c_2-(1/d^2)(c_2+1/τ_2), where E_1(τ_2)=p(c_1-c_2)(τ_2)=p(2-(⌊(τ_2-1)/2⌋+1)/τ_2), E_2(τ_2) = (c_1-c_2)(τ_2)E_1'(τ_2)= (c_1-c_2)(1/τ_2)(p(τ_2-1-⌊(τ_2-1)/2⌋)+⌊(τ_2-1)/2⌋) and E_3(τ_2) = (c_1-c_2)(τ_2)E_2'(τ_2)= (c_1-c_2)(1/τ_2^2)(p((τ_2-1)^2-⌊(τ_2-1)/2⌋(2(τ_2-1)-⌊(τ_2-1)/2⌋)))+(c_1-c_2)(1/τ_2^2)(⌊(τ_2-1)/2⌋(2τ_2-1-⌊(τ_2-1)/2⌋)). Now, notice that -(1/d^2)(c_2+1/τ_2)>0. So, in order to prove the result, we must prove that S(τ_2)=∑_{i=1}^3E_i(τ_2)+3c_2(τ_2)>0, ∀τ_2≥3. It is clear that (<ref>) holds for τ_2=3. Assume that (<ref>) holds for τ_2=n. Then, we consider the following cases: Case 1: n is odd. In this case, we have the following identities: * (c_1-c_2)(n+1)=(c_1-c_2)(n)+(⌊(n-1)/2⌋+1)/(n(n+1)), * c_2(n+1)=c_2(n)-⌊(n-1)/2⌋/(n(n+1)), * E_1'(n+1)=E_1'(n)+p/n, and * E_2'(n+1)=E_2'(n)+(1/n^2)(p(2n-1-2⌊(n-1)/2⌋)+2⌊(n-1)/2⌋). On one hand, since ⌊(n-1)/2⌋+1=⌊(n+1)/2⌋=(n+1)/2, we have (c_1-c_2)(n)=2-(1/n)·((n+1)/2)=3/2-1/(2n)≥4/3 and (c_1-c_2)(n)·(2/n^2)⌊(n-1)/2⌋≥(8/(3n^2))⌊(n-1)/2⌋≥(8/(3n(n+1)))⌊(n-1)/2⌋. On the other hand, (⌊(n-1)/2⌋+1)/(n(n+1))=1/(2n), so that ((⌊(n-1)/2⌋+1)/(n(n+1)))E_1'(n+1)≥(1/(2n^2))⌊(n-1)/2⌋≥(1/(2n(n+1)))⌊(n-1)/2⌋. Hence, by the induction hypothesis and the above estimates, S(n+1) ≥ S(n)-(3/(n(n+1)))⌊(n-1)/2⌋+(c_1-c_2)(n)(2/n^2)⌊(n-1)/2⌋+((⌊(n-1)/2⌋+1)/(n(n+1)))E_1'(n+1)> -(3/(n(n+1)))⌊(n-1)/2⌋+(8/(3n(n+1)))⌊(n-1)/2⌋+(1/(2n(n+1)))⌊(n-1)/2⌋>0. Case 2: n is even. In this case, we have the following identities: * (c_1-c_2)(n+1)=(c_1-c_2)(n)+(⌊(n-1)/2⌋+1)/(n(n+1))-1/(n+1), * c_2(n+1)=c_2(n)+⌊(n-1)/2⌋/(n(n+1))-1/(n+1), * E_1'(n+1)=E_1'(n)+1/n, and * E_2'(n+1)=E_2'(n)+2/n. On the other hand, since 1+⌊(n-1)/2⌋=n/2, one has (c_1-c_2)(n)=2-(1+⌊(n-1)/2⌋)/n=2-(1/n)·(n/2)=2-1/2=3/2, so that ((c_1-c_2)(n)/n^2)⌊(n-1)/2⌋≥(3/(2n(n+1)))⌊(n-1)/2⌋. Now, by Remark <ref> one has p=p(n+1)≥2/3 (because n≥4), which implies ((1+⌊(n-1)/2⌋)/(n(n+1)))·p≥(2/(3n(n+1)))⌊(n-1)/2⌋+2/(3n(n+1)). Furthermore, ((1+⌊(n-1)/2⌋)/(n(n+1)))E_1'(n+1) > (1/(2(n+1)))·((5/(3n))⌊(n-1)/2⌋)= (5/(6n(n+1)))⌊(n-1)/2⌋. Besides, since (n-1)^2≥(n-1)≥2⌊(n-1)/2⌋, we obtain ((1+⌊(n-1)/2⌋)/(n(n+1)))E_2'(n+1) ≥ (1/(2n^2(n+1)))⌊(n-1)/2⌋+(1/(2(n+1)))·((4/(3n^2))⌊(n-1)/2⌋)= (7/(6n^2(n+1)))⌊(n-1)/2⌋. Finally, since p<1, p+E_1'(n+1)+E_2'(n+1)< 3+1/n^2+(1/n^2)⌊(n-1)/2⌋. Therefore, by the induction hypothesis and the above estimates, S(n+1) ≥ S(n)-(3/(n(n+1)))⌊(n-1)/2⌋+3/(n+1)+((c_1-c_2)(n)/n^2)⌊(n-1)/2⌋+((1+⌊(n-1)/2⌋)/(n(n+1)))·p+((1+⌊(n-1)/2⌋)/(n(n+1)))(E_1'(n+1)+E_2'(n+1))-(1/(n+1))(p+E_1'(n+1)+E_2'(n+1))> -(3/(n(n+1))+1/(n^2(n+1)))⌊(n-1)/2⌋+(3/(n(n+1))+7/(6n^2(n+1)))⌊(n-1)/2⌋>0. This proves the result. §.§ Non-uniform hyperbolicity. Now, we are ready to prove Theorem <ref>. The proof of this result is a consequence of the Key Lemmas and the argument presented in the proof of <cit.>. Define, for i=0,…,n-1, J_i=J_i(x,u)=∑_{y∈f_{t_0}^{-3i}(x)}(1/|Df_{t_0}^{3i}(y)|)I(y,F_{t_0}^{-3i}(y)u;f^3). Notice that, with 𝒢_n and ℬ_n as in the previous section, one has J_i=(1/d^{3i})∑_{(y,w)∈𝒢_i}I(y,w;f^3)+(1/d^{3i})∑_{(y,w)∈ℬ_i}I(y,w;f^3). Let E∈ℋ_NH(𝕋^2) with elementary divisors (τ_1,τ_2), τ_2≥3. By Lemma 2.6, Lemma 2.7 and Proposition 2.1, we have for t>2α/a satisfying the conditions of Lemma <ref> that lim_{i→∞}J_i ≥ (p(τ_2)I_1(t)+(1-p(τ_2))I_2(t))log t+C(t) =J(t)log t+C(t)≥ (1/d^2)(1-(⌊(τ_2-1)/2⌋+1)/τ_2)log t+C(t), where C(t)>C∈ℝ. Hence, for every non-zero tangent vector (x,u), a large enough t>0 and every i≥i_0∈ℕ, we have J_i(x,u)≥3c'>0. Then, by Lemma <ref> we have, for n_0∈ℕ large enough, that (1/(3n_0))I(x,u;f_t^{3n_0})=(1/(3n_0))∑_{i=0}^{n_0-1}J_i(x,u)>c'/2>0, ∀(x,u)∈T^1𝕋^2, u≠0. Therefore, C_χ(f)>0. So, by <cit.>, it follows that f is NUH. § PROOF OF THEOREM B In this section, we will prove Theorem B.
The proof of stable ergodicity in <cit.> relies on two main arguments: * Hopf argument: it is proved that the stable manifold of almost every point in 𝕋^2 has large diameter, in order to ensure intersections between stable and unstable manifolds. In particular, this shows that the ergodic components of μ are open modulo zero sets. * A criterion of transitivity for area-preserving endomorphisms on 𝕋^2: if a linear map on 𝕋^2 of degree at least two has no real eigenvalues of modulus one, then its whole homotopy class of area-preserving endomorphisms consists entirely of transitive elements (see <cit.>). The first step of the above argument is guaranteed by the NUH property and some properties of the map f_t, for t>0 large enough. However, for non-hyperbolic matrices E, the transitivity criterion is no longer applicable. So, in our case, to prove stable ergodicity for μ, some additional work is necessary. §.§ Preliminaries Recall that for a C^1 curve γ:I⊂ℝ→𝕋^2, whose coordinates are γ(t)=(γ_1(t),γ_2(t)), its length in the maximum norm is given by ℓ_m(γ)=∫_I max{|γ_1'(t)|,|γ_2'(t)|}dt. Note that if ℓ_e(γ) denotes the Euclidean length of γ, one has ℓ_m(γ)≤ℓ_e(γ)≤√(2)ℓ_m(γ). Following <cit.>, we call a C^1 curve γ a v-segment if it is tangent to the vertical cone Δ_α^v and ℓ=ℓ_m(γ)=(α/5)e_v, where e_v=inf_{u∈Δ_α^v, ‖u‖=1}‖E^{-1}u‖. In what follows we take α>1 such that ℓ>1. In <cit.> it was observed that the length of the projection on the vertical axis of a v-segment γ is exactly ℓ. In this case we say that γ crosses 𝕋^2 vertically. In view of the above remark, we say that a C^1 curve γ' crosses 𝕋^2 horizontally if it is tangent to the horizontal cone and its projection on the horizontal axis has size bigger than 1. Now, for an endomorphism f:𝕋^2→𝕋^2, the natural extension or space of pre-orbits of f is defined as L_f:={x̂=(x_0,x_1,x_2,…)∈(𝕋^2)^{ℤ_+} : f(x_{i+1})=x_i, ∀i≥0}, endowed with the product topology. Let π_ext:L_f→𝕋^2 be the projection onto the first coordinate, i.e., π_ext(x̂)=x_0 for every x̂∈L_f. Define f̂:L_f→L_f by f̂(x̂)=(f(x_0),x_0,x_1,…), x̂∈L_f. It is easy to check that f̂ is a homeomorphism and π_ext∘f̂=f∘π_ext. Besides, it is well known that for any invariant measure ν for f, there is a unique invariant measure ν̂ for f̂ such that (π_ext)_*ν̂=ν. The measure ν̂ is called the lift of ν. On the other hand, the action of π_ext on L_f allows us to define a tangent bundle on L_f: for x̂=(x_0,x_1,…)∈L_f, let us consider T_x̂L_f=T_{π_ext(x̂)}𝕋^2=T_{x_0}𝕋^2. In the same way, the derivative Df of f lifts to a map Df̂ on T_x̂L_f in a natural way as Df̂(x̂)=Df(x_0). Moreover, for every x̂∈L_f, Df̂^n(x̂)=Df̂(f̂^{n-1}(x̂))∘…∘Df̂(x̂)=Df^n(x_0) if n>0; Df̂^n(x̂)=Id if n=0; and Df̂^n(x̂)=(Df̂(f̂^n(x̂)))^{-1}∘…∘(Df̂(f̂^{-1}(x̂)))^{-1}=(Df^{-n}(x_{-n}))^{-1} if n<0. Following the above, an endomorphism f admits a nontrivial dominated splitting if the tangent bundle splits into two subbundles TL_f=E⊕F such that (i) E_x̂ and F_x̂ are invariant by Df̂(x̂). (ii) The subbundles E and F are continuous, i.e., E_x̂ and F_x̂ vary continuously with x̂∈L_f. (iii) ∠(E_x̂,F_x̂)≥ρ>0 for every x̂∈L_f. (iv) There are constants c>0 and λ∈(0,1) such that for any x̂∈L_f, ‖Df̂^n(x̂)v_E‖≤cλ^n‖Df̂^n(x̂)v_F‖, for all v_E∈E_x̂, v_F∈F_x̂ with ‖v_E‖=‖v_F‖=1 and for all n≥1.
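The n<0 case of the cocycle formula above is easy to mis-read, so here is a minimal numerical sketch (ours): given a chosen pre-orbit (x_0,x_1,…,x_n) with f(x_{i+1})=x_i and a callable Jacobian, it evaluates Df̂^{-n}(x̂)=(Df^n(x_n))^{-1} by composing one inverse branch at a time:

import numpy as np

def Dfhat_inverse(pre_orbit, Df):
    # pre_orbit = [x_0, x_1, ..., x_n]; Df maps a point to its 2x2 Jacobian.
    # Since Df^n(x_n) = Df(x_1) Df(x_2) ... Df(x_n), its inverse is the
    # reversed product Df(x_n)^{-1} ... Df(x_1)^{-1}, accumulated below.
    A = np.eye(2)
    for x in pre_orbit[1:]:
        A = np.linalg.inv(Df(x)) @ A
    return A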
On the other hand, the relation between the Lyapunov exponents of f and f̂ is as follows: χ_f̂(x̂,v) = χ_f(π_ext(x̂),v), χ_f̂^+(x̂) = χ_f^+(π_ext(x̂)), χ_f̂^-(x̂) = χ_f^-(π_ext(x̂)). By <cit.>, there is a completely invariant set ℛ̂⊂L_f of full μ̂-measure such that χ_f̂^+(x̂)=-χ_{f̂^{-1}}^-(x̂)=χ_f^+(x_0) and χ_f̂^-(x̂)=-χ_{f̂^{-1}}^+(x̂)=χ_f^-(x_0), where x_0=π_ext(x̂), for every x̂∈ℛ̂. Besides, there is a completely invariant set ℛ⊂π_ext(ℛ̂) with μ(ℛ)=1 satisfying μ̂_x(ℛ̂)=1 for every x∈ℛ, where μ̂_x is the unique measure on π_ext^{-1}(x) satisfying μ̂_x({(ξ_0,ξ_1,…)∈π_ext^{-1}(x) : ξ_i=x_i})=|Df^i(x_i)|^{-1}. Moreover, for any x̂∈ℛ̂ there are subspaces E_x̂^± of ℝ^2 such that χ_f̂^±(x̂)=χ_f̂(x̂,v)=-χ_{f̂^{-1}}(x̂,v)=-χ_{f̂^{-1}}^∓(x̂), ∀v∈E_x̂^±∖{0}. Recall the definition of the Pesin stable and unstable manifolds given in <cit.>. Let x̂∈ℛ̂, π_ext(x̂)=x. Unlike the invertible case, the notion of unstable manifold depends on the pre-orbit of a point x∈𝕋^2. More precisely, the local unstable manifold at x̂ is a C^1 curve defined by W_loc^u(x̂)={y∈𝕋^2: ∃! ŷ∈L_f, π_ext(ŷ)=y, d(x_n,y_n)≤C_1e^{-nε} and d(x_n,y_n)≤C_2e^{-nλ}, ∀n≥0}, for some constants λ>0, 0<ε<λ/200 and 0<C_1≤1<C_2. We denote by Ŵ^u_loc(x̂) the lift of W^u_loc(x̂) to L_f under π_ext. Since the unstable manifold W^u_loc(x̂) depends on the pre-orbit of x, there is a bouquet of these manifolds passing through x. The unstable manifold of f at x̂ is given by W^u(x̂)={y_0=π_ext(ŷ) : lim sup_{n→∞}(1/n)log d(x_n,y_n)<0}, and we denote its corresponding lift by Ŵ^u(x̂). On the other hand, the local stable manifold at x, denoted by W_loc^s(x), is a C^1 curve defined by W_loc^s(x)={y∈𝕋^2: d(f^n(x),f^n(y))≤C_1e^{-nε} and d(f^n(x),f^n(y))≤C_2e^{-nλ}, ∀n≥0}, for some constants λ>0, 0<ε<λ/200 and 0<C_1≤1<C_2. We denote by Ŵ^s_loc(x̂) the lift of W^s_loc(x) to L_f under π_ext. The stable manifold of f at x is given by W^s(x)=π_ext(⋃_{n=0}^∞f̂^{-n}(Ŵ^s_loc(f̂^n(x̂)))). According to <cit.>, there is an increasing countable family {Λ̂_k}_{k≥0} of compact subsets of ℛ̂ such that μ̂(⋃_{k=0}^∞Λ̂_k)=1, satisfying the following properties: * W^u(·) is continuous on Λ̂_k, and T_{x_0}W^u_loc(x̂)=E^+(x̂) for any x̂∈Λ̂_k, π_ext(x̂)=x_0, for every k≥0. Moreover, for every x̂∈Λ̂_k there is a sequence of C^1 curves {W^u(x̂,-n)}_{n≥0} in 𝕋^2 such that * W^u(x̂,0)=W^u_loc(x̂), * W^u(x̂,-n+1)=f(W^u(x̂,-n)), for every n≥1, * W^u(x̂)=⋃_{n=0}^∞f^n(W^u(x̂,-n)). * If Λ_k=π_ext(Λ̂_k), then W^s(·) is continuous on Λ_k. Besides, * T_{x_0}W^s_loc(x_0)=E^-(x̂). Furthermore, E^-(·) is continuous on Λ_k. * f(W^s_loc(x_0))⊂W^s_loc(f(x_0)). The sets Λ̂_k and Λ_k are the so-called Pesin blocks for the systems (f̂,μ̂) and (f,μ), respectively. Note that the manifolds W^s (resp. Ŵ^s) form an invariant lamination of 𝕋^2 (resp. L_f), but in general the manifolds W^u do not form an invariant lamination, because different elements of this family may intersect. However, the manifolds Ŵ^u do form an invariant lamination of L_f. Besides, absolute continuity properties for the laminations W^s and Ŵ^u were proved in <cit.> and <cit.>, respectively. We state the results for the sake of completeness. Given any Pesin block Λ_k, the holonomy of local stable manifolds of points in Λ_k between any two transversals is absolutely continuous w.r.t. the Lebesgue measure of the two transversals. Given any partition of L_f subordinated to the Pesin unstable lamination Ŵ^u, the disintegrations of μ̂ along the elements of the partition are absolutely continuous w.r.t. the Lebesgue measure on the unstable manifolds.
§.§ Dynamical and ergodic properties of f_t To begin this subsection, we prove the non-domination property of the maps f_t obtained in Theorem A. Let f_t:𝕋^2→𝕋^2 be as in Theorem A. Then, f_t has no dominated splitting, in a robust way. Since f_t∈𝒰, there is a C^1-neighborhood V of f_t contained in 𝒰. In particular, C_χ(g)>β>0 for every g∈V. Moreover, we choose V in such a way that |Dg(x)|≥m_0>1 for every g∈V. Assume that g has a dominated splitting T𝕋^2=E⊕F, where E,F are continuous, invariant, ∠(E_x,F_x)≥ρ>0 for every x∈𝕋^2, and there are constants c>0 and λ∈(0,1) such that for any x∈𝕋^2, ‖Dg^n(x)v_E‖≤cλ^n‖Dg^n(x)v_F‖, ∀v_E∈E_x, v_F∈F_x, ‖v_E‖=‖v_F‖=1, ∀n≥1. Let x∈𝕋^2. Since T_x𝕋^2=span{v_E,v_F}, where v_E∈E_x and v_F∈F_x are unit vectors, we have by the definition of determinant and the domination property that |Dg^n(x)| = ‖Dg^n(x)v_E‖‖Dg^n(x)v_F‖ sin(∠(Dg^n(x)v_E,Dg^n(x)v_F))/sin(∠(v_E,v_F)) ≤ (1/ρ)‖Dg^n(x)v_E‖‖Dg^n(x)v_F‖≤ (c/ρ)λ^n‖Dg^n(x)v_F‖^2< (c/ρ)‖Dg^n(x)v_F‖^2, ∀n≥1. Hence, since |Dg^n(x)|≥m_0^n for any n≥1, ‖Dg^n(x)v_F‖≥Cm_1^n, ∀n≥1, where C=√(ρ/c) and m_1=√(m_0). This shows that the subbundle F expands exponentially along the forward orbit of every point of 𝕋^2. Now, by the definition of Dĝ(·) we have, by the above estimate, that for every x̂∈L_g with π_ext(x̂)=x there is N_1∈ℕ such that ‖Dĝ^{-n}(x̂)v_F‖≤C^{-1}m_1^{-n}<1, ∀n≥N_1. On the other hand, since C_χ(g)>β there is N_2∈ℕ such that (1/n)I(x,v_F;g^n)=(1/n)∑_{y∈g^{-n}(x)}(1/m^n)log‖(Dg^n(y))^{-1}v_F‖≥β_1, ∀n≥N_2, where 0<β_1<β. Then, there is some y_n∈g^{-n}(x) satisfying ‖(Dg^n(y_n))^{-1}v_F‖≥(e^{β_1})^n, ∀n≥N_2, which is equivalent to ‖Dĝ^{-n}(x̂_0)v_F‖≥(e^{β_1})^n>1, ∀n≥N_2, where x̂_0∈L_g satisfies π_ext(x̂_0)=x and x_n=y_n for n≥N_2. Therefore, taking N=max{N_1,N_2}, we obtain a contradiction between (<ref>) and (<ref>). This proves the result. Now, let us consider a linear map E having ±1 as an eigenvalue. In this case, the remaining eigenvalue is m=|E|, because the determinant of a linear map is the product of its eigenvalues. In particular, E is diagonalizable and, up to a linear change of coordinates, it is written as E= ( [ m k(m-1); 0 1 ] ), m>2, where k∈ℕ is chosen in such a way that E does not fix e_2. Moreover, E^{-1}(x)=y+{(i/m,0) : i=0,…,m-1}, ∀x∈𝕋^2, where y is the unique point in ℝ^2 satisfying Ey=x. Moreover, by Theorem 1 and Theorem 2 in <cit.>, it follows that the elementary divisors of E are τ_1=1 and τ_2=m. Since m≥3, by Theorem <ref> there are NUH elements in its homotopy class. Furthermore, those elements have the form f_t=E∘h_t, t>0, where h_t(x)=(x,y+ts(x)), for every x=(x,y)∈𝕋^2, and s:𝕋^1→ℝ is an analytic function with constant derivatives on 𝒢. In addition, if we set δ=2t^{-3/10} in the definition of the critical region, we have for t>0 large enough that k(m-1)|s'(x)|<4t^{-3/10}, ∀x∈𝒞, and |s'(x)|≥t^{-3/10}/2, ∀x∈𝒞∖𝒞_δ, where 𝒞_δ=([p-δ/2,p+δ/2]∪[q-δ/2,q+δ/2])×𝕋^1. In coordinates, Df_t=E∘Dh_t can be written as Df_t(x)= ( [ m+k(m-1)ts'(x) k(m-1); ts'(x) 1 ]), ∀x=(x,y)∈𝕋^2. Note that |Df_t(x)|=m for every x∈𝕋^2. In addition, there is C>1 such that for any t>0 and x∈𝕋^2, (Ct)^{-1}≤m(Df_t(x))≤‖Df_t(x)‖≤Ct and ‖D^2f_t(x)‖≤C^2t, where D^2g denotes the Hessian of g. Let δ_0∈(0,1). Consider m_0∈ℕ such that (⌊(m-1)/2⌋-1)/(⌊(m-1)/2⌋+1)>1-δ_0, ∀m≥m_0.
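A quick symbolic check of the last two facts (ours, using SymPy) confirms that E has eigenvalues m and 1 and that the Jacobian determinant of f_t equals m independently of s'(x):

import sympy as sp

m, k, t, sprime = sp.symbols('m k t sprime', real=True)   # sprime stands for s'(x)

E = sp.Matrix([[m, k * (m - 1)], [0, 1]])
print(E.eigenvals())                         # {m: 1, 1: 1}: eigenvalues m and 1

Df = sp.Matrix([[m + k * (m - 1) * t * sprime, k * (m - 1)],
                [t * sprime, 1]])
print(sp.expand(Df.det()))                   # m, the degree of E, for any s'(x)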
The next two lemmas establish the existence of unstable manifolds of large size, with good tangent direction estimates, on a positive Lebesgue measure subset of 𝕋^2. Their proofs follow the ideas presented in <cit.>, which, in turn, build upon the proof of <cit.>, requiring an adaptation of the Pliss Lemma. This is the main reason for the choice of large m in (<ref>). In our context, we make the necessary adaptations by resorting to the inverse limit space to obtain the precise estimates that we will employ. Let δ_0∈(0,1) be small enough. Let E be a linear map whose elementary divisors are (1,m), where m satisfies (<ref>), and let f_t=E∘h_t be as above. Then, for t large enough there is a set Z_t⊂𝕋^2 of positive Lebesgue measure such that for every x∈Z_t there exists a C^1 curve W_r^+(x) whose length is bounded below by r=t^{-7}. Moreover, T_yW_r^+(x)⊂Δ^h_{4/θ_1} for all y∈W_r^+(x), where θ_1=θ_1(t)∈(0,1). First, by <cit.>, we see that if m satisfies condition (<ref>), one has min{χ_{f_t}^+(x),-χ_{f_t}^-(x)}>(1-δ_0)log t for μ-a.e. x∈𝕋^2. Thus, by (<ref>) there is a full μ̂-measure subset ℛ̂⊂L_f such that the above relation holds on L_f, i.e., min{χ_{f̂_t}^+(x̂),-χ_{f̂_t}^-(x̂)}>(1-δ_0)log t for μ̂-a.e. x̂∈L_f. Now, since μ̂ is hyperbolic, there are at most countably many ergodic components of μ̂. For any ergodic component ν̂_i of μ̂ define the following set: Λ̂_i={x̂∈ℛ̂ : (1/n)∑_{j=0}^{n-1}δ_{f̂_t^{±j}(x̂)}→ν̂_i in the weak* topology}, where δ_z is the Dirac measure at the point z. It is easy to check that Λ̂_i∩Λ̂_j=∅ if i≠j. Let Λ̂_0=⋃_{i∈ℕ}Λ̂_i. Notice that μ̂(Λ̂_0)=1. Define the sets Ẑ_{i,t}^+ = {x̂∈Λ̂_i : ‖Df̂_t^{-n}(x̂)|_{E_x̂^+}‖<(t^{-4/5})^n, ∀n>0}, Ẑ_{i,t}=f̂_t^{-1}(Ẑ_{i,t}^+). In this way, we set Ẑ_t=⋃_{i∈ℕ}Ẑ_{i,t}. From the definition of Df̂ and (<ref>), we follow step by step the argument given in pp. 16-17 of <cit.> to conclude that for t>0 large enough μ̂(Ẑ_t)≥(1-7δ_0)/(1+7δ_0)>0. So, since Ẑ_t⊂π_ext^{-1}(π_ext(Ẑ_t)), it follows that Z_t=π_ext(Ẑ_t) satisfies μ(Z_t)≥(1-7δ_0)/(1+7δ_0)>0. Now, from the definition of Z_t one has that for every x∈Z_t there are a pre-orbit x̂=(x,x_1,…,x_n,…)∈Ẑ_t and a unit vector v^+ such that π_ext(x̂)=x and (1/(Ctm^{1/2}))^{2n}≤‖(Df_t^n(x_n))^{-1}v^+‖^2/|(Df_t^n(x_n))^{-1}|≤(m^{1/2}/t^{4/5})^{2n}, ∀n≥0. Let σ=t^{-4/5}, σ̲=(Ct)^{-1}, ρ=(m^{1/2}t^{-4/5})^2 and ρ̲=(Cm^{1/2}t^{4/5})^{-2}. Note that σ,σ̲,ρ,ρ̲∈(0,1). Furthermore, (σ̲·ρ̲)/(σ·ρ)=(1/C^3)t^{-3/5}>t^{-4/5}=σ, for t>0 large enough. Therefore, since f_t is a local diffeomorphism, by using (<ref>) and the above estimates, we follow step by step the argument given in the proofs of Lemma 3.7 and Proposition 3.11 in <cit.> to obtain the C^1 curve W_r^+(x), for every x∈Z_t, satisfying the desired properties with θ_1=θ_1(t)=t^{-2/5}, for t>0 large enough. Now, let us consider the set Z_t given in the above lemma, and let T=⌊(1+7δ_0)/(28δ_0)⌋. For δ_0>0 small enough we have that T>20. In this case, define X_μ=⋂_{j=0}^{T-1}f_t^{-j}(Z_t). Since μ(Z_t)≥(1-7δ_0)/(1+7δ_0), one has μ(𝕋^2∖Z_t)≤14δ_0/(1+7δ_0). Hence, μ(X_μ) = 1-μ(𝕋^2∖X_μ) ≥ 1-∑_{j=0}^{T-1}μ(f_t^{-j}(𝕋^2∖Z_t))≥ 1-T·14δ_0/(1+7δ_0)≥ 1/2>0. On the other hand, let θ_1 be as in Lemma <ref>, and let θ_2=θ_2(t)=t^{-3/5}. Note that θ_2<θ_1<1. On the other hand, by the definition of δ and 𝒞_δ, d(∂𝒢,∂𝒞_{δ/2})=t^{-3/10}>t^{-7}=r. By the estimates given in (<ref>) we have the following properties for the set Z_t: * Z_t⊂𝒢⊂𝕋^2∖𝒞_{δ/2}=𝒢'. * If x∈𝒢', then Df_t(x)(Δ_{4/θ_1}^h)⊂Δ_{θ_2}^h. * If x∈𝒢' and u∈Δ^h_{θ_2}, then ‖Df_t(x)u‖≥t^{1/2}. In particular, for any C^1 curve γ⊂𝒢' tangent to Δ_{θ_2}^h satisfying ℓ_m(γ)≥t^{-3/10}, one has ℓ_m(f_t(γ))>4. For x∈X_μ, there are n∈ℕ and a C^1 curve γ_+⊂f_t^n(W_r^+(x)) tangent to Δ_{θ_2}^h that crosses 𝕋^2 horizontally. Since x∈X_μ, one has by (a) that x,f_t(x),…,f_t^{T-1}(x)∈𝒢⊂𝒢'. In this way, denote W_0^+=W_r^+(x) and W^+_k(x)=f_t^k(W_0^+) for k≥1. Note that since x∈Z_t and r<d(∂𝒢,∂𝒢'), we have that W_0^+⊂𝒢'. Then, by property (Z2), TW_1^+⊂Δ_{θ_2}^h.
Hence, since ℓ_m(W_1^+)>ℓ_m(W_0^+) and f_t(x)∈𝒢, it follows that there is a connected component γ_1 of W_1^+ contained in 𝒢 such that x_1=f_t(x)∈γ_1, it is tangent to Δ_{θ_2}^h and its length is t^{-7}. Now, let m_0=max{m∈ℕ : f_t^j(γ_1)⊂𝒢, ∀j=1,…,m}. By property (Z3), the definition of γ_1 and X_μ, and the invariance of the cone Δ_{θ_2}^h on 𝒢', we see that the curve γ_j=f_t^j(γ_1), j≥1, satisfies ℓ_m(γ_j)≥t^{j/2}ℓ_m(γ_1)=t^{(j-14)/2}, so that m_0≤14. Let m_0^+=m_0+1≤15<T-1. Then, f_t^{m_0^+}(x)∈𝒢⊂𝒢' and γ_{m_0^+}∩∂𝒢'≠∅. In particular, γ_{m_0^+} also intersects ∂𝒢. Thus, there is a connected component γ̂_{m_0^+} of γ_{m_0^+} contained in 𝒢'∖𝒢 such that γ̂_{m_0^+}∩∂𝒢≠∅ and γ̂_{m_0^+}∩∂𝒢'≠∅, which implies that its length is at least t^{-3/10}, so that, by property (Z3), the C^1 curve γ_+=f_t(γ̂_{m_0^+}) satisfies ℓ_m(γ_+)>4. Moreover, γ_+⊂W_{m_0+3}^+(x) and Tγ_+⊂Δ_{θ_2}^h. Finally, if γ_+(t)=(γ_1(t),γ_2(t)), t∈I, we have that |γ_2'(t)|≤θ_2|γ_1'(t)|≤|γ_1'(t)| for every t∈I, so that ‖γ'_+(t)‖=|γ_1'(t)|. Therefore, the projection of γ_+ to the horizontal axis, denoted by π_h(γ_+), satisfies ℓ(π_h(γ_+))=∫_I|γ_1'(t)|dt=∫_I‖γ_+'(t)‖dt=ℓ_m(γ_+)>1, which proves the result. Finally, let Λ̂_0⊂ℛ̂ be as defined in the proof of Lemma <ref>. As in <cit.>, let Λ̂_1⊂ℛ̂ be the set of points such that for every x̂∈Λ̂_1 there is a full Lebesgue measure subset B of Ŵ^u(x̂) such that B⊂Λ̂_0. By Lemma <ref> we have that μ̂(Λ̂_1)=1. Moreover, by the definition of Λ̂_0, there is an ergodic component μ̂_0 of μ̂ such that B⊂ℬ(μ̂_0). In this case, we define Λ_j=π_ext(Λ̂_j), j=0,1. Notice that these sets are forward invariant and have full Lebesgue measure. Furthermore, if x∈Λ_1, there are x̂∈Λ̂_1 and an ergodic component μ̂_0 of μ̂ such that π_ext(x̂)=x and Lebesgue almost every point in W^u(x̂) belongs to ℬ(μ_0), where μ_0=(π_ext)_*μ̂_0. In <cit.> the authors introduce the notion of a μ_0-regular su-rectangle for an endomorphism of class C^2 as a piecewise smooth simple closed curve in 𝕋^2, consisting of two pieces of local stable manifolds and two pieces of local unstable manifolds, such that the latter two pieces are contained in Λ_1 and almost every point in these pieces belongs to the basin of μ_0. In the aforementioned reference, they showed that when the Lebesgue measure is hyperbolic and almost every stable manifold has large diameter, these rectangles are open modulo zero subsets of 𝕋^2. §.§ Stable ergodicity We are now ready to prove Theorem B. First, notice that in the proof of non-uniform hyperbolicity presented in <cit.>, the following estimate for C_χ(f_t) is obtained in our setting: C_χ(f_t)≥((⌊(m-1)/2⌋-1)/(⌊(m-1)/2⌋+1))log t+C_0, C_0∈ℝ. Let ε>0 be small enough, and let t>0 and m≥m_0, where m_0 is as in (<ref>), be large enough so that ((⌊(m-1)/2⌋-1)/(⌊(m-1)/2⌋+1))log t+C_0-ε>((⌊(m_0-1)/2⌋-1)/(⌊(m_0-1)/2⌋+1))log t. In this case, take f_t=E∘h_t∈𝒰, where E has elementary divisors (1,m). Let us consider the following properties satisfied by Dh_t and f_t: * For any x∈𝒢 we have * (Dh_t(x))^{-1}(Δ_α^h)⊂Δ_α^v, * Dh_t(x)(Δ_α^v)⊂Δ_α^h, * m((Dh_t(x))^{-1}), m(Dh_t(x))>(at-α)/α. * For any (x,u)∈T^1𝕋^2, there are y∈𝒢 and w∈Δ_α^v such that f_t(y)=x and Df_t(y)w=u. * For any x∈𝕋^2, there is a pre-image y∈𝒢 with d(y,𝒞)>1/(8(m+1)). In this way, define 𝒱'={g=E∘h : h∈Diff^2(𝕋^2) is C^2-close to h_t and satisfies (1)}, and we choose 𝒱⊂𝒱' such that every g∈𝒱 satisfies properties (2) and (3) above. Observe that 𝒱 is a C^2-open subset of End^2(𝕋^2) containing f_t. Take g∈𝒲:=𝒰∩𝒱.
Then, by the choice of α>1, the definition of 𝒲 and <cit.>, one has that g is an area-preserving NUH endomorphism such that for any x∈𝕋^2 and every C^1 curve γ_x passing through x, there are N'∈ℕ, a pre-image y∈𝕋^2 and a C^1 curve γ'_x passing through y which crosses 𝕋^2 vertically, such that g^{N'}(γ'_x)=γ_x. In particular, from <cit.> it follows that W_g^s(x) contains a v-segment for almost every point x∈𝕋^2. On the other hand, by shrinking 𝒲 if necessary, estimates similar to (<ref>) and properties (Z1)-(Z3) are obtained for any g∈𝒲. Moreover, we have by the definition of C_χ(f_t), m_0 and relation (<ref>) that C_χ(g)≥((⌊(m-1)/2⌋-1)/(⌊(m-1)/2⌋+1))log t+C_0-ε>((⌊(m_0-1)/2⌋-1)/(⌊(m_0-1)/2⌋+1))log t>(1-δ_0)log t, so that, by <cit.>, min{χ_g^+(x),-χ_g^-(x)}>(1-δ_0)log t for Lebesgue almost every point x∈𝕋^2. Therefore, Lemma <ref>, Lemma <ref>, Lemma <ref> and Lemma <ref> hold for every g∈𝒲. Define the sets X_g and Λ_1^g in a way analogous to that given for f_t. Assume that μ has at least two different ergodic components μ_0 and μ_1. On one hand, since Λ_1^g has full Lebesgue measure and X_g has positive Lebesgue measure, there is x∈X_g∩Λ_1^g. Suppose that x∈ℬ(μ_0). Then, by Lemma <ref> and Lemma <ref>, there are a C^1 curve γ_x containing x and a natural number n_0<20 such that γ^{n_0}=g^{n_0}(γ_x) crosses 𝕋^2 horizontally. Moreover, by the topological characterization of the Pesin unstable manifold we have W_r^+(x)⊂W^u(x̂), where x̂∈Λ̂_1^g satisfies π_ext(x̂)=x, so that there is a subset B of γ^{n_0} of full Lebesgue measure contained in ℬ(μ_0). On the other hand, let R be a μ_1-regular su-rectangle and take y∈int(R)∩ℬ(μ_1). Then, there is a small C^1 curve γ_y⊂R containing y. Hence, there exist N∈ℕ and a C^1 curve γ_y' crossing 𝕋^2 vertically such that g^N(γ_y')=γ_y. Figure 1 helps to visualize the argument up to this point. Therefore, as γ^{n_0} and γ_y' cross 𝕋^2 horizontally and vertically, respectively, there is a point z∈γ^{n_0}∩γ_y'. Hence, since g is a local diffeomorphism, we conclude by the absolute continuity of the stable lamination (Lemma <ref>) that there is a positive Lebesgue measure subset R_0⊂ℬ(μ_0)∩R, which is impossible. Thus, μ is ergodic for every g∈𝒲, which proves the result. [A] M. Andersson, Transitivity of conservative toral endomorphisms, Nonlinearity 29 (2016), 1047–1055. [ACS] M. Andersson, P. Carrasco and R. Saghin, Non-uniformly hyperbolic endomorphisms, arXiv:2206.08295v2, 2022. [AM] A. Arbieto and C. Matheus, A pasting lemma and some applications for conservative systems, Ergodic Theory and Dynamical Systems 27 (2007), 1399–1417. [BC] P. Berger and P. Carrasco, Non-uniformly hyperbolic diffeomorphisms derived from the standard map, Comm. Math. Phys. 329 (2014), 239–262. [BM] J. Bochi, Genericity of zero Lyapunov exponents, Ergodic Theory and Dynamical Systems 22 (2002), 1667–1696. [BW] K. Burns and A. Wilkinson, Stable ergodicity of skew products, Ann. Scient. Éc. Norm. Sup. 32 (1999), 859–889. [C] B. Chirikov, A universal instability of many-dimensional oscillator systems, Physics Reports 52 (1979), 263–379. [CP18] S. Crovisier and E. R. Pujals, Strongly dissipative surface diffeomorphisms, Comment. Math. Helv. 93 (2018), 377–400. [DP] D. Dolgopyat and Y. Pesin, Every compact manifold carries a completely hyperbolic diffeomorphism, Ergodic Theory and Dynamical Systems 22 (2002), 409–435. [MPS] M. Grayson, C. Pugh, and M. Shub, Stably ergodic diffeomorphisms, Annals of Mathematics 140 (1994), 295–329. [J] V. Janeiro, Existence of non-uniformly hyperbolic endomorphisms in homotopy classes, J. Dyn.
Control Syst., https://doi.org/10.1007/s10883-023-09668-8, 2023. [K] A. Katok, Bernoulli diffeomorphisms on surfaces, Annals of Mathematics 110 (1979), 529–547. [L] P.-D. Liu, Invariant measures satisfying an equality relating entropy, folding entropy and negative Lyapunov exponents, Communications in Mathematical Physics 284 (2008), 391–406. [LS] P.-D. Liu and L. Shu, Absolute continuity of hyperbolic invariant measures for endomorphisms, Nonlinearity 24 (2011), 1595–1611. [M] E. Mihailescu, Physical measures for multivalued inverse iterates, J. Stat. Phys. 139 (2010), 800–819. [NORH] G. Nuñez, D. Obata and J. Rodriguez Hertz, New examples of stably ergodic diffeomorphisms in dimension 3, Nonlinearity 34 (2021), 1352–1365. [Oba2018] D. Obata, On the stable ergodicity of Berger–Carrasco's example, Ergodic Theory and Dynamical Systems 40 (2018), 1008–1056. [O] V. I. Oseledec, A multiplicative ergodic theorem. Characteristic Ljapunov exponents of dynamical systems, Trudy Moskov. Mat. Obšč. 19 (1968), 179–210. [P] Y. Pesin, Characteristic Lyapunov exponents and smooth ergodic theory, Russian Mathematical Surveys 32 (1977), 55–114. [PS] C. Pugh and M. Shub, Stably ergodic dynamical systems and partial hyperbolicity, J. of Complexity 13 (1997), 125–179. [QZ] M. Qian and S. Zhu, SRB measures and Pesin's entropy formula for endomorphisms, Transactions of the American Mathematical Society 354 (2002), 1453–1471. [R] J. J. Rushanan, Eigenvalues and the Smith normal form, Linear Algebra and its Applications 216 (1995), 177–184. [T] A. Tahzibi, Stably ergodic diffeomorphisms which are not partially hyperbolic, Israel Journal of Mathematics 142 (2004), 315–344. [VY] M. Viana and J. Yang, Continuity of Lyapunov exponents in the C^0 topology, Israel J. Math. 229 (2019), 461–485. | http://arxiv.org/abs/2312.16742v1 | {
"authors": [
"Sebastián Ramírez",
"Kendry J. Vivas"
],
"categories": [
"math.DS"
],
"primary_category": "math.DS",
"published": "20231227230742",
"title": "Non-uniform hyperbolicity of maps on $\\mathbb{T}^2$"
} |
Instituto de Astrofísica, Depto. de Ciencias Físicas, Facultad de Ciencias Exactas, Universidad Andres Bello, Fernandez Concha 700, Santiago, RM, Chile [email protected] Specola Vaticana, Vatican Observatory, Castelgandolfo, V00120, Stato Citta Vaticano, Italy Departamento de Fisica, Universidade Federal de Santa Catarina, Florianopolis, Trindade 88040-900, SC, Brazil Department of Astronomy, School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan [email protected] Laboratory of Infrared High-resolution Spectroscopy (LiH), Koyama Astronomical Observatory, Kyoto Sangyo University, Motoyama, Kamigamo, Kita-ku, Kyoto 603-8555, Japan Instituto de Astronomía, Universidad Católica del Norte, Av. Angamos 0610, Antofagasta, Chile National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan Photocoding, 460-102 Iwakura-Nakamachi, Sakyo-ku, Kyoto 606-0025, Japan Department of Astrophysics and Atmospheric Sciences, Faculty of Science, Kyoto Sangyo University, Motoyama, Kamigamo, Kita-ku, Kyoto 603-8555, Japan Centre for Astrophysics Research, University of Hertfordshire, College Lane, Hatfield, AL10 9A, United Kingdom Institute of Astronomy, University of Cambridge, Madingley Rd., Cambridge, CB3 0HA, United Kingdom Centro de Astronomia (CITEVA), Universidad de Antofagasta, Av. Angamos 601, Antofagasta, Chile INAF, Osservatorio Astronomico di Roma, Via di Frascati 33, Monteporzio Catone, 00040, Italy The Galactic centre is hazardous for stellar clusters because of the strong tidal force. Supposedly, many clusters were destroyed and contributed stars to the crowded stellar field of the bulge and the nuclear stellar cluster. However, it is hard to develop a realistic model to predict the long-term evolution of the complex inner Galaxy, and observing surviving clusters in the central region would provide crucial insights into destruction processes. Among hitherto-known Galactic globular clusters, VVV CL002 is the closest to the centre, 0.4 kpc, but has a very high transverse velocity, 400 km s^-1. The nature of this cluster and its impact on Galactic astronomy need to be addressed with spectroscopic follow-up. Here we report the first measurements of its radial velocity and chemical abundance based on near-infrared high-resolution spectroscopy. We found that this cluster has a counterrotating orbit constrained within 1.0 kpc of the centre, as close as 0.2 kpc at the perigalacticon, confirming that the cluster is not a passerby from the halo but a genuine survivor enduring the harsh conditions of the Galactic mill's tidal forces. In addition, its metallicity and α abundance ([α/Fe] ≃ +0.4 and [Fe/H]=-0.54) are similar to those of some globular clusters in the bulge. Recent studies suggest that such α-enhanced stars were more common at 3–6 kpc from the centre around 10 Gyr ago. We infer that VVV CL002 was formed outside but is currently falling down to the centre, exhibiting a real-time event that must have occurred to many clusters a long time ago. The globular cluster VVV CL002 falling down to the hazardous Galactic centre Dante Minniti 1,2,3 Noriyuki Matsunaga 4,5 José G. Fernández-Trincado 6 Shogo Otsubo 5 Yuki Sarugaku 5 Tomomi Takeuchi 5 Haruki Katoh 5 Satoshi Hamano 7 Yuji Ikeda 5,8 Hideyo Kawakita 5,9 Philip W. Lucas 10 Leigh C. Smith 11 Ilaria Petralia 1 Elisa Rita Garro 1 Roberto K.
Saito 3 Javier Alonso-García 12 Matías Gómez 1 María Gabriela Navarro 13 Received Month DD, Year; accepted Month DD, Year ========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================= § INTRODUCTION Whereas more than 200 globular clusters have been found in the Galaxy to date, there is plenty of evidence for clusters having been destroyed by various evolutionary and dynamical processes, including dynamical friction, shocking by the disk and bulge, tidal disruption, and so on <cit.>. Many of these dynamical processes are stronger in the deep potential well of the inner Galaxy. Numerical simulations have revealed that the supermassive black hole Sagittarius A* <cit.> is a very efficient machine for grinding clusters, near which globular clusters could be rapidly demolished <cit.>. In order to address the process of globular cluster disruption, it is crucial to understand not only the clusters that have been destroyed but also the surviving globular clusters. The search for globular clusters in the inner Galaxy has been incomplete because of large interstellar extinction and heavy source crowding. Still, recent near-infrared surveys have revealed dozens of candidate clusters in the inner bulge <cit.>. Confirmation of member stars and detailed characterization of the clusters' properties then require infrared spectroscopic observations, owing to the large interstellar extinction near the centre. VVV CL002 is a relatively low-luminosity globular cluster (M_K=-7.1 mag, M_V=-4.6 mag <cit.>), which was discovered at 1.1 deg from the Galactic centre through the VISTA Variables in the Via Lactea (VVV) survey <cit.>. The distance measured with the red clump places this cluster closest to the centre, 0.4 kpc, among known globular clusters, and an additional surprise is the high transverse velocity, 400 km s^-1, inferred from the proper motion <cit.>. Some of the RR Lyrae variables with such high velocities found in the bulge turned out to be halo objects <cit.>. Is VVV CL002 an interloper from the halo passing near the Galactic centre? If it instead remains near the centre, we need to consider how it survives without being tidally disrupted. § OBSERVATION AND DATA ANALYSIS §.§ Target selection Selecting good targets embedded in the crowded stellar field is a crucial step for follow-up spectroscopic observations. In particular, cluster membership determination is very tricky in such a crowded field (Fig. <ref>). The line-of-sight contamination by stars in various groups increases at low latitudes, and accurate 6D information (position, radial velocity and proper motion) is vital in judging membership. Based on the VVV photometry and the proper motions in the updated VIRAC2 database (Smith et al., in prep), we selected a few candidate red giant members (Fig. <ref>), including the two stars that we actually observed.
The main criteria for the selection of this sample were: 1) stars within 0.1 mag of the mean RGB ridge line in the near-IR colour-magnitude diagram; 2) stars with PMs within 0.1 mas/yr of the mean GC PM; 3) stars that appeared unblended in the optical and near-IR images; 4) stars brighter than J = 14.2 mag. Two of these four main targets were successfully observed (Table 1). The remaining two targets could not be observed because of time and weather constraints during our observing run. These are Gaia ID 9327562004074, with Ks=11.87 mag and J-Ks=2.22 mag, and Gaia ID 9327562025638, with Ks=11.90 mag and J-Ks=2.24 mag, which would also be prime targets for future spectroscopic observations. The proper motions of candidate members of VVV CL002 show a large offset from the bulk of the bulge field stars, which indicates the large transverse velocity of the cluster, ∼400 km s^-1 <cit.>. Although some contaminants remain, the stars selected by proper motion show the red giant branch and red clump expected for a globular cluster. §.§ Observation On June 10th, 2023, we used the WINERED spectrograph <cit.> attached to the Magellan Clay telescope in Chile to obtain high-resolution spectra of two stars. WINERED is a near-IR high-resolution spectrograph covering 0.90–1.35 μm (z', Y, and J bands), with a resolution of R = λ/Δλ = 28,000 in the WIDE mode (Ikeda et al. 2022). The raw spectral data were reduced with the WINERED Automatic Reduction Pipeline (WARP[https://github.com/SatoshiHamano/WARP/], version 3.8). We confirmed that the broadening width in the final spectra is as small as 12 km s^-1, which can be explained by the combination of the instrumental resolution (10.7 km s^-1) and a typical macroturbulence of about 5 km s^-1. These two stars, at J ≃ 13.5 mag, are the brightest of the candidate members we selected, but they are still rather faint targets for high-resolution spectroscopy in the infrared. Our spectra, with 1200 s exposures for each star, are of moderate quality. While the signal-to-noise ratios are S/N = 30–50 in the J band, they are 15–20 in the Y band (shorter wavelengths) because of severe interstellar extinction. Therefore, our analysis relied mainly on the J-band part of the spectra (Fig. <ref>). The spectra of the two stars show a striking resemblance to each other (Fig. <ref>), with the same absorption lines appearing at the same wavelengths and exhibiting very similar depths. This already indicates that the two target stars are red giants with very similar characteristics, belonging to a common stellar group, VVV CL002. §.§ Measurements of radial velocities We measured radial velocities using the cross-correlation technique involving model synthetic spectra and telluric absorption spectra <cit.>. For this analysis, we included some Y-band echelle orders together with the J-band orders, in which telluric lines and stellar lines are well mixed. We obtained almost identical velocities for the two stars (Table <ref>), strongly supporting their membership of VVV CL002 when combined with the proper motions (Fig. <ref>). §.§ Measurements of chemical abundances In spite of the limited S/N, we were able to measure the abundances of Fe, Mg, and Si. First, we estimated the effective temperature employing line-depth ratios <cit.>. We could measure the ratios of 6–7 line pairs taken from Taniguchi et al. (2018), as illustrated in Fig. <ref>.
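For readers unfamiliar with the line-depth-ratio (LDR) method, the following Python sketch (ours; the calibration numbers are placeholders, not the published relations) shows the basic arithmetic: each calibrated pair of low- and high-excitation lines converts a measured depth ratio into a temperature, and the pair-wise estimates are averaged:

import numpy as np

def teff_from_ldr(depth_pairs, calibrations):
    # depth_pairs  : list of (depth_low_EP, depth_high_EP) measured line depths
    # calibrations : one function per pair mapping log10(ratio) -> Teff in K
    estimates = [cal(np.log10(d1 / d2))
                 for (d1, d2), cal in zip(depth_pairs, calibrations)]
    return float(np.mean(estimates)), float(np.std(estimates, ddof=1))

# Placeholder linear relations Teff = a + b * log10(r), for illustration only:
cals = [lambda r, a=a, b=b: a + b * r for a, b in [(4500, -900), (4480, -870)]]
print(teff_from_ldr([(0.30, 0.22), (0.28, 0.20)], cals))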
Adopting an error of 200 K, we use the approximate value of the temperatures, T_eff=4100 K, for both stars in the following analysis. It is impossible to determine the other stellar parameters given the limited quality of the current spectra. Considering the similarity to Arcturus, we used the following parameters: log g=1.5 and microturbulent velocity ξ=1.5 km s^-1 <cit.>. Then, we determined the chemical abundances by fitting model spectra to individual absorption lines that were isolated from telluric lines and other strong stellar lines. The measurements were successfully done for about 10 Fe I lines, 7 Si lines, and 4 Mg lines, leading to the abundances listed in Table <ref>. The errors in the [X/H] values of individual stars are given by the combination of the standard deviations of the line-by-line abundances and the systematic errors caused by uncertainties in the stellar parameters such as T_eff. The line-by-line random errors dominate the total errors in all cases, i.e., for the three elements in the two stars. Among the systematic errors, the errors from the log g uncertainty tend to be the largest error source, 0.6–0.9 dex in [X/H], but the T_eff uncertainty has the highest impact on [Si/H], ∼0.1 dex. The trends of the systematic errors are similar to what was found for Arcturus (Fig. 7 in Fukue et al. 2021), but the errors in each stellar parameter, especially log g, are significantly larger for our targets. Finally, we took the weighted means, and their errors, of the [X/H] values of the two stars to discuss the chemical abundances of the cluster (Table <ref>). In fact, we obtained almost identical velocities and common chemical abundances, within the errors, for the two stars (Table <ref>). Such close agreement is unexpected for field stars in the bulge, which are characterized by a large velocity dispersion and a wide metallicity distribution <cit.>. In addition, the spectra of these two stars are similar to the high-S/N spectrum of the prototypical red giant Arcturus ([Mg/H]=-0.2, [Si/H]=-0.2, [Fe/H]=-0.5 <cit.>), supporting the metallicities and the α enhancement of our two stars. There are absorption lines of some other elements seen in the obtained spectra, as indicated in Fig. 1. It would be valuable to measure the abundances of such elements, but we limited ourselves to the three elements (Fe, Si, Mg) for which a simple analysis with the limited-S/N spectra was sufficient. For example, the number of useful Ca lines is more limited than for Mg because of blends with telluric lines and stellar lines, and also because only a few lines have the moderate depths appropriate for the limited S/N values. Ti tends to be affected by NLTE. Measuring C and/or N abundances with CN would require CO and OH lines together, but none of these lines are available in the WINERED range. § CLUSTER ORBIT We computed the orbit using the 3D barred Galaxy steady-state potential model of the GravPot16 code <cit.>. We ran the orbital simulations considering different bar pattern speeds, Ω_bar = 31, 41, and 51 km s^-1 kpc^-1 <cit.>. For VVV CL002, we obtained 10,000 orbits adopting a simple Monte Carlo re-sampling, in which the uncertainties in the input coordinates (α, δ), proper motions, radial velocity, and distance were randomly propagated as 1σ Gaussian variations. Fig. <ref> shows the computed orbits using Ω_bar = 41 km s^-1 kpc^-1, displayed as probability densities of the orbits projected on the equatorial Galactic plane (left panel) and the height above the plane z versus the Galactocentric radius in kpc (right panel). The lighter colours indicate more probable regions of space, which are travelled more frequently by the simulated orbits.
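The Monte Carlo described above can be outlined in a few lines of Python (our sketch; GravPot16 itself is not reproduced here, so the orbit integrator is left as a user-supplied callable):

import numpy as np

def monte_carlo_orbits(integrate_orbit, inputs, n=10_000, seed=0):
    # inputs maps each observable (coordinates, proper motions, radial
    # velocity, distance) to (value, 1-sigma error); each draw perturbs all
    # of them with Gaussian noise and integrates one orbit. `integrate_orbit`
    # must return, e.g., (R_peri, R_apo, Z_max) for one set of inputs.
    rng = np.random.default_rng(seed)
    draws = [{key: rng.normal(mu, sig) for key, (mu, sig) in inputs.items()}
             for _ in range(n)]
    return np.array([integrate_orbit(**draw) for draw in draws])

# Usage with a GravPot16 wrapper and the measured values/errors, summarizing
# each column as mean +/- std (e.g., R_peri = 0.19 +/- 0.21 kpc):
#   out = monte_carlo_orbits(my_gravpot16_wrapper,
#                            {"dist": (d, sd), "rv": (v, sv),
#                             "pm_ra": (m1, s1), "pm_dec": (m2, s2)})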
According to the results of this calculation, VVV CL002 has a retrograde orbital configuration of relatively high eccentricity (e=0.69 ± 0.22), with perigalactocentric and apogalactocentric distances well inside the Galactic bulge, at R_peri = 0.19 ± 0.21 kpc and R_apo = 1.04 ± 0.29 kpc respectively, and with moderate vertical excursions from the Galactic plane (|Z_max| = 0.35 ± 0.09 kpc). It is important to note that, for any adopted heliocentric distance, our simulations confirm that VVV CL002 now belongs kinematically to the bulge, instead of being a halo globular cluster on a very eccentric orbit that is merely passing through the inner bulge. § DISCUSSION The radial velocity we obtained allows us to place a strong constraint on the full 3D motion of the cluster. We calculated 10,000 orbits of VVV CL002, adopting a Monte Carlo re-sampling that takes into account the uncertainties in input parameters such as distance, radial velocity, and proper motion. We found that VVV CL002 has a retrograde orbital configuration of relatively high eccentricity (e=0.69 ± 0.22), with perigalactocentric and apogalactocentric distances well inside the bulge, R_peri = 0.19 kpc and R_apo = 1.04 kpc (Fig. <ref>). The retrograde motion is not unique, as there exist a few other retrograde globular clusters enduring the harsh environment of the Galactic bulge <cit.>. However, the orbit of VVV CL002 is tighter than the orbits of all other known globular clusters <cit.>. No globular cluster is expected to survive over its lifetime (>10 Gyr) in such proximity to the Galactic centre <cit.>. In order to unveil the mysterious history of this cluster, its chemical abundances play an essential role. The abundances of the two stars agree within the errors, giving an estimate of the cluster's metallicity, [Fe/H]=-0.54 ± 0.27, consistent with a previous photometric determination ([Fe/H]=-0.4, <cit.>), and [α/Fe] ≃ +0.4 (Table <ref>). The high α enhancement points to rapid chemical evolution dominated by core-collapse supernovae rather than type Ia supernovae <cit.>. This confirms that VVV CL002 is an old globular cluster formed together with other clusters and field stars present today in the Galactic bulge (Fig. <ref>), rather than a younger open cluster or the remains of an (already-disrupted) dwarf galaxy <cit.>. Furthermore, using a state-of-the-art technique for estimating stellar birth radii (R_birth) within the Galaxy <cit.>, recent studies demonstrated that stars with relatively low metallicity (among bulge stars) and high α enhancement formed preferentially in the outer part, at 3–6 kpc in R_birth <cit.>. This brings us to a scenario in which VVV CL002 formed at a relatively large R_birth and started to fall towards the centre only recently. It is probably doomed to continue spiralling into the inner parsecs and to be destroyed in the not-so-distant future. This cluster sheds light on the intriguing survival and migration mechanisms of globular clusters, while many less-characterized globular clusters and candidates lie within a couple of kiloparsecs of the centre. Demand is high for near-infrared high-resolution spectroscopy of such clusters, which has so far been hampered by severe interstellar extinction. This paper is based on WINERED data gathered with the 6.5 m Magellan Telescope located at Las Campanas Observatory, Chile.
This research is supported by JSPS Bilateral Program Number JPJSBP120239909. The observing run in 2023 June was partly supported by KAKENHI (grant No. 18H01248). We also acknowledge Scarlet S. Elgueta and Rogelio R. Albarracin for supporting the observations. WINERED was developed by the University of Tokyo and the Laboratory of Infrared High-resolution Spectroscopy, Kyoto Sangyo University, under the financial support of KAKENHI (Nos. 16684001, 20340042, and 21840052) and the MEXT Supported Program for the Strategic Research Foundation at Private Universities (Nos. S0801061 and S1411028). We gratefully acknowledge the use of data from the ESO Public Survey program IDs 179.B-2002 and 198.B-2004 taken with the VISTA telescope, and data products from the Cambridge Astronomical Survey Unit. D.M. acknowledges support from the ANID BASAL projects ACE210002 and FB210003, from Fondecyt Project No. 1220724, and from CNPq/Brazil through project 350104/2022-0. J.G.F.-T. acknowledges support provided by the Agencia Nacional de Investigación y Desarrollo de Chile (ANID) under the Proyecto Fondecyt Iniciación 2022 Agreement No. 11220340, from the Joint Committee ESO-Government of Chile 2021 under Agreement No. ORP 023/2021, and from Becas Santander Movilidad Internacional Profesores 2022, Banco Santander Chile. R.K.S. acknowledges support from CNPq/Brazil through projects 308298/2022-5 and 350104/2022-0. E.R.G. acknowledges support from ANID PhD scholarship No. 21210330. | http://arxiv.org/abs/2312.16028v1 | {
"authors": [
"D. Minniti",
"N. Matsunaga",
"J. G. Fernandez-Trincado",
"S. Otsubo",
"Y. Sarugaku",
"T. Takeuchi",
"H. Katoh",
"S. Hamano",
"Y. Ikeda",
"H. Kawakita",
"P. W. Lucas",
"L. C. Smith",
"I. Petralia",
"E. R. Garro",
"R. K. Saito",
"J. Alonso-Garcia",
"M. Gomez",
"M. G. Navarro"
],
"categories": [
"astro-ph.GA"
],
"primary_category": "astro-ph.GA",
"published": "20231226123406",
"title": "The globular cluster VVV CL002 falling down to the hazardous Galactic centre"
} |
E. L. Gagnon*,1,2, D. Anbajagane1,2, J. Prat1,2, C. Chang1,2, J. Frieman1,2,3 January 14, 2024 ========================================================================================================== § FROM TOY MODELS TO THE REAL WORLD VIA NUMERATORS AND ZEROS Over the last few decades, a rich combinatorial and geometric structure underlying scattering amplitudes has been revealed. These descriptions have been most successful in the context of theories, such as planar N = 4 super-Yang-Mills (SYM) <cit.> and the Tr(ϕ^3) theory for colored scalars <cit.>, for which the amplitudes are relatively simple. Speaking most invariantly, these are theories for which the amplitudes can be entirely determined by their long-distance singularities, for instance, their factorization properties on massless poles at tree level. Loosely speaking, the combinatorics and geometry provide an alternate understanding of the rich and intricate pattern of denominators of the amplitude, which turn out to non-trivially determine the entire amplitude as well. But as we turn towards describing much more interesting and physically relevant theories, such as the non-supersymmetric gauge interactions of the Standard Model, we must incorporate a qualitatively new feature present in the amplitudes. At the most basic and concrete level, there are numerator factors associated with the more interesting interaction vertices of the realistic Lagrangians. Of course even N = 4 SYM has such vertices, so the more precise and invariant statement is that more interesting and realistic theories have new “poles at infinity”, not associated simply with massless factorizations, which must be incorporated in any new combinatorial/geometric formulation of this physics. Such poles are absent in planar N=4 SYM (as one consequence of the famous hidden “dual conformal invariance” of the theory <cit.>), as well as in Tr(ϕ^3) theory (a consequence of a cousin hidden “projective invariance” of the amplitudes <cit.>). Poles at infinity are naturally associated with non-trivial numerator factors, whose presence and purpose in life must be exposed in the next phase of the adventure of connecting combinatorial geometry to the real world. The most obvious place to search for new structure associated with numerators is to understand whether these give rise to interesting patterns of zeros of the amplitude – so this is where we start. We will begin by studying the simplest theory of colored scalars, Tr(ϕ^3) theory, which has been much studied recently from the new perspectives of tropical geometry, u-variables and surfacehedra <cit.>. This may appear to be an odd starting point since the Tr(ϕ^3) theory is precisely the most “overly simple” theory with no numerator factors in its amplitude!
And yet, as we will see, even this seemingly boring theory has a surprising and rich pattern of amplitude zeros, and what is more, this pattern extends to much more interesting and realistic theories of pions and gluons, revealing a striking hidden unity between these three theories, which, as we will show, are in a precise sense “contained” in each other.

The presence of these hidden zeros, at least for the Tr(ϕ^3) theory, is not at all manifest in the diagrammatic expansion for the amplitude, but is made obvious by the understanding of the Tr(ϕ^3) amplitudes as the “canonical form” of the so-called ABHY associahedron polytope, which we review in section <ref>. The zeros are connected with the fact that the ABHY associahedron is built out of simpler lower-dimensional objects – it is a Minkowski sum of simplices. As we will see in section <ref>, by turning off a sufficient number of such building blocks the polytope collapses and the amplitude vanishes. This geometric picture also tells us that in the last step before the polytope collapses, it takes the form of a “sandwich”, with an interval separating opposite facets of the associahedron. This implies that the amplitude factorizes into lower-point amplitudes, in a completely predictable fashion. It is fascinating to discover a new sort of factorization of amplitudes, which has nothing to do with the usual factorization on poles, but instead characterizes the behavior of amplitudes as we approach the hidden zeros. Of course the behavior of amplitudes near poles is perhaps the best studied aspect of the physics of particle scattering. By stark contrast, the kinematic locus where amplitudes vanish has hardly been explored, and a clear interpretation of our zeros in familiar physical terms is still lacking. Indeed both the zeros and the factorization near zeros are properties of the whole amplitude, not features manifest from the Feynman diagram perspective; they are instead made manifest by the alternative geometric description of the amplitudes provided by the associahedron.

Amazingly, we will find that exactly the same patterns of zeros, and an avatar of the factorization near zeros, generalize to all interesting theories of colored particles: the Non-linear Sigma Model (NLSM) for pions, as well as gluons in Yang-Mills theory (YM). In sections <ref> and <ref>, we explain how this generalization works. The universality of these zeros seen in other colored theories is especially surprising given that no obvious associahedron-ic formulation for these theories is known. However, as we will see, there is a beautiful reason for the universality of these zeros, which trace back to a surprising relation between these colored theories, revealed upon understanding a unified “stringy” description of all these amplitudes. These stringy generalizations inherit all the zeros and factorization patterns of the field theory amplitudes and in fact generalize them to infinite new families of zero/factorization patterns. They also allow us to see that amplitudes for colored scalars, pions, and gluons are all given by a single function, expanded about different points in the kinematic space. This remarkable connection between Tr(ϕ^3) scalars, pions, and gluons will be explored at length in sections <ref> and <ref>.
Our goal in this paper is to explain the hidden zero/factorization patterns in the simplest possible setting and use this to motivate the new descriptions for amplitudes of pions and gluons arising from a simple kinematic shift of the Tr(ϕ^3) theory. To keep the discussion as simple as possible, for the story of zeros and factorization we will focus on tree-level amplitudes; this will already be enough to suggest the kinematic shift relating all the colored theories, which naturally generalizes to all loop orders. With this impetus as a starting point, we will take up a detailed description of both the Non-linear Sigma Model and Yang-Mills amplitudes from this point of view in upcoming works <cit.>.

§ TR(Φ^3) THEORY AND THE ASSOCIAHEDRON

In this section we will review the associahedron construction presented in <cit.>, and explain in detail the pattern of zeros and factorizations for Tr(ϕ^3) theory that is made obvious by this construction. Henceforth we will focus on tree-level amplitudes. However, since there is an analogous Minkowski sum picture for polytopes describing loop integrands <cit.>, we expect these observations to generalize at loop level.

The theory we are interested in is a theory of colored massless scalars interacting via a cubic interaction, described by the following Lagrangian:

ℒ_Tr(ϕ^3) = Tr(∂ϕ)^2 + g Tr(ϕ^3),

where ϕ is an N × N matrix. Since this is a scalar theory, the amplitude is exclusively a function of the Lorentz-invariant dot products of momenta: p_i · p_j. There are n(n-1)/2 of these invariants; however, momentum conservation gives us n relations between them via ∑_j p_i · p_j = -p_i^2 = 0, so we have n(n-1)/2 - n independent invariants [Note that our discussion holds for general arbitrarily large spacetime dimension D; in any fixed spacetime dimension there are further “Gram determinant” constraints on the dot products p_i · p_j for n>D particles.]. But there is no canonical way of imposing the momentum conservation constraints, and no cyclically invariant choice of the p_i · p_j that form a basis. This is perhaps not a surprise, since there is nothing invariantly special about dot products between pairs of momenta; many different linear combinations of these objects can also be considered, so it is no surprise that a canonical basis for the invariants does not exist ab initio; a better basis should reflect the exigencies of the physics we are trying to describe.

Indeed, there is a much nicer basis for the kinematic invariants that is directly tailored to our physical problem. Let us consider a fixed color ordering, which we can take without loss of generality to be (1,2,⋯,n). We can keep track of the momenta of the particles in a familiar way by drawing each p_i^μ as an edge of the “momentum polygon” (see figure <ref> for the 6-point momentum polygon). We use the color ordering to order the momenta in the polygon one after the other in the same way, (p_1^μ, p_2^μ, ⋯, p_n^μ). The fact that the polygon closes reflects momentum conservation ∑_i p_i^μ = 0. The vertices of this polygon can be associated with points x_i^μ so that p_i^μ = (x_i+1^μ - x_i^μ), which makes momentum conservation manifest. Now consider any chord in this polygon connecting the vertices (i,j), and consider the squared difference X_i,j = (x_i - x_j)^2. The X_i,j are naturally associated with the propagators that appear in Feynman diagrams for this color ordering: X_i,j = (p_i + … + p_j-1)^2.
The tree amplitude is exclusively a function of these variables, 𝒜^Tr(ϕ^3)_n ≡ 𝒜^Tr(ϕ^3)_n({X_i,j}). Note that X_i,i+1 = p_i^2 = 0 is not a dynamical variable, but all the rest of the X_i,j are independent, and there are exactly n(n-1)/2 - n of them. Hence the X_i,j's give a complete basis for all the kinematic invariants. This basis is much nicer than the one provided by dot products: the X_i,j are what appears directly in the amplitudes, the basis respects the cyclic symmetry of the amplitudes, and all the momentum conservation conditions are automatically taken into account: specifying an unconstrained set of X_i,j fixes the kinematical data for our scattering process. Since the X_i,j are a basis, we can express all the other kinematic invariants in terms of them, including the dot products we began with. If we define

c_i,j = -2 p_i · p_j,

the relation is simply

c_i,j = X_i,j + X_i+1,j+1 - X_i,j+1 - X_i+1,j.

§.§ The kinematic mesh

One particularly nice way of encoding the kinematic data of the scattering process is using the kinematic mesh <cit.>. As we will see, the mesh is not only useful for organizing the momentum invariants but will also be crucial for understanding the features of the amplitude we will be studying throughout this paper. The guiding principle used to build the mesh is equation (<ref>). We associate the four X_i,j's appearing in (<ref>) with the vertices of a square rotated by a 45^∘ angle (see figure <ref> on the left), and the corresponding c_i,j with the square itself. By placing these squares together we form a square grid tilted by 45^∘, whose boundaries contain X_i,i+1 = p_i^2 = 0, since we are assuming massless particles. In figure <ref>, we present the mesh for the case of a 6-point process. We can see that all the planar variables, X_i,j, are associated with grid points, while the non-planar dot products of momenta (the c_i,j with i, j non-adjacent indices) correspond to the square tiles. The mesh extends infinitely far but reflects the cyclic symmetry of the problem by an interesting “Mobius” symmetry, where we identify X_i,j = X_j,i and c_i,j = c_j,i. Due to the structure of (<ref>), if we consider any causal diamond inside the mesh, the relation satisfied by the four X's at the vertices of the causal diamond is exactly that of (<ref>), where on the l.h.s., instead of c_i,j, we have the sum of all the c_i,j's inside the causal diamond under consideration (see figure <ref> on the right).

We have already seen that the X_i,j's form a basis for all kinematic invariants. However, one can also build a basis in which we trade some of the planar variables for non-planar ones. In particular, we are interested in the case where we get a basis including the planar variables X_i^⋆,j^⋆ of a particular triangulation, 𝒯, together with a set of non-planar Mandelstams, c_i^⋆,j^⋆. One way to find such a basis is by considering a minimal subregion of the mesh that includes all the planar variables, once and only once. All such subregions are in one-to-one correspondence with triangulations of the n-gon, such that the basis we are interested in is formed by the X_i,j's of this triangulation and the c_i,j's inside this region. To obtain this subregion we do the following: start by picking a triangulation, 𝒯, of the n-gon, which determines the set of n-3 chords, X_i,j, entering the basis, with (i,j) ∈ 𝒯. Now consider the “rotated triangulation”, formed by chords X_i-1,j-1 with (i,j) ∈ 𝒯. Then consider the region of the mesh that does not contain the meshes c_i-1,j-1 with (i,j) ∈ 𝒯.
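Since the mesh relation (<ref>) is purely algebraic, it is easy to test numerically. The following sketch (ours, not part of the original construction) verifies c_i,j = X_i,j + X_i+1,j+1 - X_i,j+1 - X_i+1,j for generic momenta; note that neither masslessness nor momentum conservation is needed for this particular identity, as long as the X's are computed directly from the momenta.

```python
# Numerical check (a sketch, not from the paper) of the mesh relation
#   c_{i,j} = X_{i,j} + X_{i+1,j+1} - X_{i,j+1} - X_{i+1,j},
# with X_{i,j} = (p_i + ... + p_{j-1})^2 and c_{i,j} = -2 p_i . p_j.
import numpy as np

rng = np.random.default_rng(0)
n, D = 6, 4
p = rng.normal(size=(n, D))               # generic D-dimensional momenta p_1..p_n
eta = np.diag([1.0, -1.0, -1.0, -1.0])    # mostly-plus-time Minkowski metric

def dot(a, b):
    return a @ eta @ b

def X(i, j):                              # planar variable, 1-based labels, i < j
    s = p[i-1:j-1].sum(axis=0)            # p_i + ... + p_{j-1}
    return dot(s, s)

for i in range(1, n - 1):
    for j in range(i + 2, n):             # avoid wrapping past the n-th label
        c_ij = -2 * dot(p[i-1], p[j-1])
        mesh = X(i, j) + X(i+1, j+1) - X(i, j+1) - X(i+1, j)
        assert np.isclose(c_ij, mesh)
print("mesh relation verified for all tested (i,j)")
```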
Since the mesh is infinite, this will still be an infinite set of connected or disconnected regions, from which we further extract a finite subset that contains all the planar Mandelstams, once and only once – this is the desired subregion. The respective kinematic basis we are interested in is constructed from the non-planar c_i,j's inside the subregion together with the n-3 X_i,j's in the starting triangulation, 𝒯. In figure <ref> we present the regions of the mesh corresponding to the three different types of triangulation of the 6-point problem. All remaining triangulations correspond to cyclic shifts of the ones presented, and these cyclic shifts only translate the region in the mesh vertically, without altering its shape.

We refer to triangulations like the one on the left of figure <ref> as ray-like triangulations. The kinematic basis associated with this region is {X_1,3,X_1,4,X_1,5,c_1,3,c_1,4,c_1,5,c_2,4,c_2,5,c_3,5}. Such triangulations always lead to simple connected triangular regions in the mesh. As we will see, this choice of basis makes most of the interesting features manifest and will therefore be the preferred choice of kinematic basis henceforth. Now that we have properly defined and organized the kinematical data that the amplitude depends on, let us proceed to study the associahedron and understand how it is defined in kinematic space.

§.§ The ABHY associahedron

As defined in <cit.>, the ABHY associahedron associated with an n-point amplitude is an (n-3)–dimensional polytope. Its embedding in kinematic space goes as follows:

* Define the region in kinematic space for which all the planar variables are positive – Δ_+ = {X_i,j > 0}, for all i<j ∈ {1,…,n}.
* Pick a subregion of the mesh determining a basis of n-3 planar variables, X̃_i,j, and (n-2)(n-3)/2 non-planar ones, c̃_i,j, and solve for all the X_i,j's in terms of this basis.
* Impose that all c̃_i,j > 0, so that, in this new basis, Δ_+ defines a set of n(n-3)/2 inequalities in an (n-3)–dimensional space spanned by the X̃_i,j. The convex hull of these inequalities is the ABHY associahedron.

From the previous procedure, we see that there are different ways of embedding this polytope in kinematic space, each of them corresponding to a different choice of basis. These different choices give rise to different realizations of the polytope; however, any statement about the amplitude should be realization-independent. Let us now see what the polytope looks like for a few simple examples.

§.§.§ 4-point

At 4-point we only have two different planar variables, X_1,3 and X_2,4, corresponding, respectively, to the Mandelstams s and t. These are related to the non-planar variable, c_1,3 = -u, in the following way: X_1,3 + X_2,4 = c_1,3. So picking the basis {X_1,3,c_1,3}, the ABHY associahedron is:

{X_1,3 > 0 ∧ X_2,4 > 0 ⇔ X_1,3 < c_1,3} ⇔ 0 < X_1,3 < c_1,3,

which is simply a line segment – a one-dimensional simplex. This is still a very small example, so to understand how it works in a less trivial case it is worth going to 5 points.

§.§.§ 5-point

At 5-point, there are five different planar variables. However, there is only one possible type of triangulation – the ray-like triangulation. Let us pick the basis corresponding to the analogue of the 6-point ray-like triangulation discussed in the previous section, i.e. {X_1,3,X_1,4,c_1,3,c_1,4,c_2,4}.
Then the associahedron is defined in (X_1,3,X_1,4) space as follows:

X_1,3 > 0 ,  X_1,4 > 0 ,
X_2,4 > 0 ⇔ c_1,3 - X_1,3 + X_1,4 > 0 ,
X_2,5 > 0 ⇔ c_1,3 + c_1,4 - X_1,3 > 0 ,
X_3,5 > 0 ⇔ c_1,4 + c_2,4 - X_1,4 > 0 ,

which is exactly the pentagon presented in figure <ref> (left). In section <ref> we mentioned that the ABHY associahedron is given by the Minkowski sum of simplices, which is crucial for understanding the zeros of the amplitude. At 4-point, the polytope is the one-dimensional simplex; however, at 5-point this decomposition is not so obvious. Let us now understand the decomposition of the polytope into its Minkowski summands.

§.§ The Minkowski summands and the mesh

It turns out that each Minkowski summand is naturally associated with a mesh, c_i,j. Let us start by analyzing the 5-point example. The set of inequalities in (<ref>) carves out a pentagon in (X_1,3,X_1,4) space, in which each edge corresponds to an inequality X_i,j > 0, and thus is uniquely associated to that planar Mandelstam (see figure <ref>, left). In addition, we see that the location of the edges of the pentagon is set by the values of the non-planar variables {c_1,3,c_1,4,c_2,4}. Therefore we can study what happens when we set some of these non-planar variables to zero (see figure <ref>):

* c_1,3 = c_1,4 = 0: The pentagon collapses to a horizontal line segment, i.e. a one-dimensional simplex. This is then the Minkowski summand corresponding to mesh c_2,4.
* c_1,3 = c_2,4 = 0: The pentagon collapses to a triangle, i.e. a two-dimensional simplex. This is then the Minkowski summand corresponding to mesh c_1,4.
* c_1,4 = c_2,4 = 0: The pentagon collapses to a horizontal line segment, again a one-dimensional simplex. This is then the Minkowski summand corresponding to mesh c_1,3.

So Minkowski summing these three simplices associated with the three different non-planar variables builds back the full pentagon. Since each Minkowski summand is associated with a non-planar variable, we can keep track of the summands using the mesh, just as shown in figure <ref> (right), where we present only the subregion of the mesh corresponding to the basis choice being used. We see that the lower-dimensional simplices sit on the meshes on the left while the top-dimensional one is associated with the right-most corner. This pattern will persist at n-point as long as we are dealing with a basis associated with a ray-like triangulation: the meshes on the left boundary will correspond to one-dimensional simplices and, as we move towards the right, the dimension of the simplices increases until it becomes top-dimensional in the right-most corner.

This is a good point to highlight that, had we chosen a different basis, the pentagon would look different: it would be embedded in a different space, (X̃_i_1,j_1,X̃_i_2,j_2), and the non-planar variables involved would be different. In particular, at higher points, different types of basis triangulation lead to different Minkowski summands.

§ ZEROS AND FACTORIZATIONS OF TR(Φ^3) TREE AMPLITUDES

§.§ Zeros and factorizations – two simple examples

Now that we have understood how the ABHY associahedron is defined in terms of its Minkowski summands, let us proceed to the study of the zeros of the amplitude.
To do this we will start by studying in detail two simple examples: the 5-point and 6-point amplitudes.

§.§.§ 5-point amplitude

At 5-point, the amplitude is given by the sum of five different Feynman diagrams, corresponding to the five possible triangulations of the pentagon:

𝒜_5 = 1/(X_1,3X_1,4) + 1/(X_2,4X_2,5) + 1/(X_1,3X_3,5) + 1/(X_1,4X_2,4) + 1/(X_2,5X_3,5) .

So to determine the kinematic locus where the amplitude vanishes, we can reduce (<ref>) to a common denominator and ask for the numerator to vanish. By doing this we get a cubic equation that obscures any possible simple zeros of the amplitude. However, we can recast the question about the zero locus of the amplitude in polytopal language. As explained in <cit.>, the amplitude is the canonical form of the ABHY associahedron, and therefore if we are able to make the polytope collapse by one dimension, the amplitude will vanish. Looking back at the 5-point associahedron presented in figure <ref>, we see that by setting c_1,3=c_1,4=0 or c_1,4=c_2,4=0, the pentagon collapses into a line segment, which then means that the amplitude vanishes in this limit! Note that the same is no longer true for the case c_1,3=c_2,4=0, since in this limit we still get a top-dimensional object, and thus the amplitude does not vanish. Now, by the cyclic invariance of the 5-point amplitude, any cyclic image of these conditions also makes the amplitude vanish. Therefore, we get the simple family of zeros: pick an i ∈ {1,…,5}; the amplitude vanishes on the locus c_i,j = 0, for all j (non-adjacent to i). Even though the realization of the associahedron associated to the basis {X_1,3,X_1,4,c_1,3,c_1,4,c_2,4} does not make manifest all these zeros, for each zero we can always find a realization in which it is manifest, as we will show in section <ref>. Even though this example is still relatively simple, it illustrates how the Minkowski sum picture of the associahedron justifies the presence of this simple class of zeros: since the non-planar variables are associated with individual summands that build up the full polytope, by turning off enough of them we can make the polytope collapse in dimension, and thus make the amplitude vanish.

Let us now focus on the zero c_1,3=c_1,4=0, and understand what happens in the penultimate step before the polytope collapses, i.e. when we only set c_1,3=0, or c_1,4=0. In the latter case, c_1,4=0, the two remaining Minkowski summands are two intervals, and so the polytope reduces to a square/rectangle (see figure <ref>, bottom). In particular, starting with the full pentagon, we can see that by setting c_1,4=0, we lose the edge associated with X_2,4. One way to explain this fact is that since

X_1,4 + X_2,5 - X_2,4 = c_1,4 = 0 ⇒ X_2,4 = X_1,4 + X_2,5 ,

the condition X_2,4 > 0 becomes redundant, as it is automatically satisfied for X_1,4 > 0 and X_2,5 > 0, and thus the corresponding facet disappears. Therefore, in this limit, the amplitude only depends on {X_1,3,X_1,4,X_2,5,X_3,5}. In addition, from section <ref>, we know that the 4-point ABHY associahedron is simply a line segment, so the fact that the geometry reduces to a product of two line segments hints that in this limit the amplitude turns into a product of two 4-point amplitudes. Indeed, starting with the 5-point amplitude and imposing c_1,4=0, we obtain:

𝒜_5(X_1,3,X_1,4,X_2,4,X_2,5,X_3,5) → (1/X_1,3 + 1/X_2,5) × (1/X_1,4 + 1/X_3,5) ,

which is indeed the product of two 4-point amplitudes with some interesting kinematics, 𝒜_4(X_1,3,X_2,5) and 𝒜_4(X_1,4,X_3,5).
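Both the hidden zero and this near-zero factorization can be verified with a few lines of computer algebra. The following sketch (ours, not from the paper) uses the ABHY realization read off from the inequalities above.

```python
# A quick sympy check (a sketch, not from the paper) of the 5-point hidden
# zero and near-zero factorization, in the ABHY realization of the text:
#   X24 = c13 - X13 + X14,  X25 = c13 + c14 - X13,  X35 = c14 + c24 - X14.
import sympy as sp

X13, X14, c13, c14, c24 = sp.symbols('X13 X14 c13 c14 c24')
X24 = c13 - X13 + X14
X25 = c13 + c14 - X13
X35 = c14 + c24 - X14

A5 = (1/(X13*X14) + 1/(X24*X25) + 1/(X13*X35)
      + 1/(X14*X24) + 1/(X25*X35))

# hidden zero: both c's in the causal diamond anchored at X13 set to zero
assert sp.simplify(A5.subs({c13: 0, c14: 0})) == 0

# factorization near the zero: keep c13 on, set only c14 -> 0;
# then X25 -> c13 - X13 and X35 -> c24 - X14, and A5 should become
# (1/X13 + 1/X25) * (1/X14 + 1/X35)
fact = (1/X13 + 1/(c13 - X13)) * (1/X14 + 1/(c24 - X14))
assert sp.simplify(A5.subs(c14, 0) - fact) == 0
print("5-point zero and factorization verified")
```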
Let us now try to understand how we can read off this behavior from the kinematic mesh. We are currently exploiting the behavior near the zero associated with setting c_1,3=c_1,4=0, which corresponds to a 45^∘ tilted rectangle at the bottom of the triangular mesh. The remaining part of the mesh is exactly that of a 4-point problem. By setting only c_1,4 to zero, one of the 4-point factors we get exactly corresponds to this 4-point amplitude, with the kinematics entering the bottom diagonal depending on which c_i,j we don't set to zero. Before understanding the kinematic dependence, let us look at the other 4-point factor. This term is associated with the X's at the bottom and the top of the causal diamond associated with the zero. Using the c-equation for the full causal diamond between X_1,3 and X_2,5, we have that c_1,3 = X_1,3+X_2,5, and so we can rewrite (<ref>) as:

𝒜_5(X_1,3,X_1,4,X_2,4,X_2,5,X_3,5) → c_1,3/(X_1,3X_2,5) × (1/X_1,4 + 1/X_3,5) ,

so this factor is there to make manifest that the amplitude vanishes if we further set c_1,3=0. As we will see, for a generic n-point amplitude factorization, we always have such a factor that exactly ties to the zero we are exploiting. Let us now understand the kinematic dependence of the lower 4-point amplitude. Say that, instead, we had set c_1,3=0. In this limit, the pentagon reduces to a trapezoid (see figure <ref>, top), as we lose the edge corresponding to X_1,4. Similarly to the previous case, in this limit the amplitude factorizes into a product of two 4-point amplitudes as follows:

𝒜_5(X_1,3,X_1,4,X_2,4,X_2,5,X_3,5) → (1/X_1,3 + 1/X_2,5) × (1/X_2,4 + 1/X_3,5) = c_1,4/(X_1,3X_2,5) × (1/X_2,4 + 1/X_3,5) ,

so the first factor remains the same, since we are still probing the same zero, while the second factor is now 𝒜_4(X_2,4,X_3,5). So the lower-point amplitude no longer depends on X_1,4, as this edge of the polytope is now lost, and instead depends on X_2,4, which survives in this limit.

In summary, by turning on different c_i,j inside the zero causal diamond, the factorization pattern does not change; however, the kinematic variables entering the lower-point amplitudes do change. This is because by turning on different c_i,j the facets of the polytope that survive in the limit are different; in other words, the set of inequalities X_i,j > 0 that become redundant depends on the c_i,j that we turn on. We will explain the general pattern in which the kinematics are inherited in the lower-point amplitudes in section <ref>. The discussion in this section is summarized pictorially in figure <ref>.

§.§.§ 6-point amplitude

The six-particle amplitude is a sum of 14 terms, corresponding to the Feynman diagrams in three cyclic classes:

A_6 = (1/(X_1,3 X_1,4 X_1,5) + 1/(X_1,3 X_3,6 X_4,6) + cyclic) + 1/(X_1,3 X_3,5 X_1,5) + 1/(X_2,4 X_4,6 X_2,6) .

Let us now see how the zeros and factorizations work; here we encounter the most generic behavior seen for all n, with cyclically inequivalent classes of patterns of zeros. We begin with the full ABHY associahedron, shown in the left panel of figure <ref>. For some orientation, note that the facets lying on the coordinate planes X_1,3, X_1,4, X_1,5 → 0 are the pentagons (X_1,3 → 0 and X_1,5 → 0) and the square (X_1,4 → 0), as expected from the associated A_5 × A_3 and A_4 × A_4 factorizations. Note that we also have facets parallel to these, on the opposite side of the polytope. For instance, the facet parallel to the X_1,3 plane corresponds to X_2,6 → 0.
This is obvious from the mesh picture since we have X_2,6 = C - X_1,3 where C = c_1,3 + c_1,4 + c_1,5; this is the top corner of the maximal causal diamond with X_1,3 on the bottom. In the same way the facet parallel to X_1,4 is X_3,6 and the one parallel to X_1,5 is X_4,6. This is a general feature of the ABHY associahedron: there are pairs of facets parallel to the coordinate planes that “look the same”, corresponding to the poles associated with the bottom and top boundaries of the mesh picture.

Let us begin with the analog of what we saw already at 5 points, the “skinny rectangle” zero. If we set c_1,3,c_1,4,c_1,5 → 0, then the three-dimensional associahedron collapses in the X_1,3 direction down to a two-dimensional pentagon. We can see this visually in figure <ref> (a more detailed figure <ref> appears at the end of the paper), where we have represented the penultimate step in shutting off c's, where c_1,3,c_1,5 → 0 but c_1,4 is still turned on. At this point, we have a “sandwich”, with the X_1,3 facet and its parallel cousin, the X_2,6 facet, separated by an interval. When we further shut off c_1,4 the interval shrinks to zero and we are left with the pentagon, showing that the amplitude vanishes in this limit. We can also look at shutting off c_1,4,c_1,5,c_2,4,c_2,5, where the associahedron collapses to a square. This is also shown in figure <ref>, where we have again shown the penultimate step, with c_2,4 turned back on. We again have a “sandwich” with the X_1,4 facet and the opposite X_3,6 facet, separated by an interval. The interval shrinks to zero when c_2,4 → 0; the associahedron collapses to a square and the amplitude vanishes.

Just as at five points, in the penultimate step before the associahedron collapses, the amplitude factorizes, as we can see explicitly:

A_6 → (1/X_1,3 + 1/X_2,6) × (1/(X_2,4 X_1,5) + 1/(X_1,5 X_3,5) + 1/(X_3,5 X_3,6) + 1/(X_3,6 X_4,6) + 1/(X_4,6 X_2,4)) ,
A_6 → (1/X_1,4 + 1/X_3,6) × (1/X_1,3 + 1/X_2,6) × (1/X_1,5 + 1/X_4,6) .

In the top line, we see the factor (1/X_1,3 + 1/X_2,6), which vanishes when we further set c_1,4 → 0, since X_1,3 + X_2,6 = c_1,3 + c_1,4 + c_1,5 → 0 when we set c_1,4 → 0. It multiplies a five-particle amplitude, but with some interestingly redefined kinematic variables. If we look at the picture of the associahedron in this limit, the two bounding facets of the “sandwich” are precisely X_1,3 and X_2,6, while the facets adjacent to X_1,3, X_2,6 can be read off going around the pentagons as X_2,4, X_1,5, X_3,5, X_3,6, X_4,6, which precisely defines the kinematics for an effective five-particle amplitude given in the second factor. The same story holds for the “big square” zero/factorization. The first factor (1/X_1,4 + 1/X_3,6) vanishes when c_2,4 → 0 since X_1,4 + X_3,6 = c_1,4 + c_1,5 + c_2,4 + c_2,5 → 0. At finite c_2,4 the facets in between the X_1,4 and X_3,6 facets are X_1,3, X_1,5, X_2,6, X_4,6. This is exactly the direct product of two effective 4-point problems with variables (X_1,3,X_2,6) and (X_4,6,X_1,5), which appear in the 4-point amplitude factors. In figure <ref>, we summarize the patterns of zeros found for the 5-point and 6-point amplitudes using the kinematic mesh. We can see that all the zero patterns are causal diamonds that extend to the boundaries of the mesh picture.
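Both 6-point zeros, and the big-square factorization just described, can again be checked with computer algebra. The sketch below (ours, not from the paper) builds the ABHY realization in the ray-like basis using the relations quoted above (X_2,6 = c_1,3+c_1,4+c_1,5-X_1,3 and its cousins), sums over all 14 triangulations of the hexagon, and tests the two zeros and the factorization.

```python
# A computer-algebra check (a sketch, not from the paper) of the 6-point
# hidden zeros, using the ABHY realization in the ray-like basis
# {X13, X14, X15, c13, c14, c15, c24, c25, c35} described in the text.
import sympy as sp
from itertools import combinations

X13, X14, X15 = sp.symbols('X13 X14 X15')
c13, c14, c15, c24, c25, c35 = sp.symbols('c13 c14 c15 c24 c25 c35')

X = {(1, 3): X13, (1, 4): X14, (1, 5): X15,
     (2, 4): c13 + X14 - X13,
     (2, 5): c13 + c14 + X15 - X13,
     (2, 6): c13 + c14 + c15 - X13,            # facet parallel to X13
     (3, 5): c14 + c24 + X15 - X14,
     (3, 6): c14 + c15 + c24 + c25 - X14,      # facet parallel to X14
     (4, 6): c15 + c25 + c35 - X15}            # facet parallel to X15

def crossing(p, q):
    (a, b), (c, d) = p, q
    return (a < c < b < d) or (c < a < d < b)

# the 14 triangulations of the hexagon = the 14 planar Feynman diagrams
tris = [T for T in combinations(X, 3)
        if not any(crossing(p, q) for p, q in combinations(T, 2))]
assert len(tris) == 14

A6 = sum(1 / sp.Mul(*(X[c] for c in T)) for T in tris)

# skinny-rectangle zero and big-square zero
assert sp.simplify(A6.subs({c13: 0, c14: 0, c15: 0})) == 0
assert sp.simplify(A6.subs({c14: 0, c15: 0, c24: 0, c25: 0})) == 0

# big-square factorization with c24 turned back on:
#   A6 -> (1/X14 + 1/X36)(1/X13 + 1/X26)(1/X15 + 1/X46)
sub = {c14: 0, c15: 0, c25: 0}
rhs = sp.Mul(1/X14 + 1/X[(3, 6)].subs(sub),
             1/X13 + 1/X[(2, 6)].subs(sub),
             1/X15 + 1/X[(4, 6)].subs(sub))
assert sp.simplify(A6.subs(sub) - rhs) == 0
print("6-point zeros and big-square factorization verified")
```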
§.§ Zeros and factorizations – general statement

We now present the general statement about the zeros and factorization of n-point tree-level Tr(ϕ^3) amplitudes. As we will see, the motivation and proof of these statements follow easily from simple properties of the associahedron. In later sections, we will give a different proof, beginning from the stringy integral representation of these amplitudes, that will generalize the statements beyond the field theory limit to full string amplitudes.

Zeros Consider an n-point tree-level amplitude in Tr(ϕ^3) theory. Draw the corresponding n-point kinematic mesh. Pick a point in the mesh, i.e. a planar variable X_B, and consider the causal diamond anchored at this variable: follow the two light rays starting at X_B, let them bounce off the boundaries of the mesh, and meet again at some other point, X_T. This encloses a region – a causal diamond. Setting all the c_i,j inside this causal diamond to zero will make the amplitude vanish (see figure <ref>).

Factorizations Let us consider turning back on one of the c's inside the zero causal diamond, c_⋆ ≠ 0. Then the amplitude factorizes into a product of lower-point amplitudes in the following way (see figure <ref>):

𝒜_n(c_⋆ ≠ 0) = (1/X_B + 1/X_T) × 𝒜^down × 𝒜^up .

The kinematic dependence of 𝒜^down and 𝒜^up can be read off from the kinematic mesh, and is summarized in figure <ref>. The question is which kinematic variables enter the upper/lower parts of the down/up amplitudes. These are precisely those of the facets of the associahedron that are not lost in this kinematic limit.

Let us start by looking at 𝒜^down. We see that for any X lying beneath c_⋆, we can build a rectangle with X at the left vertex, an X_i,i+1=0 on the boundary at the right vertex, X_B at the bottom vertex, and X̃ at the top vertex, where X̃ lives on the upper boundary of the mesh. In this limit all the c_i,j inside this rectangle are zero, and so using the c-equation we can write:

X = X_B + X̃ .

Just like we saw in the 5-point example, this means that the inequality X > 0 becomes redundant, and so the facet of the polytope associated with X disappears. Consequently, the variable that survives and enters the lower-point amplitude is X̃. In exactly the same way, all the X's above c_⋆ for 𝒜^up disappear, and instead the X̃'s from the bottom boundary of the mesh are the ones that enter 𝒜^up. Finally, for all the X's lying above c_⋆ in 𝒜^down, if we try to build the same rectangle, it will now include c_⋆ ≠ 0, and the argument no longer holds. However, we can now build a rectangle connecting X to X̃, X_T and an X_i,i+1 on the left boundary of the mesh. Inside this rectangle all the c_i,j=0, and so we get:

X̃ = X_T + X ,

which tells us that in this region the facets associated with X̃ disappear and the ones with X survive. Exactly the same argument tells us that the X's beneath c_⋆ for 𝒜^up survive in this limit and thus appear in 𝒜^up.

By now it should be clear that both the zeros and the factorization are properties of the amplitudes of Tr(ϕ^3) theory that are fundamentally hidden in the Feynman diagram formulation of these objects. It is instead the underlying geometry that gives us access to them, and even suggests looking for them in the first place. As we will see shortly, these properties extend immediately to full string amplitudes.
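Since the zero loci are specified purely combinatorially, it is straightforward to enumerate them. The small helper below is our own illustration, not part of the original statement: the explicit index ranges generalize the X_1,i case worked out later in the string-amplitude section, so they should be read as our extrapolation of that rule to a general anchor X_i,j.

```python
# Illustrative helper (ours, not from the paper): list the c_{a,b} inside the
# causal diamond anchored at X_B = X_{i,j}.  The ranges a in {i,...,j-2} and
# b in {j,...,n+i-2} (labels mod n) are our extrapolation of the ray-like
# case discussed later in the text, where the anchor X_{1,i} gives
# c_{a,b} with 1 <= a <= i-2 and i <= b <= n-1.
def zero_locus(i, j, n):
    norm = lambda k: (k - 1) % n + 1
    return sorted({tuple(sorted((norm(a), norm(b))))
                   for a in range(i, j - 1)
                   for b in range(j, n + i - 1)})

print(zero_locus(1, 3, 5))  # [(1, 3), (1, 4)]                skinny rectangle, n=5
print(zero_locus(1, 4, 6))  # [(1, 4), (1, 5), (2, 4), (2, 5)]  big square, n=6
print(zero_locus(1, 3, 6))  # [(1, 3), (1, 4), (1, 5)]        skinny rectangle, n=6
```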
Before getting to that, we will first go through an even more magical fact – that these generalize to the Non-linear Sigma Model and Yang-Mills theory. This generalization is surprising because there is no known geometrical formulation for these theories. Ultimately, the reason for the emergence of these properties will be manifest when we see that these theories are secretly simple deformations of each other. These deformations are defined at the level of stringy formulations of the amplitudes that we go over in section <ref>.

§ THE NON-LINEAR SIGMA MODEL

We were motivated to look for and predict zeros of the Tr(ϕ^3) amplitude from the Minkowski sum picture of the associahedron. But in the end, the zeros are associated with a simple locus in the space of non-planar Mandelstam invariants. As such it is natural to wonder whether other colored theories, which have the same notion of color-ordering and planarity/non-planarity, may also have such zeros. Clearly random theories will not have these zeros; for instance, the amplitude for Tr(ϕ^4) is simply a constant at four points and does not have our zero. But this is not an especially “nice” theory. Perhaps the most natural theory to examine is the Non-linear Sigma Model for pions, which already does have famous Adler zeros associated with soft limits. Of course on the surface the Tr(ϕ^3) theory and the NLSM could not appear more different: Tr(ϕ^3) theory has no derivatives in its interactions, and has non-vanishing amplitudes for all multiplicity; by contrast, pions are derivatively coupled and only have non-vanishing amplitudes for an even number of particles! Nonetheless, in this section we will see experimentally that the NLSM amplitudes have zeros in precisely the same locus of kinematics we uncovered for the Tr(ϕ^3) theory.

The scattering of massless pions can be described by the U(N) Non-Linear Sigma Model, and we record the Lagrangian in Cayley parametrization (c.f. <cit.>):

ℒ_NLSM = 1/(8λ^2) Tr(∂_μ U^† ∂^μ U), with U = (𝕀+λΦ)(𝕀-λΦ)^-1,

where Φ = ϕ_I T^I, with T^I the generators of the U(N) flavor group, and λ is the coupling constant. It is straightforward to derive color-ordered Feynman rules, e.g. for Tr(1,2,⋯,n), and the vertex with two derivatives for any even multiplicity 2m is given by

V_2m = -(λ^(2m-2)/2) ∑_r=0^m-1 ∑_a=1^2m p_a · p_(a+2r+1) .

NLSM tree amplitudes have been studied in e.g. <cit.>. It is well known that odd-point NLSM amplitudes vanish and even-point amplitudes have the Adler zero <cit.>, i.e. A_2n^NLSM ∼ 𝒪(τ) when any external momentum becomes soft, p_i^μ = τ p̂_i^μ with τ → 0. Hereafter, we will absorb the coupling constant λ by defining 𝒜_2n^NLSM ≡ λ^(2-2n) A_2n^NLSM; therefore, e.g. the 4-point amplitude reads 𝒜_4^NLSM = X_1,3+X_2,4. And the 6-point result is

𝒜_6^NLSM = ((X_1,3+X_2,4)(X_1,5+X_4,6))/X_1,4 - X_1,3 - X_2,4 + (cyclic, i → i+2) .

§.§ Zeros and the soft limit

Surprisingly, all the zeros that we described for Tr(ϕ^3) theory are also zeros of NLSM tree-level amplitudes. So, starting with a mesh describing 2n-point kinematics, by picking any causal diamond like the one in figure <ref> and setting all the c_i,j's inside it to zero, the NLSM amplitude vanishes. As explained previously, depending on the X_B we pick to build the zero causal diamond, the shape of the diamond, and in particular the codimension of the zero, changes. The smallest codimension zeros are the skinny rectangle ones, where we pick an index i and set c_i,j = 0 for all j not adjacent to i. It is well known that pion amplitudes have the Adler zero, i.e.
vanish when one particle is soft. Note, however, that the zero we are presenting is not a soft limit; it is stronger, in the sense that the vanishing of the amplitude on this locus implies the Adler zero. Let us look concretely at the 4-point amplitude. In this case, the zero would be c_1,3=0, and so p_1 · p_3 = 0, which, however, does not require any particle to be soft. Indeed we have that the 4-point pion amplitude is:

𝒜_4^NLSM = X_1,3 + X_2,4 = c_1,3 ⇒ 𝒜_4^NLSM|_c_1,3=0 = 0,

and so it vanishes as expected. Of course, for this zero to imply the Adler zero, it is crucial that the amplitude does not have poles when p_i^μ → 0, which is true for the NLSM since we always have even-point interactions. The same is not true for Tr(ϕ^3) theory, and this is why the zero in this context does not imply vanishing in the soft limit. Finally, it is worth highlighting that, while the skinny rectangle zero is in this way related to the Adler zero, the same is not true for the other codimension zeros that we predict from the mesh picture. Just like in the Tr(ϕ^3) case, there is no clear physical explanation for the presence of these general families of zeros, which are there for both colored scalars and pions.

§.§ Factorizations

Let us now understand how the statement about factorization near zeros generalizes to pions. For pion scattering, we always start with a 2n-point kinematical mesh and, for a particular codimension zero, i.e. a particular zero causal diamond, we can ask what happens if we turn on one of the c's inside the zero causal diamond, c_⋆ ≠ 0. It turns out that if the lower-point amplitudes, 𝒜^down and 𝒜^up, are both even-point amplitudes, then the factorization holds exactly in the same way we described for Tr(ϕ^3):

𝒜_2n^NLSM(c_⋆ ≠ 0) = (1/X_B + 1/X_T) × 𝒜^down,NLSM × 𝒜^up,NLSM,

where the kinematic dependence of 𝒜^down,NLSM and 𝒜^up,NLSM is determined exactly in the way we explained for Tr(ϕ^3) (according to figure <ref>). However, when the factorization pattern produces odd-point amplitudes we have something new. As we know, there are no odd-point amplitudes for pions, so instead the amplitudes that enter in (<ref>) are amplitudes in the mixed theory of pions and scalars <cit.>:

𝒜_2n^NLSM(c_⋆ ≠ 0) = (X_B + X_T) × 𝒜^down,NLSM+ϕ^3 × 𝒜^up,NLSM+ϕ^3.

In addition, we see that the prefactor involving X_B and X_T has also changed: instead of the 4-point scalar amplitude, we have the 4-point NLSM amplitude. This change is due to the fact that mixed amplitudes have different units than NLSM amplitudes. But note that the prefactor is still such that it makes manifest that the amplitude vanishes if we further set c_⋆=0. Now the kinematic dependence of 𝒜^down,NLSM+ϕ^3 and 𝒜^up,NLSM+ϕ^3 is once more determined by the kinematic shifts described for Tr(ϕ^3); however, we still need to specify the configuration of π's and ϕ's entering these amplitudes. It turns out that this depends on the choice of c_⋆ that is set to non-zero, but it will always be the case that a (2n-1)-point amplitude has 3 ϕ's and 2n-4 π's.

To describe the rule, let us start by considering a 6-point pion amplitude and consider the factorization that leads to a 5-point amplitude, corresponding to turning on a c_i,j inside a skinny rectangle (see figure <ref>, left). The skinny rectangle can either be at the top of the mesh or at the bottom. Let us start by looking at the case in which it is at the top, like the one highlighted in blue in figure <ref>.
Then, by turning on the top c, we get the amplitude in which the three ϕ's are the last three particles; at five points this means we have two π's and three ϕ's. For a general (2n-1)-point mixed amplitude, we would have 2n-4 π's followed by 3 ϕ's. Now, as we go down the skinny rectangle turning on different c_i,j, the pattern is that the first ϕ next to the π's starts moving past them, one at a time. Since there are 2n-3 meshes in the skinny rectangle, once we reach the point of turning on the c_i,j corresponding to the right-most mesh, the ϕ has passed the full string of π's, so that we have 𝒜^NLSM+ϕ^3(ϕ, π, π, ϕ, ϕ) in the 5-point case (figure <ref>, left), and 𝒜^NLSM+ϕ^3(ϕ, π,…,π, ϕ, ϕ) for the general (2n-1)-point mixed amplitude. Let us now continue by looking at the bottom skinny rectangle. Starting from the right-most mesh we have 𝒜^NLSM+ϕ^3(ϕ, π,…,π, ϕ, ϕ), and going down makes the ϕ to the right of the string of π's move through them, just like in the previous case, so that when we reach the left-most bottom mesh we get 𝒜^NLSM+ϕ^3(ϕ, ϕ, π,…,π, ϕ). The way this generalizes for the factorization involving a general codimension zero is summarized in figure <ref> (right). Picking some c_⋆ ≠ 0 inside a given causal diamond, the ϕ, π configurations in 𝒜_2n-1^down,NLSM+ϕ^3 and 𝒜_2m-1^up,NLSM+ϕ^3 are exactly the same as the ones we would have obtained considering a factorization from a skinny rectangle in a 2n- and 2m-point problem, respectively, where c_⋆ occupies the same relative position in these lower problems as it does in the bigger one (see figure <ref>, right).

§.§.§ Examples

Let us now see this factorization in action for the case of the 6-point pion amplitude. We start by looking at the factorization associated with the square causal diamond, for which the lower-point amplitudes are 4-point amplitudes, and thus should be NLSM amplitudes. Consider the zero associated with setting {c_1,4,c_2,4,c_1,5,c_2,5} to zero, and let us look at the factorization we get by turning on c_2,4. From the kinematic shifts, we expect X_3,5 and X_2,4 to be fixed, and so the bottom 4-point amplitude will be a function of (X_1,3, X_2,6, c_1,3), while the top 4-point one will depend on (X_1,5, X_4,6, c_3,5). In this kinematic limit, the amplitude becomes:

𝒜_6(c_1,4=c_1,5=c_2,5=0) = (c_1,3 c_2,4 c_3,5)/(X_1,4 (X_1,4-c_2,4)) = (1/X_1,4 + 1/X_3,6) · c_1,3 · c_3,5 = (1/X_1,4 + 1/X_3,6) · (X_1,3+X_2,6)_𝒜_4^down,NLSM · (X_1,5+X_4,6)_𝒜_4^up,NLSM,

which follows exactly the form predicted in (<ref>). Let us now consider the factorization into the 5-point mixed amplitude. The skinny rectangle we will consider will be the usual one containing {c_1,3,c_1,4,c_1,5}:

* Turn on c_1,5: According to the kinematic shifts, we should get a 5-point amplitude depending on (X_2,4, X_2,5, X_3,5, X_3,6, X_4,6, c_2,4, c_2,5, c_3,5). In this limit, the amplitude becomes:

𝒜_6(c_1,3=c_1,4=0) = c_1,5 · (c_3,5/X_3,6 + c_2,4/X_2,5 + 1) = (X_1,3+X_2,6) · ((X_3,5+X_4,6)/X_3,6 + (X_2,4+X_3,5)/X_2,5 - 1)_𝒜_5^NLSM+ϕ^3(ϕ,π,π,ϕ,ϕ).

* Turn on c_1,4: According to the kinematic shifts, we should get a 5-point amplitude depending on (X_2,4, X_1,5, X_3,5, X_3,6, X_4,6, c_2,4, c_2,5, c_3,5). In this limit, the amplitude becomes:

𝒜_6(c_1,3=c_1,5=0) = c_1,4 · c_3,5/X_3,6 = (X_1,3+X_2,6) · ((X_3,5+X_4,6)/X_3,6 - 1)_𝒜_5^NLSM+ϕ^3(ϕ,π,ϕ,π,ϕ).

* Turn on c_1,3: According to the kinematic shifts, we should get a 5-point amplitude depending on (X_1,4, X_1,5, X_3,5, X_3,6, X_4,6, c_2,4, c_2,5, c_3,5).
In this limit, the amplitude becomes:

𝒜_6(c_1,4=c_1,5=0) = c_1,3 · (c_3,5/X_3,6 + (c_3,5+c_2,5)/X_1,4) = (X_1,3+X_2,6) · ((X_3,5+X_4,6)/X_3,6 + (X_4,6+X_1,5)/X_1,4 - 1)_𝒜_5^NLSM+ϕ^3(ϕ,ϕ,π,π,ϕ).

§ YANG-MILLS THEORY

Finally, let us look at the last and most important colored theory: pure Yang-Mills theory. In this case, we are describing massless spin-1 particles and therefore the amplitudes have a crucial new ingredient: the polarizations of the gluons, ϵ_i. So we have that 𝒜^YM ≡ 𝒜^YM(p_i · p_j, ϵ_i · p_j, ϵ_i · ϵ_j). Due to this new feature, the statement about the zeros requires a generalization involving the polarization vectors as well. The most natural extension is as follows:

Gluon Zeros Consider an n-point tree-level amplitude in YM theory. Draw the corresponding n-point kinematic mesh. Draw a causal diamond just like the one described for Tr(ϕ^3). Setting all the c_i,j inside this causal diamond to zero, together with the corresponding ϵ_i · p_j, ϵ_j · p_i and ϵ_i · ϵ_j, will make the amplitude vanish.

Note that setting ϵ_i · ϵ_j = 0 is not a gauge-invariant statement unless we also have ϵ_i · p_j = ϵ_j · p_i = 0. Therefore the zero condition is well-defined and physically meaningful. For example, if we set not only c_1,3 = -2 p_1 · p_3 = 0 but also ϵ_1 · ϵ_3 = ϵ_1 · p_3 = ϵ_3 · p_1 = 0, the 4-gluon amplitude vanishes. Similarly, we can have 4(n-3) zero conditions associated with the “skinny rectangle” and other causal diamonds, such as the 16 conditions with (i,j)=(1,4),(1,5),(2,4),(2,5), which make the 6-gluon amplitude vanish. Now the question about factorization is more subtle. In particular, there is only one inner product that we can turn back on that does not spoil the gauge invariance of the other conditions: setting c_i,j ≠ 0 makes ϵ · p = 0 meaningless, while ϵ_i · p_j ≠ 0 makes ϵ_i · ϵ_j = 0 meaningless. So the only well-defined thing to do is to set ϵ_i · ϵ_j ≠ 0. Even so, the meaning of the factor multiplying ϵ_i · ϵ_j obtained in this limit is unclear. For this reason, at this stage, we do not have any generalization of the factorizations found for the previous theories. Such a generalization will only appear once we find a formulation of Yang-Mills theory that connects it to the other colored theories. Of course, while the connection between Tr(ϕ^3) and the NLSM is surprising, at least these are both theories of scalars, so it is even more surprising to connect Tr(ϕ^3) with Yang-Mills–for instance, where do the polarization vectors come from? As we will see in section <ref>, this new description of the n-gluon amplitudes will actually begin with a theory of 2n colored scalars, which will arise from a shift of the stringy Tr(ϕ^3) amplitudes for 2n scalars. Factorizing on poles where n pairs of these scalars fuse to produce gluons then gives us general n-gluon amplitudes. This formulation will make the zeros and factorizations present in this theory manifest.

§ STRINGY TR(Φ^3)

In this section, we generalize the zeros and factorizations of the Tr(ϕ^3) tree amplitude to the corresponding string amplitude: the so-called stringy integrals for the ABHY associahedron <cit.>, which also represent a tree-level example of so-called “binary geometry” <cit.>. These string amplitudes are in fact n-point generalizations <cit.> of the Veneziano amplitude <cit.>, known as the dual resonance model in the early days of string theory (see <cit.> for a review), and more recently they arise as a natural basis for n-gluon tree amplitudes in type-I superstring theory (see <cit.>).
The n-point amplitude is given by an integral over the moduli space of real points z_1,…,z_n on the boundary of the disk:

ℐ^Tr(ϕ^3)_n(1,2,...,n) = ∫_D(1…n) (dz_1 ⋯ dz_n)/(vol SL(2,ℝ)) × 1/(z_1,2 z_2,3 ⋯ z_n,1)_PT(1,2,⋯,n) × ∏_i<j z_i,j^2α^' p_i · p_j_Koba-Nielsen factor,

where the SL(2,ℝ) redundancy allows one to fix three punctures and the integration domain is the positive part of the real moduli space, M_0,n^+, or z_1<z_2<…<z_n (with 3 of them fixed); here z_i,j := z_j - z_i > 0 for i<j, and the integrand is given by the Parke-Taylor factor, PT(1,2,⋯,n), times the universal Koba-Nielsen factor (we have omitted the overall prefactor (α^')^n-3). The low-energy limit, where α^' X_i,j ≪ 1, yields the Tr(ϕ^3) tree amplitudes, and the low-energy α^'-expansion of these integrals has been extensively studied in the literature (the so-called Z-theory <cit.>). To translate this to stringy integrals for binary geometries, we introduce u variables (one for each chord (i,j)), which are SL(2,ℝ)-invariant cross-ratios defined as follows:

u_i,j = (z_i-1,j z_i,j-1)/(z_i,j z_i-1,j-1).

In terms of u's, the Koba-Nielsen factor becomes ∏_i,j u_i,j^α' X_i,j with planar variables X_i,j = (p_i + p_i+1 + ⋯ + p_j-1)^2 (with X_i,i+1=0). Note that there are n(n-3)/2 u variables, which satisfy the u equations <cit.> (see earlier works e.g. <cit.>):

u_i,j + ∏_(k,l) cross (i,j) u_k,l = 1 ,

such that when any u_i,j → 0, all incompatible u_k,l → 1 (those whose chord (k,l) intersects (i,j)), hence the name “binary geometry”. The ordering of the z_i is equivalent to requiring all u variables to be positive (which implies that 0<u_i,j<1); thus we have the (n-3)-dimensional positive binary geometry, U_n^+ ∼ M_0,n^+, which has the shape of a (curvy) associahedron <cit.>. As shown in <cit.>, the Parke-Taylor factor PT(1,2,⋯,n) with the measure is nothing but the canonical form of this space, which we denote as Ω(U_n^+), thus we have

ℐ^Tr(ϕ^3)_n(1,2,...,n) = ∫_U_n^+ Ω(U_n^+) ∏_i<j u_i,j^α^' X_i,j.

To be more explicit, one can choose any positive parametrization of U_n^+; a convenient one is obtained by first fixing the SL(2,ℝ) redundancy with the gauge choice z_1=0, z_n-1=1, z_n=∞, and then changing the remaining z variables to the positive y variables as

z_2 = (y_1,3 ⋯ y_1,n-2)/(1 + y_1,n-2 + ⋯ + y_1,n-2 y_1,n-3 ⋯ y_1,3) ,
z_3 - z_2 = (y_1,4 ⋯ y_1,n-2)/(1 + y_1,n-2 + ⋯ + y_1,n-2 y_1,n-3 ⋯ y_1,3) , ⋯ ,
1 - z_n-2 = 1/(1 + y_1,n-2 + ⋯ + y_1,n-2 y_1,n-3 ⋯ y_1,3) .
After the variable transformation, the integral becomes

ℐ^Tr(ϕ^3)_n(1,2,...,n) = ∫_ℝ_>0^n-3 ∏_i=3^n-1 (dy_1,i/y_1,i) y_1,i^α' X_1,i ∏_1≤ i<j<n F_i,j(y)^-α' c_i,j,

where the F-polynomials for our ray-like triangulation, F_i,j(y), are defined below in (<ref>), and the exponents c_i,j are the non-planar variables (without c_i,n) satisfying (<ref>). More generally, for any initial triangulation, the n-point tree-level amplitude can be written as

ℐ_n^Tr(ϕ^3) = ∫_ℝ_>0^n-3 ∏_I=1^n-3 (dy_I/y_I) ∏_(a,b) u_a,b(y)^α' X_a,b = ∫_ℝ_>0^n-3 ∏_I=1^n-3 (dy_I/y_I) y_I^α' X_I ∏_i,j F_i,j(y)^-α' c_i,j,

where {y_I > 0} for I=1,⋯,n-3 specify a triangulation of the n-gon, which provides a positive parametrization of U_n^+ (thus the Parke-Taylor form becomes Ω(U_n^+) = ∏_I dlog y_I). For each curve/chord, (a,b), the u-variable u_a,b is a nice rational function of the y_I, discussed in detail in <cit.>, and the Koba-Nielsen factor becomes ∏_I y_I^α' X_I times the product of the (n-2)(n-3)/2 F_i,j(y), which are the F-polynomials in the y_I's for this triangulation <cit.>, with exponents -α' c_i,j. Of course (<ref>) is just the tree-level/disk instance of stringy integrals associated with general surfaces S, which correspond to “stringy” Tr(ϕ^3) amplitudes in the genus expansion (details can again be found in <cit.>):

I_S^Tr(ϕ^3) = ∫_ℝ_>0^d ∏_I=1^d (dy_I/y_I) ∏_Γ u_Γ(y)^α' X_Γ ,

where, given a triangulation of the surface S, we have positive coordinates y_I for I=1,2,⋯,d, and for every curve on S, denoted by Γ, we have a u-variable u_Γ(y), which is a rational function of the y_I, with the kinematic variable X_Γ as its exponent.

In section <ref> we concluded that by setting a collection of c_i,j=0 the field theory Tr(ϕ^3) amplitude vanishes, and that by turning one c_⋆ back on, the amplitude factorizes into three pieces, corresponding to lower-point amplitudes. Both these properties heavily relied on the Minkowski sum picture for the ABHY associahedron encoding these amplitudes. This picture emerges from (<ref>) (for any triangulation) in the α' → 0 limit <cit.>. In this section, we study how the field-theory statements generalize to the stringy Tr(ϕ^3) integral (<ref>). Already in the field theory case, we concluded that we could access all zeros and factorizations by choosing the realization of the ABHY associahedron determined by ray-like triangulations (which produced a triangular region in the kinematic mesh). The same is true for string amplitudes, and so we will mostly be considering positive parametrizations {y_I} of (<ref>) corresponding to ray-like triangulations <cit.>. In this case, the F-polynomials have a simple recursive structure, say at n-point with triangulation {y_1,3, y_1,4, ⋯, y_1,n-1}:

F_i,j = 1 + y_1,j + y_1,j y_1,j-1 + ⋯ + y_1,j ⋯ y_1,i+2.

Since each F_i,j is automatically associated with the non-planar Mandelstam, c_i,j, it is useful to organize them in the mesh picture. In figure <ref> we present the 5-point and 6-point kinematic meshes obtained when considering, respectively, the triangulations {(1,3),(1,4)} and {(1,3),(1,4),(1,5)}, as well as the corresponding F-polynomials. Choosing a ray-like triangulation, the kinematic variables appearing in (<ref>), once written in terms of the F_i,j, are: X_I and c_i,j with (i+1,j+1) ≠ I. This forms a kinematic basis naturally associated with the corresponding triangular region of the kinematic mesh.
For example, at 6-point, for triangulation {y_1,3,y_1,4,y_1,5} we have {X_1,3,X_1,4,X_1,5} and {c_1,3,c_1,4,c_1,5,c_2,4,c_2,5,c_3,5}. It is well known that string amplitudes have poles when α^' X_a,b equals some non-positive integer (with X_a,b = 0 corresponding to the pole of the field-theory amplitudes) <cit.>. However, as in the field-theory case, the kinematical locus where they vanish has not been studied as extensively. Let us now understand how the field theory zeros identified in section <ref> generalize to the string amplitudes.

§.§ Zeros of Tr(ϕ^3) string amplitudes

We begin with the simplest case, the 4-point amplitude. Considering the triangulation of the square containing the chord (1,3), we get the following string amplitude:

ℐ_4^Tr(ϕ^3) = ∫_ℝ_>0 (dy_1,3/y_1,3) y_1,3^α^'X_1,3 (1+y_1,3)^-α^'c_1,3 = Γ[α^'X_1,3] Γ[α^'(c_1,3-X_1,3)]/Γ[α^'c_1,3] = Γ[α^'X_1,3] Γ[α^'X_2,4]/Γ[α^'c_1,3],

which is exactly the Beta function B(α^'X_1,3,α^'X_2,4). In the field-theory limit (α^' → 0), we get ℐ_4^Tr(ϕ^3) → c_1,3/(α^'X_1,3X_2,4), which vanishes for c_1,3=0, as discussed previously. However, from the Beta function (<ref>), we see that, in addition to this zero, the full string amplitude vanishes whenever α^' c_1,3 is a non-positive integer. From the integral representation, by setting α^' c_1,3 = -n, with n ∈ ℕ_0, we get:

ℐ_4^Tr(ϕ^3) → ∑_k=0^n (n choose k) ∫_ℝ_>0 (dy_1,3/y_1,3) y_1,3^α^'X_1,3+k_=0 = 0.

The integrals appearing in the sum are divergent; however, they all vanish by analytic continuation. An integral of the form ∫_ℝ_>0 (dy/y) y^α^'X bears resemblance to a scaleless integral in the context of Feynman integrals. The parameter α^' serves as a regulator, causing the integral to vanish. Specifically, we have:

∫_ℝ_>0 (dy/y) y^α^'X = ∫_0^1 (dy/y) y^α^'X + ∫_1^∞ (dy/y) y^α^'X = -(y^α^'X/(α^'X))|_y=0 + (y^α^'X/(α^'X))|_y=∞.

In the first part of the integral, α^' is analytically continued to α^'X > 0, while in the second part, α^' is analytically continued to α^'X < 0. As a result, both pieces of the integral evaluate to zero. As we will see shortly, the vanishing of this class of one-dimensional integrals is behind the general patterns of zeros we will describe in string amplitudes.

Without loss of generality, let us choose the initial ray-like triangulation to be y_1,i for i=3,⋯,n-1; then, from (<ref>), all the F-polynomials depending on y_1,i are those F_a,b with 1 ≤ a ≤ i-2, i ≤ b ≤ n-1. Looking at the mesh, this is saying that all the F-polynomials that depend on y_1,i are contained inside the causal diamond anchored at X_1,i (see figure <ref>). So, by setting all the c_a,b = -n_a,b, with n_a,b ∈ ℕ_0, inside this causal diamond, the stringy integral vanishes because the integration in y_1,i reduces to (<ref>):

ℐ_n^Tr(ϕ^3) → ∑_k_a_1,b_1,…,k_a_N,b_N=0^n_a_1,b_1,…,n_a_N,b_N (remaining integrals) × ∫_ℝ_>0 (dy_1,i/y_1,i) y_1,i^α^'X_1,i+k_a_1,b_1+…+k_a_N,b_N_=0 = 0,

where N corresponds to the number of c's inside the causal diamond considered. Note that the integral in y_1,i can also be viewed as the 4-point integral with c set to zero, and therefore vanishes.

Zeros In field theory, we concluded that the zero locus corresponded to setting the c's inside some causal diamond anchored at X_B to zero. For string amplitudes the zero locus is now given by setting α^' times the same collection of c's to arbitrary non-positive integers, i.e. for the mesh corresponding to the triangulation (1,3),…,(1,n-1), if we want the zero to come from the integration in y_1,i:

α^'c_a,b = -n_a,b , for 1 ≤ a ≤ i-2 , i ≤ b ≤ n-1, and n_a,b ∈ ℕ_0.
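At 4 points the statement is easy to test numerically: the following sketch (ours, not from the paper) evaluates the Beta function at α^'c_1,3 = -n for a few n and finds zero, since the Γ[α^'c_1,3] in the denominator develops a pole there.

```python
# Numerical check (a sketch, not from the paper) of the 4-point string zeros:
# B(a, b) = Gamma(a) Gamma(b) / Gamma(a+b) vanishes whenever
# a + b = alpha' c13 is a non-positive integer, since Gamma(a+b) has a pole.
from mpmath import mp, beta

mp.dps = 30
aX13 = mp.mpf('0.4')                 # generic alpha' X13
for n in range(4):                   # alpha' c13 = 0, -1, -2, -3
    aX24 = -n - aX13                 # so alpha' c13 = aX13 + aX24 = -n
    print(n, beta(aX13, aX24))       # -> 0 to working precision
```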
Since each zero causal diamond is uniquely determined by the X_B it is anchored on, the total number of zeros for the Tr(ϕ^3) field theory amplitude is n(n-3)/2, which is equal to the number of X_i,j's, and so to the number of poles. Next, we will study factorization near such zeros. Exactly as in the field theory case, we do this by relaxing one condition: we set all but one c_a,b inside the causal diamond to non-positive integers.

§.§ Factorization around the zeros

For simplicity, let us begin by understanding what happens when we set all the c_a,b inside the causal diamond to zero, except for one of them, c_⋆ ≠ 0. In this case, the amplitude factorizes into three pieces, just like in the field theory case. If, instead, we allow the c's to be negative integers, then we get a sum of factorized terms, with interesting kinematic shifts, as we will see shortly. We have already understood the reason for factorization in the field theory limit from the perspective of the Minkowski-sum picture for the associahedron. We will now see a related but different derivation of factorization whose fundamental origin lies in certain separation properties of the F-polynomials appearing in the stringy integral. This will generalize our observations about factorization to the full stringy amplitude [While we will not dwell on this point, our previous derivation centered on Minkowski sums and the one we present now are closely connected, since the Minkowski summands are nothing other than the Newton polytopes of the F-polynomials.].

§.§.§ Examples: n=5,6

We begin by studying our two simple examples of n=5,6 string amplitudes. This will expose the basic mechanism for factorization arising from special properties of the F-polynomials, which motivate simple changes of the y variables, giving rise both to the factorization of the amplitude as well as precisely the same interesting kinematic shifts we encountered in our field-theoretical analysis. At n=5, the stringy integral reads:

ℐ_5^Tr(ϕ^3) = ∫_0^∞ ∏_i=3^4 (dy_1,i/y_1,i) y_1,i^α^'X_1,i ∏_i<j F_i,j(y)^-α^'c_i,j = ∫_0^∞ (dy_1,3/y_1,3)(dy_1,4/y_1,4) y_1,3^α^'X_1,3 y_1,4^α^'X_1,4 (1+y_1,3)^-α^'c_1,3 (1+y_1,4)^-α^'c_2,4 (1+y_1,4+y_1,3 y_1,4)^-α^'c_1,4.

Let us consider the zero associated with the skinny rectangle, {c_1,3,c_1,4}. Setting c_1,3=c_1,4=0 the integral becomes:

ℐ_5^Tr(ϕ^3) → (∫_0^∞ (dy_1,3/y_1,3) y_1,3^α^'X_1,3) ∫_0^∞ (dy_1,4/y_1,4) y_1,4^α^'X_1,4 (1+y_1,4)^-α^'c_2,4 = 0.

Now letting c_1,4 ≠ 0, the answer factorizes as follows:

ℐ_5^Tr(ϕ^3) → ∫_0^∞ (dy_1,3/y_1,3) y_1,3^α^'X_1,3 ∫_0^∞ (dy_1,4/y_1,4) y_1,4^α^'X_1,4 (1+y_1,4)^-α^'c_2,4 (1+y_1,4+y_1,3y_1,4)^-α^'c_1,4 = ∫_0^∞ (dy_1,3/y_1,3) y_1,3^α^'X_1,3 ∫_0^∞ (dy_1,4/y_1,4) y_1,4^α^'X_1,4 (1+y_1,4)^-α^'(c_2,4+c_1,4) (1 + y_1,3y_1,4/(1+y_1,4))^-α^'c_1,4.

Now changing variables to ỹ_1,3 = y_1,3y_1,4/(1+y_1,4), we get:

ℐ_5^Tr(ϕ^3) → ∫_0^∞ (dỹ_1,3/ỹ_1,3) ỹ_1,3^α^'X_1,3 (1+ỹ_1,3)^-α^'c_1,4 ∫_0^∞ (dy_1,4/y_1,4) y_1,4^α^'(X_1,4-X_1,3) (1+y_1,4)^-α^'(c_2,4+c_1,4-X_1,3) = ℐ_4^Tr(ϕ^3)(α^' X_1,3, α^'(c_1,4-X_1,3)) × ℐ_4^up,Tr(ϕ^3)(α^'(X_1,4-X_1,3), α^'(c_2,4+c_1,4-X_1,4)) = ℐ_4^Tr(ϕ^3)(α^' X_1,3, α^'X_2,5) × ℐ_4^up,Tr(ϕ^3)(α^' X_2,4, α^'X_3,5).
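This stringy factorization can also be checked by direct numerical integration. The sketch below (ours, not from the paper) integrates the 5-point stringy integrand at a generic kinematic point with c_1,3 = 0 (and α^' = 1) and compares against the product of Beta functions just derived; the quadrature over the integrable endpoint singularities converges slowly, so small numerical discrepancies and integration warnings are to be expected.

```python
# Numerical check (a sketch, not from the paper) of the stringy 5-point
# factorization at c13 = 0 (units with alpha' = 1):
#   I5 -> B(X13, c14 - X13) * B(X14 - X13, c24 + c14 - X14).
import numpy as np
from scipy.integrate import dblquad
from scipy.special import beta

X13, X14, c14, c24 = 0.8, 1.0, 1.2, 1.1   # generic point, chosen so that all
                                          # Beta-function arguments are > 0

val, err = dblquad(
    lambda y4, y3: y3**(X13 - 1) * y4**(X14 - 1)
                   * (1 + y4)**(-c24) * (1 + y4 + y3*y4)**(-c14),
    0, np.inf, 0, np.inf)

pred = beta(X13, c14 - X13) * beta(X14 - X13, c24 + c14 - X14)
print(val, pred)                          # agree within the quadrature error
```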
Then we have a product of smaller stringy amplitudes, a (trivial) 3-point “down” amplitude and the “up” amplitude ℐ_4^up,Tr(ϕ^3)(α^' X_2,4,α^'X_3,5), whose (interestingly redefined) kinematic variables X_2,4, X_3,5 precisely agree the same way as our previous analysis as summarized in figure <ref>. The mechanism for factorization is easy to see as a consequence of a “separation property” of F-polynomials.After setting c_1,3→ 0, all the non-trivial dependence on y_1,3 is in the F-polynomial (1 + y_1,4 + y_1,4 y_1,3) = (A + y_1,3 B) with A = (1 + y_1,4) and B=y_1,4 and quite nicely, both A, B are polynomials that appear in smaller amplitudes. We can manifest the factorization by the change of variables y_1,3B/A = ỹ_1,3, and this is what induces the interesting kinematic redefinitions in the smaller amplitude factors. At 6-point, choosing once more the triangulation {(1,3),(1,4),(1,5)} for n=6, the stringy integral is given by:ℐ_6^Tr(ϕ^3) =∫_0^∞∏_i=3^5 y_1,i/y_1,i y_1,i^α^'X_1,i∏_i<j F_i,j(y)^-α^'c_i,j= ∫_0^∞∏_i=3^5 y_1,i/y_1,i y_1,i^α^'X_1,i(1+y_1,3)^-α^'c_1,3 (1+y_1,4)^-α^'c_2,4(1+y_1,5)^-α^'c_3,5×(1+y_1,4(1+y_1,3))^-α^'c_1,4(1+y_1,5(1+y_1,4))^-α^'c_2,5(1+y_1,5(1+y_1,4(1+y_1,3)))^-α^'c_1,5. At 6-point we have our two kinds of zeros, the “skinny rectangle” and “big square” patterns. The near-zero factorizations for the skinny rectangles exactly mirror what we have already seen at 5 points so we will look at the square pattern, where setting c_14=c_24=c_15=c_25=0 makes the amplitude vanish. ℐ_6^Tr(ϕ^3) →∫_0^∞∏_i=3^5 y_1,i/y_1,i y_1,i^α^'X_1,i(1+y_1,3)^-α^'c_1,3 (1+y_1,5)^-α^'c_3,5= ∫_0^∞∏_i≠ 4 y_1,i/y_1,i y_1,i^α^'X_1,i(1+y_1,3)^-α^'c_1,3 (1+y_1,5)^-α^'c_3,5(∫_0^∞ y_1,4/y_1,4 y_1,4^α^'X_1,4)_=0=0. By turning on c_1,5, we expect the stringy integral to factorize into two 4-point amplitudes (up and down) and the 4-point prefactor:ℐ_6^Tr(ϕ^3) →∫_0^∞∏_i=3^5 y_1,i/y_1,i y_1,i^α^'X_1,i(1+y_1,3)^-α^'c_1,3 (1+y_1,5)^-α^'c_3,5(1+y_1,5+y_1,4y_1,5+y_1,3y_1,4y_1,5)^-α^'c_1,5=∫_0^∞∏_i=3^5 y_1,i/y_1,i y_1,i^α^'X_1,i(1+y_1,3)^-α^'c_1,3 (1+y_1,5)^-α^'(c_3,5+c_1,5)(1+(1+y_1,3)y_1,4y_1,5/(1+y_1,5))^-α^'c_1,5.So changing variables to ỹ_1,4 = (1+y_1,3)y_1,4y_1,5/(1+y_1,5), we get: ℐ_6^Tr(ϕ^3) →∫_0^∞ y_1,3/y_1,3 y_1,3^α^'X_1,3(1+y_1,3)^-α^'(c_1,3+X_1,4)×∫_0^∞ỹ_1,4/ỹ_1,4ỹ_1,4^α^' X_1,4 (1+ỹ_1,4) ^-α^'c_1,5×∫_0^∞ y_1,5/y_1,5 y_1,5^α^'(X_1,5-X_1,4)(1+y_1,5)^-α^'(c_3,5+c_1,5-X_1,4)=ℐ_4^down,Tr(ϕ^3)(α^' X_1,3,α^'(c_1,3+X_1,4-X_1,3))×ℐ_4^Tr(ϕ^3)(α^' X_1,4,α^'(c_1,5-X_1,4))×ℐ_4^up,Tr(ϕ^3)(α^' (X_1,5-X_1,4),α^'(c_3,5+c_1,5-X_1,5)) = ℐ_4^Tr(ϕ^3)(α^' X_1,4,α^'X_3,6)×ℐ_4^down,Tr(ϕ^3)(α^' X_1,3,α^'X_2,4)×ℐ_4^up,Tr(ϕ^3)(α^' X_3,5,α^'X_4,6). As in the 5-point example the mechanism for factorization is a separation property of the F polynomials. Having set c_1,4,c_2,4,c_2,5→ 0, the only non-trivial dependence on y_1,4 is in the F-polynomial 1 + y_1,5 + y_1,4 y_1,5 + y_1,3 y_1,4 y_1,5 = A + y_1,4 B, where A = (1 + y_1,5), B = y_1,5(1 + y_1,3). Again nicely A,B are (up to monomial factors) F-polynomials for smaller amplitudes. We then make the change of variable to B y_1,4/A = ỹ_1,4, and this induces the interesting kinematic shifts for the “up” and “down” four-point amplitudes.§.§.§ General Proof The general proof of factorization works in exactly the same way as we just saw in our examples; apart from decoration with indices, the steps are exactly the same as we saw above. We consider the zero associated with maximal causal diamond anchored to X_1,i, and consider turning back on c_⋆=c_k,m≠ 0. 
Then stringy integral factorizes as follows:ℐ_n^Tr(ϕ^3) →∫∏_l=3^i-1 y_1,l/y_1,l y_1,l^α^' X_1,l∫∏_j=i^n-1 y_1,j/y_1,j y_1,j^α^' X_1,j∏_1≤ a< b-1≤ i-1 F_a,b(𝐲)^-α^' c_a,b×F_k,m(𝐲)^-α^' c_k,m×∏_i-1≤ e< f-1< n-1 F_e,f(𝐲)^-α^' c_e,f. As in our examples, the key point is that the only F-polynomial depending on y_1,i is F_k,m(𝐲)=1+y_1,m+…+y_1,m⋯ y_1,k+2≡ A+y_1,i B, with A=F_i-1,m and B=F_k,i-1∏_p=i+1^my_1,p, which suggests the variable change B y_1,i/A = ỹ_1,i. Following our noses this will give the factorization for the amplitude into smaller string amplitudes, with the same kinematical redefinition encountered in the field theory limit. Working everything out explicitly, we easily perform the integral over y_1,i:∫ y_1,i/y_1,i y_1,i^α^' X_1,iF_k,m(𝐲)^-α^' c_k,m= ∫ y_1,i/y_1,i y_1,i^α^' X_1,i (A+y_1,i B)^-α^' c_k,m=A^-α^' c_k,m+α^' X_1,iB^-α^' X_1,i∫ỹ_1,i/ỹ_1,iỹ_1,i^α^' X_1,i (1+ỹ_1,i )^-α^' c_k,m=A^-α^' c_k,m+α^' X_1,iB^-α^' X_1,i×ℐ_4^Tr(ϕ^3)(α^' X_1,i,α^'(c_k,m-X_1,i))_ℐ_4^Tr(ϕ^3)(α^' X_B,α^' X_T),where in the second line we used our change variables ỹ_1,i = B y_1,i/A, which reduces the integral to the 4-point stringy integral. This 4-point factor is exactly analogous to the one we saw in the factorization of the field theory amplitudes, (1/X_B+ 1/X_T), except that now we get indeed the full 4-point string amplitude. Exactly in the same way as in the field theory, this factor makes manifest that if we further set c_k,m to a non-positive integer the whole amplitude vanishes.Plugging this result back into (<ref>), ∫∏_l=3^i-1 y_1,l/y_1,l y_1,l^α^' X_1,l∏_1≤ a< b-1≤ i-1 F_a b(𝐲)^-α^' c_a b∫∏_j=i+1^n-1 y_1,j/y_1,j y_1,j^α^' X_1,j∏_i-1≤ e< f-1< n-1 F_e,f(𝐲)^-α^' c_e,f× A^-α^' c_k,m+α^' X_1,iB^-α^' X_1,i×ℐ_4^Tr(ϕ^3)(α^' X_1,i,α^'(c_k,m-X_1,i))=∫∏_l=3^i-1 y_1,l/y_1,l y_1,l^α^' X_1,l∏_1≤ a< b-1≤ i-1 F_a,b(𝐲)^-α^' c_a,b^'∫∏_j=i+1^n-1 y_1,j/y_1,j y_1,j^α^' X_1,j^'∏_i-1≤ e< f-1< n-1 F_e,f(𝐲)^-α^' c_e, f^'×ℐ_4^Tr(ϕ^3)(α^' X_1,i,α^'(c_k,m-X_1,i))≡ℐ_i^down,Tr(ϕ^3) ×ℐ_n-i+2^up,Tr(ϕ^3)×ℐ_4^Tr(ϕ^3)(α^' X_1,i,α^'(c_k,m-X_1,i)),where in the third line we replace A(y⃗) and B(y⃗), and define the shifted exponents c^' and X^'. We have that: c_a,b^'=c_a,b except for c_k,i-1^'=c_k,i-1+X_1,i, while c_e,f^'=c_e,f except for c_i-1,m^'=c_i-1,m+c_k,m-X_1,i. Finally we also have X_1,j^'=X_1,j-X_1,i, for j=i+1,…,m. As we will see shortly, these shifts are exactly such that the up and down amplitudes depend on exactly the same kinematics as the respective up and down amplitudes in the field theory case, i.e. the lower point string amplitudes appearing in the factorization also follow the kinematic shifts summarized in figure <ref>. Looking at the kinematic mesh,ℐ_i^down,Tr(ϕ^3) denotes the string amplitude corresponding to the down triangular mesh, which isℐ_i^Tr(ϕ^3)(1,…,i-1,I) with p_I=-∑_j=1^i-1p_j, but now with shifted kinematic invariants:ℐ_i^down,Tr(ϕ^3) =∫∏_l=3^i-1 y_1,l/y_1,l y_1,l^α^' X_1,l∏_1≤ a< b-1≤ i-1 F_a,b(𝐲)^-α^' c_a,b^'= ℐ_i^Tr(ϕ^3)(1,…,i-1,I)|_X_l,i→ X_l,i+X_1,i=X_l,n, forl=2,…,k.,where the shifted rules for the kinematic invariant are shifting the X_l,i to X_l,i+X_1,i, which is equal to X_l,n since the zero conditions, for l=2,…,k. These shifted rules are equivalent to the definition of c_a,b^'. 
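A quick numerical sanity check of the master one-dimensional integral used above, ∫ dy/y y^α^'X(A+yB)^-α^'c = A^α^'(X-c)B^-α^'X ℐ_4^Tr(ϕ^3)(α^'X,α^'(c-X)), is also easy (a sketch assuming mpmath, with α^'=1 and generic positive values so the integral converges):

```python
import mpmath as mp

X, c, A, B = map(mp.mpf, ('0.4', '1.3', '2.0', '0.7'))
lhs = mp.quad(lambda y: y**(X - 1) * (A + y*B)**(-c), [0, mp.inf])
rhs = A**(X - c) * B**(-X) * mp.gamma(X) * mp.gamma(c - X) / mp.gamma(c)
print(lhs, rhs)   # identical up to quadrature error
```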
In turn, ℐ_n-i+2^up,Tr(ϕ^3) denotes the string amplitude corresponding to the upper triangular mesh, which is ℐ_n-i+2^Tr(ϕ^3)(i,…,n,J) with p_J=-∑_j=i^np_j, but with shifted kinematic invariants:ℐ_n-i+2^up,Tr(ϕ^3) =∫∏_j=i+1^n-1 dy_1,j/y_1,j y_1,j^α^' X_1,j^'∏_i-1≤ e< f-1< n-1 F_e,f(𝐲)^-α^' c_e,f^'=∫∏_j=i+1^n-1 dy_i-1,j/y_i-1,j y_i-1,j^α^' X_i-1,j^'∏_i-1≤ e< f-1< n-1 F_e,f(𝐲)^-α^' c_e,f^'=ℐ_n-i+2^Tr(ϕ^3)(i,…,n,J)|_X_i-1,j→ X_i-1,j-X_i-1,n=X_1,j, for j=m,…,n-1,where in the second line we change the integration variables y_1,j to y_i-1,j. The kinematic shifts send X_i-1,j to X_i-1,j-X_i-1,n, which equals X_1,j by the zero conditions, for j=m,…,n-1. These shift rules are equivalent to the definitions of c_e,f^' and X_1,j^'. In summary, by picking a zero causal diamond, c_a,b=0 for 1≤ a ≤ i-2, i≤ b≤ n-1, and letting c_k,m≠ 0, the stringy integral for Tr(ϕ^3) factorizes into lower-point amplitudes according to:ℐ_n^Tr(ϕ^3) →ℐ_i^down,Tr(ϕ^3)×ℐ_n-i+2^up,Tr(ϕ^3)×ℐ_4^Tr(ϕ^3)(α^' X_1,i,α^'(c_k,m-X_1,i)). The shift rules for the up and down amplitudes are exactly those presented in figure <ref>, and can be summarized as follows:X_l,i→ X_l,i+X_1,i=X_l,n , for l=2,…,k,X_i-1,j→ X_i-1,j-X_i-1,n=X_1,j , for j=m,…,n-1.§.§.§ Factorization for general negative integersWe have seen that when all the mesh constants but one in a maximal causal diamond are set to zero, the amplitude simplifies by factoring into a product of smaller amplitudes with non-trivially modified kinematics.Just as the statement about zeros extends to the full string amplitude when the mesh constants are set more generally either to zero or to negative integers, the near-zero factorizations also generalize to this case. We will see that instead of simply factoring into a product of smaller amplitudes, we get an interesting sum over products of smaller amplitudes with redefined kinematics. To see how this works, let us consider as an example a skinny-rectangle factorization where we set c_1,3=-n_1,3, c_1,5 = -n_1,5 but turn on c_1,4 (here and below we set α^'=1 to reduce clutter). The stringy integral becomes I_6 = ∫_0^∞ dy_1,3 dy_1,4 dy_1,5/(y_1,3 y_1,4 y_1,5) y_1,3^X_1,3 y_1,4^X_1,4 y_1,5^X_1,5× (1 + y_1,3)^n_1,3 (1 + y_1,5(1 + y_1,4(1 + y_1,3)))^n_1,5×(1 + y_1,4(1 + y_1,3))^-c_1,4(1 + y_1,4)^-c_2,4 (1 + y_1,5(1 + y_1,4))^-c_2,5 (1 + y_1,5)^-c_3,5 . Relative to our usual expressions when mesh constants are set to zero, we have the extra factor on the first line, (1 + y_1,3)^n_1,3 (1 + y_1,5(1 + y_1,4(1 + y_1,3)))^n_1,5. The important fact is that for n_1,3, n_1,5 non-negative integers, this factor is a finite polynomial in y_1,3,y_1,4, y_1,5: we have (1 + y_1,3)^n_1,3 (1 + y_1,5(1 + y_1,4(1 + y_1,3)))^n_1,5 = ∑_k_1,3,k_1,4,k_1,5 C_k_1,3,k_1,4,k_1,5(n_1,3,n_1,5) y_1,3^k_1,3 y_1,4^k_1,4 y_1,5^k_1,5,where C_k_1,3,k_1,4,k_1,5(n_1,3,n_1,5) are constants arising simply from performing the multinomial expansion of each term; while these can be trivially computed, the detailed expressions are not important for us. But at this point, the expression is precisely the one we encountered previously when the mesh constants were set to zero, except as a weighted sum, with weights C_k_1,3,k_1,4,k_1,5, of the previous amplitudes with kinematics shifted as X_1,3→ X_1,3 + k_1,3, X_1,4→ X_1,4 + k_1,4, X_1,5→ X_1,5 + k_1,5.
For instance if we set n_1,3=1, n_1,5 = 1, we have (1 + y_1,3)^1 (1 + y_1,5(1 + y_1,4(1 + y_1,3)))^1 = 1 + y_1,3 + y_1,5 + y_1,3 y_1,5 + y_1,4 y_1,5 + 2 y_1,3 y_1,4 y_1,5 + y_1,3^2 y_1,4 y_1,5.Now recall that for factorization when mesh constants are set to zero,we have the factorization I_6 → F_4(X) F_5(X) where F_4(X) = Γ[X_1,3] Γ[X_2,6]/Γ[X_1,3 + X_2,6] and F_5(X) =I_5(X_2,4, X_2,5→ X_1,5, X_3,5,X_3,6,X_4,6). Then at n_1,3 = n_1,5 = 1 we instead have the factorization I_6→ (F_4 F_5) + (F_4 F_5) (X_1,3→ X_1,3 + 1) + (F_4 F_5)(X_1,5→ X_1,5 + 1) + (F_4 F_5)(X_1,3→ X_1,3 + 1, X_1,5→ X_1,5 + 1) + (F_4 F_5)(X_1,4→ X_1,4 + 1, X_1,5→ X_1,5 + 1) + 2 (F_4 F_5)(X_1,3→ X_1,3 + 1, X_1,4→ X_1,4 + 1, X_1,5→ X_1,5 + 1) + (F_4 F_5)(X_1,3→ X_1,3 + 2, X_1,4→ X_1,4 + 1, X_1,5→ X_1,5 + 1). The story for near-zero factorizations when mesh constants are set to general negative integers is the same. We always get a factor which is a polynomial P in all the y's associated with the F polynomials raised to integer powers, which we can simply expand to get a big polynomial in the y's. This then gives us a factorization of exactly the same form as with vanishing mesh constants, but as a sum over shifted kinematics determined by the polynomial P.§ STRINGY ΔEFORMATIONIn this section, we propose a class of “universal” stringy models as a one-parameter deformation of the tree-level stringy Tr(ϕ^3) amplitude for an even number 2n of particles, which will be the basis of our unification of Tr(ϕ^3), pion and gluon amplitudes. This deformation amounts to inserting a factor (∏ u_e,e/∏ u_o,o)^α^'δ in the string integral (<ref>), where we take the product of all u_a,b with a,b both being even, over the product of u_a,b both being odd, and δ is the deformation parameter:ℐ_2n^δ=∫_ℝ_>0^2n-3∏_I=1^2n-3 y_I/y_I ∏_(a, b) u_a,b^α' X_a,b(∏_(e,e)u_e,e/∏_(o,o)u_o,o)^α^'δ,expanding this ratio, we see that this factor is simply shifting the kinematics, X_a,b, of the un-deformed, stringy Tr(ϕ^3) amplitude:I_2n^δ= I_2n^Tr(ϕ^3) [α' X_e,e→α'( X_e,e + δ), α' X_o,o→α' (X_o,o-δ)] . In addition, we claim that for different values of α^'δ we get different colored theories: * α^'δ =0: In this case, we get back the usual stringy Tr(ϕ^3) integral, so at low energies, we get the field theory amplitudes of the Tr(ϕ^3) Lagrangian: ℒ_Tr(ϕ^3)= Tr (∂ϕ)^2+ g Tr(ϕ^3).* α^'δ∈ (0,1): In this case we claim that, at low energies, we get field theory amplitudes of the U(N) Non-linear Sigma Model (c.f. <cit.>) :ℒ_NLSM=1/8 λ^2Tr(∂_μU^†∂^μU),with U=(𝕀+λΦ)(𝕀-λΦ)^-1. * α^'δ =1: In this case, we claim that, at low energies, we get Yang-Mills theory (YM). In fact, it is not simply YM, but instead gluons and adjoint scalars (YMS) (these YMS amplitudes have been studied in e.g. <cit.>), with the following Lagrangian: ℒ_YMS=-Tr(1/4 F^μν F_μν+1/2 D^μϕ^I D_μϕ^I-g_ YM^2/4∑_I ≠ J[ϕ^I, ϕ^J]^2). In the rest of this section, we will study in detail how this deformation works. However, for a more complete explanation of each shift see <cit.>. There are multiple ways to motivate why this new factor is the correct deformation <cit.>. Notwithstanding, in the context of this note, what interests us the most is to understand how it explains the fact that the zeros and factorizations are present for these three colored theories. Indeed it turns out that this is the only kinematic shift that one can do on the X_i,j's that preserves the c_i,j's! 
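Before going further, the preservation half of this claim can be confirmed in a few lines from the mesh relation c_i,j=X_i,j+X_i+1,j+1-X_i,j+1-X_i+1,j of the earlier sections (that relation is the only input here; the code itself is just a sketch in plain Python):

```python
def shift(a, b, delta):
    # +delta for (even, even), -delta for (odd, odd), 0 for mixed parity
    if a % 2 == b % 2:
        return delta if a % 2 == 0 else -delta
    return 0

delta, ok = 1, True
for i in range(1, 30):
    for j in range(i + 2, 30):
        # change of c_{i,j} = X_{i,j} + X_{i+1,j+1} - X_{i,j+1} - X_{i+1,j}
        dc = (shift(i, j, delta) + shift(i + 1, j + 1, delta)
              - shift(i, j + 1, delta) - shift(i + 1, j, delta))
        ok = ok and dc == 0
print(ok)   # True: every c_{i,j} is untouched by the deformation
```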
We will prove this shortly but let us start by understanding why this is the case and how it implies the generalization of the zeros/factorizations for any value of the deformation. From equation (<ref>), establishing the relation between the non-planar variables to the planar ones, we can easily see that by shifting X_e,e→X_e,e + δ and X_o,o→X_o,o-δ, while keeping X_o,e unchanged, the shift exactly cancels in (<ref>) thus preserving all c_i,j's. This means that independent of the underlying triangulation we choose, 𝒯, to parametrize u_i,j[y_𝒯], the c's appearing in the exponents of the F-polynomials in the string integral remain unchanged. Therefore the result of this shift can only change the exponents of y_i. Note that our derivation in sections <ref> and <ref> of the zeros/factorizations of the stringy integral is independent of these exponents of y_i. Therefore, the fact that the c_i,j's remain unchanged under this shift implies that the zeros and factorizations are also true for I^δ_n!We stress that the existence of zeros and factorizations for both field theory and string theory amplitudes is made obvious from the associahedron and the stringy δeformation we describe in this section. We expect that with these statements in hand, it should be possible to prove them from different starting points as well. For instance we have understood how the zeros and factorization of Tr ϕ^3, NLSM, and YMS field-theory amplitudes can be proven starting from their CHY formulas <cit.>, though the formalism doesn't make these facts obvious. We find it more natural and satisfying that these hidden properties of both string and field-theory amplitudes, together with the startling unity of all these colored theories, are manifested via stringy δeformation.§.§ Uniqueness of kinematical shift The striking fact is not only that this shift exactly preserves the non-planar variables, but it actually is the unique shift that does so.To prove this, we start by noticing that in order to preserve all c_i,j of a ray-like triangulation, it suffices to specify shifts of the planar variables X_a,b in the triangulation. This is because we can solve for all the remaining X variables in terms of the c_i,j's and X_a,b. While any shifts of n-3 initial X_a,b preserve c_i,j appearing for this triangulation, asking for the shift to preserve all the c_i,j, so that we have all the zeros and factorizations is much more constraining. Let us choose a specific triangulation, say the ray-like one with (1,3), (1,4), ⋯, (1, n-1), and specify the initial shifts asX_1,i→ X_1,i + δ_1,i, for i=3,4, ⋯, n-1. In order to preserve c_i,j of this triangulation (with 1≤ i<j<n) we must shift X_a,b according to the solution of (<ref>); in particular, we find the following shifts for X_i,n (the variables in the opposite edge of the triangular region):X_i,n→ X_i,n -δ_1, i+1, for i=2,3, ⋯, n-2. In other words, X_2,n must be shifted by the same amount but in the opposite direction as X_1,3, same for X_3,n and X_1,4, and so on. Now how can we preserve the remaining c_i,n for i=2,3,⋯, n-2? Note c_2,n=X_2,n+X_1,3-X_3,n and the shifts of X_2,n and X_1,3 cancel on the RHS, thus to preserve c_2,n we must have the shift of X_3,n vanishes, δ_1,4=0. To preserve c_3,n=X_3,n+X_1,4-X_1,3-X_4,n, we must have δ_1,5=δ_1,3. Similarly we find δ_1,6=0, δ_1,7=δ_1,5 and so on. Thus the only c-preserving shifts correspond to δ_1, e=0 and all δ_1,o equal. 
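This elimination is mechanical enough to automate. A sketch (assuming sympy) that solves for all c-preserving shifts of the planar variables of an n-gon, using the cyclic mesh relation with the adjacent, vanishing X's carrying no shift:

```python
import sympy as sp
from itertools import combinations

def c_preserving_shifts(n):
    def canon(a, b):
        a, b = ((a - 1) % n) + 1, ((b - 1) % n) + 1
        return (min(a, b), max(a, b))
    chords = [(a, b) for a, b in combinations(range(1, n + 1), 2)
              if 2 <= b - a <= n - 2]
    d = {p: sp.Symbol('d_%d_%d' % p) for p in chords}   # unknown shifts
    def dd(a, b):
        return d.get(canon(a, b), 0)   # vanishing (adjacent) X's do not shift
    eqs = [dd(i, j) + dd(i + 1, j + 1) - dd(i, j + 1) - dd(i + 1, j)
           for i in range(1, n + 1) for j in range(i + 2, i + n - 1)]
    return sp.linsolve(eqs, list(d.values()))

print(c_preserving_shifts(5))   # only the trivial, vanishing shift
print(c_preserving_shifts(6))   # a one-parameter family: +t on X_{e,e},
                                # -t on X_{o,o}, 0 on the mixed chords
```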
However, for odd n we further conclude that δ_1, o=0 as well, and only for even n we can have non-vanishing and equal δ_1,o, which we call -δ. This leads to δ X_e,e=-δ X_o,o=δ, and δ X_o,e=0, which are the only possible c-preserving shifts! §.§ Realization of the shift on momentaWe have expressed our shift directly in terms of its action on our basis of invariants X_i,j, but we can of course also describe it explicitly in terms of a shift on the momenta directly. It is actually slightly more convenient to describe the shift in terms of the vertices x_i^μ of the momentum polygon, from which the shift on the momenta p_i^μ = (x_i+1^μ - x_i^μ) can be inferred. To realize the shift for an 2n particle process, we imagine adding 2n dimensions of spacetime, orthogonal to the ones the original momentum polygon lives in, which are grouped into n pairs of different timelike and spacelike directions t_a^μ, s_a^μ for a=1, ⋯, n.So we have s_a · x_j = t_a · x_j = 0 for all a, j, and also t_a · t_b,t_a · s_b,s_a · t_b,s_a · s_b = 0 for a ≠ b. Finally we normalize t_a^2 =δ/2, s_a^2 = -δ/2.Then we can define the shift x^μ_2k→ x^μ_2k + t^μ_k, x^μ_2k+1→ x^μ_2k+1 + s^μ_k,sending X_2k, 2l = (x_2k - x_2l)^2→(x_2k - x_2l + t_k - t_l)^2 = X_2k,2l + δ ,X_2k+1,2l+1 = (x_2k+1 - x_2l +1)^2→(x_2k+1 - x_2l + 1 + s_k - s_l)^2 = X_2k+1,2l+1 - δ ,X_2k, 2l+1 = (x_2k - x_2l+1)^2→(x_2k - x_2l+1 +t_k - s_l)^2 = X_2k,2l+1,which is just our kinematic shift. §.§ 4-point amplitudesIn order to gain more intuition for the physics of our shifts, it is useful to study the shifted four-particle amplitude in some detail. Recall that with our shifts we have I^δ_4 = Γ[α^' (X_1,3 - δ)] Γ[α^'(X_2,4 + δ)]/Γ[α^'(X_1,3 + X_2,4)]. Of course for δ = 0, the low-energy amplitude is 1/α^' X_1,3 + 1/α^' X_2,4 of the massless Tr(ϕ^3) theory. We can get an idea of the massive spectrum of states in the UV completion by looking at the residue on the first massive pole at e.g. α^' X_1,3 = -1. This residue is (1 - α^' X_2,4), and using the familiar translation from s,t variables to center-of-mass energies and angles, t = ( cosθ - 1)s/2, the residue at α^' X_1,3 = α^' s = -1 becomes ( cosθ + 1)/2. The angular dependence on cosθ allows to read off that at this mass level we have the exchange of a particle of spin 0 and one of spin 1. Let us now turn on δ and see what happens as we vary α ^'δ from very small values near 0, to intermediate fractional values, and then near α^'δ→ 1 (see figure <ref>). At very low-energies for α^' X_1,3, α^' X_2,4≪ 1, we have I^δ_4 →α^'(Γ[-α^'δ] Γ[+α^'δ] ) × (X_1,3 + X_2,4),giving us the amplitude for NLSM with λ^2 = α^'Γ[-α^'δ] Γ[+ α^'δ].We can again get an idea of the spectrum of massive states, by looking at the first massive level; this is actually a shifted version of the massless pole we had at δ = 0; the residue at X_1,3 = δ is simply 1! Thus we learn that at the first massive level where X_1,3 = δ, we are exchanging a massive spin-0 particle. If α ^'δ≪ 1, there is a separation of scales between this state and the rest of the string states. At very low energies where X ≪δ, we have the amplitude for pions. At X_1,3 = δ we encounter an extra massive spin-0 particle. The amplitudes in the intermediate region δ≪ X_1,3,X_2,4≪1/α^' is simply that of the Tr(ϕ^3) theory, which softens the UV power-law growth of the low-energy NLSM amplitudes into falling 1/X behavior. 
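Before passing the string scale, this crossover is easy to see numerically; a sketch (assuming mpmath, with α^' = 1 and a small deformation) checking the claimed low-energy NLSM coupling:

```python
import mpmath as mp

ap, d = 1.0, 0.05        # alpha' = 1, small deformation alpha' * delta
def I4(X13, X24):
    return (mp.gamma(ap*(X13 - d)) * mp.gamma(ap*(X24 + d))
            * mp.rgamma(ap*(X13 + X24)))

lam2 = ap * mp.gamma(-ap*d) * mp.gamma(ap*d)   # claimed NLSM coupling lambda^2
for X in (1e-2, 1e-3, 1e-4):
    print(X, I4(X, X) / (lam2 * 2*X))          # ratio tends to 1 as X -> 0
```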
And ultimately for energies above the string scale α^' X_1,3,α^' X_2,4≫ 1 we see the tower of stringy excitations and the softest UV behavior characteristic of string theory. When α^'δ is no longer small, but say near ∼ 1/2, there is no separation between the first massive scalar and the rest of the string states, and so we get a purely stringy UV completion with no intermediate Tr(ϕ^3) regime. The situation becomes more interesting as we approach α^'δ→ 1. Let us put α^'δ = 1 - ϵ. Then we have a light state X_1,3 = - ϵ. The residue on this pole is Γ[α^' X_2,4 + 1 - ϵ]/Γ[X_2,4 - ϵ] = (X_2,4 - ϵ). Again translating to energies and angles, we see that at this small mass we are exchanging both a spin-0 and a spin-1 particle. Thus, for α^'δ very close to 1, we see pion amplitude at low energies, but encounter, instead of a massive spin-0 particle as we did for α^'δ close to 0, a massive spin-1 particle (and a scalar), once again softening the UV behavior of the low-energy theory. This is familiar physics from real-world QCD, where the ρ meson plays the dominant role in the unitarization of pion scattering. This makes it clear what we must expect at α^'δ→ 1. The light massive spin-1 particle becomes massless in this limit, and the only consistent theory we could be describing is Yang-Mills theory! Of course we have an amplitude for external scalars, and so we are describing colored scalars coupled to gluons. Exactly at α^'δ = 1, the amplitude is I^δ = 1_4 = Γ[α^' X_1,3 - 1] Γ[α^' X_2,4 + 1]/Γ[α^'(X_1,3 + X_2,4)]. Note that these shifts keep a massless gluon pole at X_1,3 = 0, but remove it at X_2,4 = 0. Thus we must interpret this amplitude as that of two different colored scalars A,B, in the configuration 1^A 2^A 3^B 4^B, so that the X_1,3 channel gluon exchange is allowed by the X_2,4 channel exchange is forbidden. This interpretation is easily confirmed by looking at the low-energy limit, where the amplitude becomes X_2,4/X_1,3+1, precisely corresponding to X_1,3 channel gluon exchange for 1^A 2^A 3^B 4^B scattering plus the four-vertex contact diagram. As we will now see, this story generalizes for all shifted 2n particle amplitudes. For δ =0 we have the amplitudes for Tr(ϕ^3) theory at low energies. Instead for general fractional δ, the low-energy amplitudes are those of NLSM. When α^'δ is small, the NLSM amplitudes are first UV softened into those of Tr(ϕ^3) at energies above ∼δ, before being further UV softened into string amplitudes above the scale 1/α^'. For α^'δ of order one, the UV softening of the low-energy NLSM amplitudes is purely stringy. But as α^'δ increases further and approaches one, colored massive spin-1 particles descend from the string states, and at α^'δ = 1, we are describing the amplitudes for pairs of n distinct colored scalars, ordered asM_1,⋯, n ( 1^A_1 2^A_1 3^A_2 4^A_2⋯ (2n - 1)^A_n (2n)^A_n).As we will discuss further below and explore at much greater length in <cit.>, these amplitudes give us, inter-alia, direct access to n-gluon amplitudes, by factorizing 2n-scalar amplitudes on gluon poles where (p_2k - 1 + p_2k)^2 → 0.§.§ α^'δ∈ (0,1) and the Non-linear Sigma ModelFor α^'δ non-integer we claim thatℐ_2n^δ=∫_ℝ_>0^2n-3∏_I=1^2n-3 y_I/y_I ∏_(e, e) u_e,e^α' (X_e,e+δ)×∏_(o, o) u_o,o^α' (X_o,o-δ)×∏_(o, e) u_o,e^α'X_o,e ,yields NLSM tree-level amplitudes at low energies, i.e. in the α^'→ 0 limit, thus explaining all the mysterious zeros and factorizations observed for these amplitudes. To show that this is the case let us understand the consequences of this shift. 
For δ=0, the fact that the string amplitude has poles associated with massless resonances for X_i,j→0, is because u_i,j^α^' X_i,j→ 1, and thus the integral develops a singularity in one of the integration boundaries. This is particularly easy to see for the X_i,j∈𝒯: in this case, the integral becomes singular for X_i,j=0 because it diverges near y_i,j=0 as we see in (<ref>). For δ≠ 0, we lose the poles corresponding to X_e,e,X_o,o=0, since the divergences of the integral are still regulated by δ. Note that poles corresponding toX_e,e,X_o,o chords are associated with propagators enclosing odd-point interactions. Therefore, at leading order in the low energy expansion, we only have poles when X_o,e=0, which are precisely the propagators appearing in diagrams built only out of even-point interactions. This is exactly the pole structure that we expect for NLSM amplitudes, and thus the first hint that we are going in the correct direction.Now provided we have the correct pole structure, i.e. we don't have any poles for X_e,e,X_o,o=0, then the fact that the amplitude vanishes in the skinny-rectangle type zero (which it does because the undeformed amplitude does) implies that this amplitude has vanishing soft limit, i.e. it has the Adler zero. This is exactly what we concluded before just for the case of field theory NLSM, that our zero implies the Adler zero for these amplitudes.Finally, just from u-equations, we have that on the X_o,e→ 0, the amplitude (<ref>) factorizes into the product of the respective lower-point amplitudes. At tree-level, the Adler zero together with factorization ensure that the low energy limit corresponds to NLSM amplitude <cit.>. Alternatively, we could make the same conclusion via the uniqueness theorem <cit.> even without the knowledge of factorization. For this note, this is enough since we are interested in understanding the zeros and factorizations of tree-level amplitudes. However, as it is explained in <cit.> this shift also allows us to obtain NLSM loop-level amplitudes.This way we have proved that (<ref>) defines a new completion of the NLSM amplitudes. Different stringy completions of the NLSM have been proposed <cit.>, but none make the zeros and factorizations manifest. In both cases, the stringy completion is manifestly cyclic, as opposed to our stringy completion, which is manifestly not cyclically symmetric but in which the leading order at low energies restores the cyclic symmetry expected for NLSM amplitudes.We thus see that the UV completion provided by this stringy formulation is not a familiar one. To understand this better let us look directly at what happens in the field-theory limit. §.§.§ NLSM from field theory Tr(ϕ^3)To extract the field theory limit, let us assume that we also have α^'δ≪ 1. Therefore we obtain:ℐ_2n^δ= ∫_ℝ_>0^2n-3∏_I=1^2n-3 y_I/y_I ∏_(e, e) u_e,e^α' (X_e,e+δ)×∏_(o, o) u_o,o^α' (X_o,o-δ)×∏_(o, e) u_o,e^α'X_o,e →𝒜_2n^Tr(ϕ^3)( X_e,e→ X_e,e + δ, X_o,o→X_o,o-δ),where 𝒜_2n^Tr(ϕ^3) stands for the field theory amplitude in Tr(ϕ^3) theory. Finally to get the real low energy behavior we need to further expand in X≪δ or, equivalently, δ→∞. 
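This large-δ expansion is mechanical, and it is worth anticipating the 4-point example worked out below with a two-line symbolic check (a sketch assuming sympy):

```python
import sympy as sp

X13, X24, d = sp.symbols('X13 X24 delta')
A4 = 1/(X13 - d) + 1/(X24 + d)       # the shifted 4-point Tr(phi^3) amplitude
print(sp.series(A4, d, sp.oo, 4))
# the 1/delta pieces cancel and the leading term is -(X13 + X24)/delta**2
```

The 6-point case proceeds identically, only with more terms to gather.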
From our previous argument, we have that the leading non-vanishing order in this expansion is the NLSM amplitude.This means that we can get the NLSM amplitude directly from the Tr(ϕ^3) field theory amplitude: A_2n^ NLSM=lim_δ→∞δ^2n-2 A_2n^Tr(ϕ^3)( X_e,e→ X_e,e + δ,X_o,o→X_o,o-δ),where the prefactor δ^2n-2 is there to ensure the correct units.So the UV completion provided by this stringy integral is one in which the NLSM arises as the low-energy limit of a theory of colored massive scalars. These scalars can be regular scalars with mass^2=δ or tachyons with mass^2=-δ. This is certainly an unfamiliar UV completion of the NLSM. Apart from the unusual presence of positive and negative mass^2 particles with precisely equal magnitudes, the UV amplitude is not cyclically invariant, while the NLSM amplitudes certainly are. Indeed the full UV amplitude does have a cyclic symmetry under i → i+1, but only if we also flip the sign δ→ -δ. This implies that in the 1/δ expansion, all terms with even powers of δ will be cyclically invariant while those with odd powers of δ will pick up a minus sign under a cyclic shift. Quite beautifully, for the shifted 2n particle amplitude, after the naively leading powers of 1/δ all cancel, we are left with an amplitude that begins at an even power 1/δ^2(n-1) and hence is cyclically invariant as desired. A simple Lagrangian that generates the shifted Tr(ϕ^3) amplitudes and explains the non-cyclic nature of the UV completion will be presented in <cit.>.4-point: Let us now look at the 4-point amplitude. Starting from Tr(ϕ^3) and performing the shift we get:𝒜_4^Tr(ϕ^3)( X_1,3→ X_1,3 - δ, X_2,4→X_2,4+δ) = 1/(X_1,3-δ) + 1/(X_2,4+δ);now expanding in δ≫ 1 yields:𝒜_4^Tr(ϕ^3)( X_1,3 - δ, X_2,4+δ) = 1/δ (1 -1) \underbrace{- 1/δ^2(X_1,3+X_2,4)}_{𝒜_4^NLSM} + 𝒪(1/δ^3),where indeed the first order cancels and the leading non-vanishing order gives the pion amplitude. Already in this small 4-point problem, we can appreciate how important it is that the mass shift of X_e,e is minus that of X_o,o. If this were not the case, the leading order would be non-vanishing and we would not get the NLSM; instead, the result would be closer to ϕ^4 theory.6-point: At 6-point it is convenient to start by writing the Tr(ϕ^3) amplitude in a way that makes manifest the X_o,e poles, as follows: 𝒜_6^Tr(ϕ^3) = 1/X_1,4( 1/X_1,3 + 1/X_2,4)(1/X_4,6 + 1/X_1,5) + (cyclic, i→ i+2)+ 1/X_1,3X_3,5X_1,5+ 1/X_2,4X_4,6X_2,6. Performing the shift on the 6-point Tr(ϕ^3) amplitude yields:𝒜_6^Tr(ϕ^3)( X → X ±δ) = 1/X_1,4( 1/(X_1,3-δ) + 1/(X_2,4+δ))(1/(X_4,6+δ) + 1/(X_1,5-δ)) + (cyclic, i→ i+2) + 1/(X_1,3-δ)(X_3,5-δ)(X_1,5-δ)+ 1/(X_2,4+δ)(X_4,6+δ)(X_2,6+δ);gathering terms and expanding in δ≫ 1 we get:𝒜_6^Tr(ϕ^3)( X → X ±δ) = -1/δ^4(- (X_1,3+X_2,4)(X_1,5+X_4,6)/X_1,4 + (cyclic, i→ i+2) + X_1,3+X_3,5+X_1,5+X_2,4+X_4,6+X_2,6) +𝒪(1/δ^5),which we can identify with the 6-point NLSM amplitude, 𝒜_6^NLSM.§.§.§ Factorizations near zeros The identification of NLSM amplitudes with those of Tr(ϕ^3) theory with the simple shifted kinematics X_e,e→ X_e,e + δ, X_o,o→ X_o,o - δ, X_e,o→ X_e,o allows us to also very simply understand the pattern of factorization near zeros we observed experimentally in section <ref>.
Precisely because these shifts are c-preserving, at the level of the Tr(ϕ^3) amplitudes the factorization patterns are exactly the same before and after the shifts.To begin with, consider the near-zero factorizations of 2n-particle amplitudes into “even points × even points”: even when taking into account the necessary kinematic shifts, we still end up with the same δ shifts of the X_e,e, X_o,o, X_e,o for each of the lower-point amplitudes. This proves that the “even × even” near-zero factorizations of NLSM amplitudes simply factor into products of NLSM amplitudes, just as we saw in section <ref>. The case of near-zero factorization into “odd × odd” amplitudes is somewhat more interesting, since, as we observed experimentally in section <ref>, we encounter the mixed amplitudes for cubic scalars ϕ and pions π. We can now easily understand the reason for this, as well as the interesting rule for “who is a ϕ and who is a π” that we delineated in section <ref>. As an example let us consider the n=6 factorization associated with the upper skinny rectangle in figure <ref>, where we turn on c_2. Let us focus on the 5-point factor, with the appropriate kinematic replacements. For clarity, we will denote the kinematics of the n=5 problem by Y variables Y_i,j. So all the Y_i,j = X_i,j except of course Y_1,5 = 0, and we have the kinematic replacement Y_2,5 = X_2,6.We can now perform the δ shift on all the variables: Y_1,3→ Y_1,3 - δ, Y_1,4→ Y_1,4, Y_2,4→ Y_2,4 + δ, Y_2,5 = X_2,6→ X_2,6 + δ = Y_2,5 + δ, Y_3,5→ Y_3,5 - δ. We denote the effect of this on the 5-point mesh by attaching a “+/-” to each variable, as shown in figure <ref> presented at the end of the paper.We can now see that the shifts in the n=5 mesh do not preserve all the c's. Nonetheless, some of the c's are preserved, as represented by the shaded meshes in the figure. This picture enables us to see that this n=5 factor does not have all of our zeros; not all the skinny rectangles remain unshifted. However, some of the zeros do survive: the ones naturally associated with soft limits for particles 1,3,5 are still clearly present in the picture. Now for particles 1,3, we can see that the collinear poles (p_1 + p_2)^2 = Y_1,3→ Y_1,3 - δ and (p_5 + p_1)^2 = Y_2,5→ Y_2,5 + δ are shifted, so these massless poles are absent, and hence the skinny-rectangle zero implies an Adler zero when particle 1 becomes soft. The same holds for particle 3. However, for particle 5, the collinear pole (p_4 + p_5)^2 = Y_1,4 is unshifted, and hence the skinny-rectangle zero does not imply a soft zero for particle 5. It is easy to see that factorization of the Tr(ϕ^3) amplitude after the shifts implies that the remaining particles must be interpreted as scalars with a Tr(ϕ^3) coupling. So we have learned that the 5-point factor in the near-zero factorization associated with turning on c_2 gives us a mixed amplitude A^ NLSM + ϕ^3(π, ϕ, π, ϕ, ϕ) <cit.>. This argument extends to the general pattern of “odd × odd” factorizations of NLSM amplitudes explained in section <ref>.§.§ α^'δ =1 and scaffolded gluonsNow let us explore what happens when the deformation becomes one in string units, i.e. α^'δ =1. Then the stringy integral becomes: ℐ_2n^δ=∫_ℝ_>0^2n-3∏_I=1^2n-3 dy_I/y_I ∏_(a, b) u_a,b^α' X_a,b∏_(e,e)u_e,e/∏_(o,o)u_o,o. As explained in section <ref>, at low energies we expect to get a theory of colored scalars and gluons, just like the one given by the Lagrangian (<ref>).
At tree-level, we can see that (<ref>) is what we get in the bosonic string amplitude, describing the scattering of 2n gluons, by choosing a particular kinematic configuration. This connection will make clear the origin of the scalars.§.§.§ Bosonic string connectionThe open bosonic string amplitude for 2n gauge bosons with polarizations ϵ_i is given by:𝒜^tree_n(1,2,…,2n) = ∫^2n z_i/SL(2,ℝ) ( ∏_i<j z_i,j^2α^' p_i · p_j). exp( ∑_i≠ j 2 ϵ_i ·ϵ_j /z_i,j^2 - √(α^')ϵ_i · p_j/z_i,j) |_multi-linear in ϵ_i,where z_i,j=z_i-z_j are the usual worldsheet coordinates (c.f. <cit.>). Let us now assume that the space is sufficiently high-dimensional, with D̃ dimensions, so that we can achieve the following kinematical configuration:p_i ·ϵ_j= 0, ∀ (i,j) ∈ (1,...,2n) ,ϵ_i ·ϵ_j= 1 if(i,j) ∈{(1,2);(3,4);(5,6);...;(2n-1,2n)} , 0 otherwise. This kinematic configuration can be easily achieved by considering the momentum to live in the first D dimensions (the ones corresponding to the usual dimensionality of space), and the polarizations to live in the extra dimensions.With this kinematical choice, all the polarizations are fixed, and the remaining degrees of freedom are the 2nD-dimensional momenta – exactly that of a 2n-scalar problem. The bosonic string integral (<ref>) simplifies enormously and we get the following single term: 𝒜_2n(1,2,...,2n) ∫^2n z_i/SL(2,ℝ) ∏_i<j z_i,j^2α^' p_i · p_j 1/z_1,2^2z_3,4^2z_5,6^2...z_2n-1,2n^2 = ∫^2n z_i/SL(2,ℝ) 1/z_1,2z_2,3z_3,4… z_2n,1∏_i<j z_i,j^2α^' p_i · p_j_Stringy Tr(ϕ^3) z_2,3z_4,5z_6,7… z_2n,1/z_1,2z_3,4z_5,6… z_2n-1,2n Now the u-variables are defined in terms of the worldsheet coordinates, as the following SL(2,ℝ)-invariant cross-ratios:u_i,j = z_i,j-1z_i-1,j/z_i,jz_i-1,j-1.Using this definition, one can see that the extra ratio in the integrand is exactly equal to (∏_(e,e)u_e,e/∏_(o,o)u_o,o), giving us back (<ref>).Therefore, we now understand that the scalars scattering in (<ref>) are secretly gluons in higher dimensions. Moreover, we see that they only interact with their immediate neighbors, since only ϵ_2i·ϵ_2i-1 are non-zero – which can be interpreted as there being n different species of such scalars that do not mix. Finally, the original cubic gluon interaction gives rise to a cubic interaction between a pair of scalars, (2i,2i-1), and a gluon with the corresponding Feynman rule (see figure <ref>). Therefore, starting from the 2n-scalar scattering, to access the n-point gluon amplitude, we need to take n residues, that put the gluons on-shell, i.e. take residues corresponding to X_1,3 = X_3,5 = … = X_1,2n-1=0. So from this perspective, we think of each gluon in the scattering process as coming from a pair of scalars – the gluons are scaffolded by scalars. This allows us to talk about spin-1 particles in a purely scalar way, which ultimately allows the connection to the simple theory of colored scalars that we started with. See figure <ref> for the case where, starting with a 10-point scalar, we can access the 5-point gluon amplitude, after taking the scaffolding residue. Such 2n-scalar bosonic string amplitudes as well as YMS amplitudes <cit.> in the field-theory limit, were studied in <cit.>.As we explain in <cit.>, the momentum and polarization of the gluons can be determined in terms of the momentum of the external scalars:q_i^μ = (p_2i+p_2i-1)^μ ϵ_i^μ ∝ (p_2i-p_2i-1)^μ,where the momentum of the gluon can be read off directly by momentum conservation and the polarization through the vertex Feynman rule in figure <ref>. 
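It may be useful to see the scaffolding kinematics realized explicitly. The following sketch (assuming numpy, and using a Euclidean dot product as a stand-in for the extra-dimensional metric, which does not affect the orthogonality pattern) builds one polarization direction per scaffolding pair:

```python
import numpy as np

rng = np.random.default_rng(0)
n, D = 3, 4                          # n gluons scaffolded by 2n scalars
# momenta occupy the first D components, polarizations the n extra ones
p = [np.concatenate([rng.normal(size=D), np.zeros(n)]) for _ in range(2*n)]
eps = []
for k in range(n):
    e = np.zeros(D + n); e[D + k] = 1.0   # one extra direction per pair
    eps += [e, e]                         # eps_{2k+1} = eps_{2k+2}
print(max(abs(pi @ ej) for pi in p for ej in eps))   # 0: p_i . eps_j = 0
print(np.array([[int(ei @ ej) for ej in eps] for ei in eps]))
# 2x2 blocks of ones: eps_i . eps_j = 1 only within a scaffolding pair
# (the diagonal entries never enter the multi-linear amplitude)
```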
In this scalar language, gauge invariance and linearity in the polarizations have their own avatar that is explained in detail in <cit.>.§.§.§ Scaffolding residue In order to extract the gluon amplitude we need to take the residues X_2i+1,2i-1=0. To easily access this residue it is useful to pick a positive parametrization {y_𝒯} corresponding to a triangulation including chords X_2i+1,2i-1. This way the singularity associated with X_2i+1,2i-1=0 comes from the divergence of the integral neary_2i+1,2i-1=0, and the residue of the amplitude turns into the residue of the integrand at y_2i+1,2i-1=0. As it is explained in <cit.>, for such a triangulation, the factor (∏_(e,e)u_e,e/∏_(o,o)u_o,o) simplifies to:∏_(e,e)u_e,e/∏_(o,o)u_o,o⟶1/y_1,3y_3,5… y_1,2n-1×1/∏_(k,m) ∈𝒯^' y_k,m,where 𝒯^' stands for the triangulation of the inner n-gon, with vertices {1,3,5,…,2n-1}, corresponding to the gluon amplitude. And thus the 2n-scalar amplitude becomes:ℐ_2n^δ=∫_ℝ_>0^2n-3∏_i=1^n y_2i-1,2i+1/y_2i-1,2i+1^2∏_I∈𝒯^' y_I/y_I^2 ∏_(a, b) u_a,b^α' X_a,b_Ω_2n ,which is exactly the stringy Tr(ϕ^3) integral where instead of a dlog form, we have y/y^2. So n-point gluon amplitude is then given by the low energy of:ℐ_n^gluon = ∫_ℝ_>0^n-3Res_y_1,3=0( Res_y_3,5=0( …(Res_y_1,2n-1=0( Ω_2n)) …) ) |_X_2i-1,2i+1=0. §.§.§ Zeros The stringy 2n-scalar has all the same zeros and factorizations as stringy Tr(ϕ^3) theory, but in order to understand the zeros/factorizations for gluon amplitudes,we need to study which of these survive after the scaffolding residue.As pointed out previously, to access the scaffolding residue it is useful to pick the underlying triangulation to contain {X_2i-1,2i+1}. To talk about the zeros we will pick the underlying triangulation to be as close as possible to the usual ray-like one by choosing the triangulation of the n-gon to be ray-like. Still, the fact that we have chords {X_2i-1,2i+1} means that the c_i,j appearing in the stringy integral are not exactly those of the usual triangle (see figure <ref> at the end of the paper).We are now going to study the zeros of the gluon amplitude for the case of 10 scalars → 5 gluons. This example is big enough to illustrate the non-trivial features and understand how it generalizes to higher points. At 10 points, let us consider the triangulation of the 10-gon: {X_1,3,X_3,5,X_5,7,X_7,9,X_1,9,X_1,5,X_1,7}, i.e. we have the scaffolding chords and a ray-like triangulation for the inner pentagon. For this choice of triangulation, the resulting region of the mesh is represented in figure <ref> as the shaded region. We see that there are only 3 meshes from the usual triangle that are now missing and instead are replaced by 3 meshes on the top.In figure <ref> we represent the F-polynomials entering the string integral inside the mesh, c_i,j, corresponding to their exponents. We further mark with red dots the scaffolding poles, X_i,j, that we need to take the residue on to localize on the gluon problem. The claim is that effectively, to read off the zeros/factorizations, we should think of the gluon mesh as being the one highlighted in red, as this is the one in which each mesh point is associated with one of the X_odd,odd entering the gluon amplitude. In this case, we should have a 5-point mesh, so we see that each usual square in a scalar mesh gets replaced with 4 squares, in the gluon mesh. In this new picture, to get a zero we need to see exactly the same patterns of meshes to zero, of course, now each individual mesh gets subdivided into four smaller meshes. 
Thus at 5-point, our usually expected codimension-2 zeros are mapped to codimension 2 × 4 = 8 zeros. This is exactly the codimension we obtained when we phrase them in terms of p_i · p_j, ϵ_i · p_j, ϵ_j · p_i, ϵ_i ·ϵ_j. It is worth noting that this is not a zero of the full 2n-scalar problem, but instead a zero only after taking the scaffolding residue to land on the n-gluon amplitude. Looking back at figure <ref>, one zero would correspond to setting c_i,7=c_i,8=0 for i∈{1,…,4}, or, in the full stringy case, to a negative integer. The reason why the answer vanishes in this limit is because, after the scaffolding residue, it reduces to a sum of integrals of the form:∫_0^∞ y_1,7/y_1,7 y_1,7^X_1,7+ n×( remaining integrations), withn∈ℕ_0,and so, for the usual reason, we get zero from the integration in y_1,7. It is easy to understand why this happens. Note that all the F_i,j inside the zero causal diamond depend on y_1,7, however, there are still other F-polynomials, outside this causal diamond, depending on y_1,7, such as F_i,9 for i∈{1,…,5}. But in all these cases y_1,7 always appears multiplied by some y_i,j associated with a scaffolding variable. Therefore, after taking the scaffolding residue, this dependence will either vanish since we are evaluating at y_scaff=0, or shift y_1,7^X_1,7. In either case, we will have the form (<ref>) after the scaffolding residue. Instead, the F-polynomials that are inside the zero causal diamond are those in which the dependence on y_1,7 is of the form: 1 + y_1,7 + … which are unaffected by the scaffolding residue and thus would not lead to (<ref>).Another possible zero is obtained by setting c_1,j=c_2,j=0 for j∈{5,…,8}, or, in the full stringy case, to a negative integer. In this case, the claim is that, after the scaffolding residue, the amplitude reduces to sums of integrals of the form of (<ref>) but where y_1,7 and X_1,7 get replaced for y_1,5 and X_1,5, and thus the zero comes from the integration in y_1,5. As for the case in y_1,7, all F-polynomials depending on y_1,5 outside this causal diamond are such that y_1,5 always appears multiplied by some y_2i-1,2i+1. Whereas, for all the F-polynomials inside the zero causal diamond, the dependence in y_1,5 is either of the form 1+y_1,5 + …, or y_1,5 appears multiplying one of the remaining y's of the inner n-gon, which in the 5-pt case is only y_1,7. The presence of these last terms would also not lead to (<ref>), and thus why these need to be inside the zero causal diamond. We can translate the locus of zeros we have just phrased in terms of vanishing c's in the more familiar language of dot products between polarization vectors and momenta. For instance consider our first set of zeros, where c_i,7 = c_i,8 = 0 for i=1, ⋯, 4. So e.g. for i=1,2 we have the four constraints p_1 · p_7, p_1 · p_8, p_2 · p_7, p_2 · p_8 = 0. Taking linear combinations of these relations is equivalent to the four statements (p_1 ± p_2) · (p_7 ± p_8) = 0, and given the map between polarization vectors and momenta where e.g. q_j = p_2j - 1 + p_2 j, ϵ_j = (p_2j - p_2j-1), this turns into the statements that q_1 · q_4 = 0, q_1 ·ϵ_4 = 0, ϵ_1 · q_4 = 0, ϵ_1 ·ϵ_4 = 0. 
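The equivalence of the two descriptions of the zero locus is just an invertible change of basis, which can be made explicit (a sketch assuming sympy):

```python
import sympy as sp

p17, p18, p27, p28 = sp.symbols('p1p7 p1p8 p2p7 p2p8')
basis = [p17, p18, p27, p28]
# q1 = p1 + p2, eps1 ~ p2 - p1, q4 = p7 + p8, eps4 ~ p8 - p7
combos = [ p17 + p18 + p27 + p28,    # q1 . q4
          -p17 + p18 - p27 + p28,    # q1 . eps4
          -p17 - p18 + p27 + p28,    # eps1 . q4
           p17 - p18 - p27 + p28]    # eps1 . eps4
M = sp.Matrix([[sp.diff(c, b) for b in basis] for c in combos])
print(M.det())   # 16: the map is invertible, so the four p_a . p_b
                 # conditions are equivalent to the four q/eps conditions
```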
This generalizes in the obvious way: every “big” mesh (i,j) is divided into four “small” meshes that are set to zero, and this is equivalent to the statements q_i · q_j=q_i ·ϵ_j = ϵ_i · q_j = ϵ_i ·ϵ_j = 0, just as we observed experimentally in section <ref>.For a general 2n-scalar to n-gluon amplitude, by drawing the effective gluon mesh (represented in red for the 5-point gluon problem in figure <ref>), the zero causal diamonds are exactly those identified in the usual scalar mesh, where now each square gets replaced by a set of 4 squares. If the zero for one such causal diamond comes from the integration in y_I of the internal n-gon, then the claim is that the F-polynomials lying inside this causal diamond include all those in which y_I appears without multiplying one of the scaffolding y's. §.§.§ Factorizations Once more, since the 2n-scalar amplitude is just a c-preserving deformation of stringy Tr(ϕ^3), all the factorizations of the latter are also true for the former. So, to understand which factorizations of the 2n-scalar amplitude turn into gluon factorizations, we need to study which ones survive after the scaffolding residue. The idea is then to find the factorizations in which the scaffolding variables appear in the lower-point amplitudes, so that the residue is non-vanishing. There are two such cases: either both lower-point amplitudes are odd-point, or both are even-point.To explain the different possible factorization patterns, we will study the 10-point scalar problem, as this example is big enough to include all the subtleties present for general 2n scalars.Let us consider the even-point factorizations of the 10-point scalar amplitude. For even-point factorizations, as long as we ensure that all the scaffolding variables appear in the lower problems, we have two lower-point 2n-scalar amplitudes. In addition, since for even-point factorizations the X_B and X_T associated with the causal diamond we're probing are both X_o,e, the prefactor is the usual 4-point amplitude of Tr(ϕ^3) theory:𝒜_2n^α^'δ =1 ( c_⋆≠ 0 ) →Γ(α^' X_B)Γ(α^' X_T)/Γ(α^'(X_B+X_T))×𝒜_2n_1^α^'δ =1, up×𝒜_2n_2^α^'δ =1, down,where c_⋆ is the mesh we are turning on inside the zero causal diamond, and 2n_1 and 2n_2 are the multiplicities of the lower even-point amplitudes, such that n_1 +n_2 = n+1. Let us go back to the 10-point example. In this case, there are two possible ways of factorizing into even-point amplitudes, presented in figure <ref> (center and right mesh). Let us start by looking at the right case, corresponding to a square – in this case, the lower-point amplitudes are:𝒜_10 (c_⋆≠ 0) →Γ(α^' X_1,6)Γ(α^' X_5,10)/Γ(α^'(X_1,6+X_5,10))×𝒜^down_6×𝒜^up_6.Let us focus on those factorizations that can also be accessed in the field-theory limit for gluons, by setting all the c's inside the square to zero except for one c_⋆. Asking for all the scaffolding variables to appear in the appropriately kinematically shifted lower amplitudes forces two columns of the square to vanish, and therefore the c's we can choose to turn on are exactly those inside the gluon mesh (see figure <ref>, right). Now note that while, for the “up” amplitude, taking the scaffolding residue in the original problem corresponds to taking three residues of this 6-point amplitude, thus exactly producing a 3-point gluon amplitude, the same is not true for the down amplitude. In the 6-point amplitude at the bottom, we are only taking residues in two variables, X_1,3 and X_3,5, but not in X_1,5.
Therefore we are only producing two gluons and the remaining pair of scalars is left untouched, and so after taking the scaffolding residue, the down amplitude is that of two gluons + two scalars. Of course, if we further take a residue in X_1,5, then we further turn the down amplitude into a 3pt gluon amplitude, and this becomes a pure gluon factorization. The last piece we need to understand is the prefactor, which in the field theory limit becomes:𝒜_10 (c_⋆≠ 0) →(1/X_1,6+1/X_5,10)×𝒜^down_6×𝒜^up_6,so taking the scaffolding residue in each term to turn the scalar amplitudes into gluon amplitudes will not affect the prefactor. This seems then to say that in the end we are left with an answer that has poles when X_o,e→ 0, which is incompatible with the fact that after scaffolding residue we should be left with only poles of X_o,o→ 0, corresponding to chords of the gluon problem. However, note that, crucially, since we are also setting the rectangles, c_i,6=c_i,9=0 with i=1,…,4, we have that X_1,6 = X_1,7 and X_5,10 = X_5,9, allowing us to conclude that after taking the scaffolding residue we get:𝒜_5^gluons (c_⋆≠ 0) →(1/X_1,7+1/X_5,9)×𝒜^down, gluons + ϕ_4×𝒜^up, gluons_3,where this is now an honest factorization of the 5-point gluon amplitude.Another possible zero causal diamond that leads to even lower point amplitudes is the one represented in the central mesh of figure <ref>, where the lower point amplitudes are now 4-point and 8-point. Once more, asking for the scaffolding variables to be present in the lower point problems, forces us to set c_i,4=c_i,9=0 for i=1,2, and thus the c_i,j we choose to turn on lives inside the effective gluon mesh. So the factorization pattern is now in the field theory limit:𝒜_10 (c_⋆≠ 0) →(1/X_1,4+1/X_3,10)×𝒜^down_4×𝒜^up_8. As opposed to the 6×6 factorization, in this case, by taking the scaffolding residue both the 4-point and the 8-point scalar amplitudes turn into pure gluon amplitudes, and similarly, the prefactors change appropriately so that we get an honest 5-point gluon factorization: 𝒜_5^gluon (c_⋆≠ 0) →(1/X_1,5+1/X_3,9)×𝒜^down, gluons_2×𝒜^up, gluons_4. Finally, let us look at the case of an odd-point factorization. At 10-point, one example of this is by considering the skinny rectangle (figure <ref>, left). In this case, the down amplitude is a 3-point amplitude which is trivial, and the up amplitude is 9-point. Asking for the scaffolding variables to be present in the lower point problems, forces c_1,9=0, and by turning on any of the remaining c's inside the skinny rectangle we obtain:𝒜_10 (c_⋆≠ 0) →∫_0^∞ y_1,3/y_1,3^2 y_1,3^α^' X_1,3(1+y_1,3)^-α^' c_⋆_𝒜_4^α^'δ =1×𝒜^up_9,so we see that in this case the 4-point pre-factor becomes the 4-point 2n-scalar amplitude, while for the lower point amplitude, it's just the 9-point amplitude with the extra u_e,e/u_o,o. So in the low energy limit, and further taking the scaffolding residue we get:𝒜_10 (c_⋆≠ 0) X_2,10/X_1,3×𝒜^up_9 ; 𝒜_5^gluon (c_⋆≠ 0) →X_2,10×𝒜^up,scaffolded_9. Similarly to what we saw in the NLSM, we predict that lower odd-point amplitudes generated in these factorizations could be some mixed amplitudes of gluons and scalars. Regardless this is certainly another honest factorization of the gluon 5-point amplitude. 
The other reason to expect the lower-point object to be a mixed theory amplitude is exactly because the skinny rectangle is closely connected to soft limits, exactly the case in which it was first observed the appearance of these extended theory amplitudes.§ OUTLOOK There are many obvious avenues for exploration following from the observations in this paper, so instead of attempting an exhaustive accounting of all of them, we will highlight a few that seem especially ripe for immediate development. Already at tree-level, it is interesting to ask whether the pattern of zeros suffices to completely determine the amplitude. For both the Tr(ϕ^3) theory and the NLSM, there is an obvious ansatz to make for the n-particle tree amplitude. Combining all the poles into a common denominator, we can assume that the numerator is a polynomial of correct units for each theory, and add the further crucial assumption that this numerator is at most linear in each X_ij variable; this last requirement enforces good behavior in the Regge limit. This still leaves an enormous number of free parameters in the ansatz, but we can further impose our hidden zeroes. Quite remarkably we have found that experimentally for Tr(ϕ^3) amplitudes with n=5,6,7 points and for NLSM amplitudes with n=6,8, imposing the zeros in this way does fully fix the amplitude! For n-point Tr(ϕ^3) amplitude, the number of parameters is n(n-3)/2 choose (n-3), and there are 51 and 6700 parameters for n=6,8 ansatz of NLSM amplitude respectively. Indeed imposing the simplest (cyclic class of) “skinny rectangle” zeros is enough to fully determine the amplitude in all these cases. It would be fascinating to prove this fact in general since this provides an entirely new way to uniquely determine scattering amplitudes, complementary to the traditional picture relying on poles and factorization. Our focus on the numerator structure of the amplitude dovetails with an approach to determining the canonical form of positive geometries <cit.>, first seen in the context of the amplituhedron for N=4 SYM,by looking at pattern of zeros for the numerator demanded by killing illegal singularities/enforcing the “dlog form” structure <cit.>. In this setting, the variety associated with the vanishing of the numerator is called the “adjoint” of the positive geometry. This set of zeros is more closely associated with familiar facts about singularities of amplitudes, while our new zeros reflect something entirely different, the collapsing of the geometry as kinematics are varied. It would be interesting to study the analog of these “collapsing geometry” zeros for the amplituhedron, to begin with even for the simplest case of the best-understood m=2 amplituhedron. It is interesting to phrase the existence of our pattern of zeros in an algebraic-geometric language. If we combine all the diagrams into a common denominator consisting of the product of all the X_i,j, the amplitude is A =N(X)/ D(X), where the numerator N(X) is a degree (n-2)(n-3)/2 polynomial in the X_i,j. The complete locus of zeros of the amplitude is then a complicated variety in the X space defined by N(X) = 0. From this perspective, the hidden zeros tell us something striking about this complicated variety: it contains a large number of linear subspaces of various dimensionalities. This is certainly not a generic property for high-dimensional varieties! It would be fascinating if this “numerator variety” had further special properties. 
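(As an aside on the counting quoted above: the ansatz sizes are easy to reproduce with a one-liner in plain Python, listing C(n(n-3)/2, n-3) for the first few n:

```python
from math import comb

for n in range(5, 9):
    print(n, comb(n*(n - 3)//2, n - 3))   # 10, 84, 1001, 15504
```

The quoted NLSM counts of 51 and 6700 come from the analogous ansatz for even-point amplitudes, which we do not attempt to reproduce here.)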
A speculative but intriguing thought is that this variety should be in some sense “maximally nice”. For instance one might hope that the variety is “determinantal”, with N(X) expressible as the determinant of a predictable matrix, in a way that would make all our hidden zeros manifest. In this paper, we have focused on zeroes and factorization at tree level, where at least for the Tr(ϕ^3) theory the amplitude is given by the canonical form for the associahedron. We now know that the integrand for the Tr(ϕ^3) theory at 1-loop is also given by the canonical form of a cousin of the associahedron, also naturally presented as a Minkowski sum of simplices <cit.>. Thus we expect a similar locus in kinematic space where the one-loop integrand has zero/factorization patterns, which would be interesting to flesh out. Of course this locus will in general involve sending kinematic variables involving the loop momenta to zero, and so don't immediately imply zeros for integrated amplitudes. It would be interesting to see if any trace of these zeros survives post-loop integration. Beyond one loop and to all orders in the topological expansion, there are various notions of a loop integrand naturally associated with surfaces, with the simplest “infinite integrand” reflecting the action of the mapping class group and the concomitant infinite repetition of Feynman diagrams/triangulations of the surface. This infinite integrand is also the canonical form of associated polytopes–“surfacehedra”–having infinitely many facets with a fractal structure <cit.>. Surfacehedra are also Minkowski sums, so at least the infinite integrand should also have patterns of zeros and factorization, though the implications of this fact both for naturally finite integrands obtained by truncating the surfacehedra or modding out by the mapping class group must be properly understood. The phenomenon of factorization near zeros is clearly striking, and it would be fascinating to understand if and how it generalizes. We have seen an especially simple geometric understanding of this factorization, by understanding how the associahedron degenerates as Minkowski summands are shut off, so that at the penultimate step before it collapses entirely to lower dimensions, it simplifies drastically to what we described in the introduction as the “sandwich”, with an interval separating two opposite facets. It is natural to wonder whether there are other predictable patterns for the degenerating associahedron as we turn on further c's, that might also have interpretations in terms of factorization. We are aware of one such pattern: if instead of turning on a single c_i_*,j_* in our maximal causal diamond, we turn on an entire strip i.e. c_i_*, k for all k inside the diamond, then the amplitude also factorizes. This is an interpretation in our mesh picture of the “3-splits” for Tr(ϕ^3) amplitudes discovered in <cit.>. It would be interesting to try and interpret this in the language of the associahedron and examine how it might extend to full string amplitudes. More generally it would be interesting to classify the general set of factorization properties for string amplitudes associated with shutting off various patterns of c's. In another vein, it is interesting that starting purely from the NLSM, we are naturally led to discover the mixed π/ϕ amplitudes from the near-zero factorizations. 
The particular mixed amplitudes we discover in this way are clearly only a tiny subset of all possible mixed amplitudes; for instance, they all contain only three ϕ's. In <cit.> we will show how general NLSM + ϕ^3 amplitudes can be obtained by a simple set of kinematic shifts of the Tr(ϕ^3) amplitudes. Of course, the mixed amplitudes do not have all of our zeros, and so the shifts are not the c-preserving shifts featured in this paper. Nonetheless, this general phenomenon of kinematic shifts generating non-trivial theories from simple ones is a fascinating one, and it would be interesting to see how far it extends.

One especially simple example is worth mentioning as an interesting contrast to the shift we have highlighted in this paper. Suppose we shift the g Tr(ϕ^3) amplitudes by X_e,e → X_e,e + δ, X_o,o → X_o,o + δ, and X_e,o → X_e,o, i.e. with no relative ±δ sign difference between the even-even and odd-odd shifts. This still removes all the massless poles X_e,e, X_o,o and leaves us only with poles associated with even-particle scattering amplitudes. It is trivial to see that the low-energy amplitudes for X ≪ δ with this deformation are nothing but those of the λ Tr(ϕ^4) theory with quartic coupling λ = g^2/δ, augmented with a further tower of higher-dimension operators. It is striking that this seemingly minor but unusual change, the sign difference between even-even and odd-odd shifts, makes all the difference in the world in going from generating the relatively boring Tr(ϕ^4) theory to the much richer and more intricate amplitudes for pions and gluons!

As another amusing example, suppose we take the g Tr(ϕ^3) theory but this time shift all 1/X → 1/X + κ. Clearly, the new functions we obtain in this way will still factorize onto themselves on the massless poles, and so still define a consistent set of amplitudes. But for which theory? We can actually determine a Lagrangian for the amplitudes of this theory in a very simple way. At any n, there is the part of the amplitude with no poles at all: purely a contact interaction. At n points, this is given by simply replacing every propagator by κ; since there are C_n-3 (the Catalan number) diagrams at n points, this contact interaction is C_n-3 κ^n-3 g^n-2. Hence we can identify an interesting non-linear Lagrangian, which we can call the "Catalan Lagrangian", that gives rise to the amplitudes associated with this shift: L^Catalan = ∑_n=3^∞ C_n-3 g^n-2 κ^n-3 Tr(ϕ^n) = (1/(2κ)) Tr[ϕ^2 (1 - √(1 - 4gκϕ))]. This is again an interesting cousin of the more surprising linear shift in our paper, which starts from the polynomial Tr(ϕ^3) theory and generates the amplitudes for the non-polynomial Lagrangian describing pion scattering. It would be interesting to map out the space of these possible deformations more systematically.
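Incidentally, the resummation into the Catalan Lagrangian rests on the standard generating function ∑_m C_m x^m = (1 - √(1 - 4x))/(2x). A short symbolic check of the contact-term coefficients, assuming the sympy library, is sketched below; it is a consistency check of the series coefficients only, not a derivation of the full Lagrangian.

```python
import sympy as sp

x = sp.symbols('x')
gen = (1 - sp.sqrt(1 - 4 * x)) / (2 * x)   # Catalan generating function

# Check that the Taylor coefficients of gen are the Catalan numbers
# C_m = binomial(2m, m)/(m+1), which multiply the contact interactions
# C_{n-3} g^{n-2} kappa^{n-3} Tr(phi^n) of the Catalan Lagrangian.
series = sp.series(gen, x, 0, 7).removeO()
for m in range(6):
    assert series.coeff(x, m) == sp.catalan(m)
    print(m, sp.catalan(m))   # 1, 1, 2, 5, 14, 42
```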
Finally, it is clearly interesting to study the widest class of theories connected by shared hidden zeros and factorization. The most obvious place to begin is to simply consider our shifted theories with general values for δ. As we have explained, for generic fractional δ the low-energy amplitudes are always those of the NLSM. But for α'δ taking integer values, something more interesting happens. As we have seen, for α'δ = 1 we have a theory of massless gluons interacting with the "scaffolding" of the external scalars. It is easy to see that for e.g. α'δ = 2, while the leading interactions at low energies are those of gluons coupled to the scaffolding scalars, there are also interactions that must be interpreted as arising from a theory of massless colored particles of spin 2. In general, for α'δ = J, at least kinematically we are describing gluons coupled to a tower of massless colored particles of spin up to J. Of course there are famous theorems <cit.> about the impossibility of consistent theories of massless colored particles of high spin, so at first blush these theories for J>1 should be discredited on physical grounds. But the way they naturally connect to Tr(ϕ^3), the NLSM, and Yang-Mills, simply via further continuing the δ deformation, suggests these theories may somehow have a purpose in life, perhaps especially in the limit as δ→∞, where an infinite tower of higher-spin colored particles becomes massless.

It is our pleasure to thank Alfredo Guevara, Daniel Longenecker, Giulio Salvatori, Yichao Tang, Jaroslav Trnka, Ellis Ye Yuan, and Yaoqi Zhang for inspiring discussions. The work of N.A.H. is supported by the DOE (Grant No. DE-SC0009988) and by the Simons Collaboration on Celestial Holography; further support was made possible by the Carl B. Feinberg cross-disciplinary program in innovation at the IAS. The work of C.F. is supported by FCT/Portugal (Grant No. 2023.01221.BD). The work of S.H. is supported in part by the National Natural Science Foundation of China under Grant Nos. 11935013, 12047503, 12247103, and 12225510.
| http://arxiv.org/abs/2312.16282v1 | {
"authors": [
"Nima Arkani-Hamed",
"Qu Cao",
"Jin Dong",
"Carolina Figueiredo",
"Song He"
],
"categories": [
"hep-th",
"hep-ph"
],
"primary_category": "hep-th",
"published": "20231226190000",
"title": "Hidden zeros for particle/string amplitudes and the unity of colored scalars, pions and gluons"
} |
On the Granular Representation of Fuzzy Quantifier-Based Fuzzy Rough Sets
January 14, 2024
=========================================================================

Star-forming galaxies (SFGs) adhere to a surprisingly tight scaling relation of dust attenuation parameterized by the infrared excess (IRX ≡ L_IR/L_UV), being jointly determined by the star formation rate (SFR), galaxy size (R_e), metallicity (Z/Z_⊙) and axial ratio (b/a). We examine how these galaxy parameters determine the effective dust attenuation and give rise to the universal IRX relation, utilizing a simple two-component star-dust geometry model in which dust in the dense and diffuse interstellar medium (ISM) follows exponential mass density profiles, connected with but not necessarily identical to the stellar mass profiles. Meanwhile, empirical relations are adopted to link galaxy properties, including the gas–star formation relation, the dust-to-stellar size relation, as well as the dust-to-gas ratio versus metallicity relation. By fitting a large sample of local SFGs with the model, we obtain the best-fitting model parameters as a function of metallicity, showing that the two-component geometry model is able to successfully reproduce the dependence of IRX on SFR, R_e, b/a at given Z/Z_⊙, as well as the dependence of the power-law indices on metallicity. Moreover, we also retrieve constraints on the model geometry parameters, including the optical depth of birth clouds (BCs), the BC-to-total dust mass fraction, the BC covering factor of UV-emitting stars, and the star-to-total dust disc radius ratio, which all evolve with galaxy metallicity. Finally, a consistent picture of how the star-dust geometry in SFGs evolves with galaxy metallicity is discussed.

dust, extinction – Galaxies: evolution – Galaxies: ISM – Galaxies: star formation

§ INTRODUCTION

Dust, as a crucial component of the interstellar medium (ISM), plays a significant role in driving the cycles of baryons and energy. Firstly, it acts as a channel for converting radiation pressure into mechanical energy and outflows <cit.>. Secondly, dust serves as an important coolant for star and planet formation, and an interface for the formation of molecules <cit.>. Additionally, dust has a profound influence on the spectral energy distributions (SEDs) of galaxies by absorbing the stellar ultraviolet (UV) to optical radiation and thermally re-emitting it at infrared (IR) wavelengths. This alteration of the SED significantly affects galaxy observables <cit.>.

Dust attenuation, defined as the effective sight-line absorption of light by dust in a galaxy (also referred to as `obscuration'),[In contrast, `extinction' refers to the absorption and scattering of the light from a point source by dust along the line of sight.] depends on both the dust content and the geometry between dust and stars. It is closely intertwined with star formation, chemical enrichment and structural growth of the galaxy. Understanding dust attenuation and the regulation by dust of physical processes across cosmic time is crucial not only for improving measurements of intrinsic galaxy properties, but also for unravelling the connections between dust, gas, metals, and stars in accordance with the structural evolution of galaxies <cit.>.

For star-forming galaxies (SFGs), various fundamental scaling relations involving gas, metals, and stars have been established up to a redshift of z ∼ 10, revealing their rapid evolution across this redshift range.
One of these scaling relations is the star formation main sequence, which shows a nearly linear increase in star formation rate (SFR) with stellar mass (M_∗) at a given redshift, with the SFR at fixed mass globally decreasing as redshift decreases <cit.>. The density of star formation in SFGs is regulated by the density of cold gas, following the Kennicutt-Schmidt (K-S) Law (Σ_gas–Σ_SFR) <cit.>. Another important scaling relation is the stellar mass–metallicity (M_∗–Z) relation, which reveals an increase in gas-phase metallicity with stellar mass among SFGs. However, this relation also shows a rapid decline in metallicity at fixed M_∗ as redshift increases <cit.>. In addition, SFGs are predominantly characterized by disc morphologies, and their sizes, quantified by the half-light radius (R_e), typically increase with stellar mass. The stellar mass–size (M_∗–R_e) relation of SFGs demonstrates a rapid size increase over time for SFGs of a given M_∗ <cit.>. These scaling relations provide a framework for understanding the evolution of, and interplay between, gas, metals, stars, and the structural properties of SFGs from high redshifts to the present day.

On the other hand, dust attenuation serves as an indicator of the effective dust columns in SFGs, which are closely related to gas density and metallicity. Observationally, dust attenuation in SFGs is often quantified using the IR to UV luminosity ratio, known as the IR excess (IRX ≡ L_IR/L_UV). Generally, SFGs with higher star formation rates (SFR) tend to exhibit higher infrared luminosities and dust temperatures <cit.>, resulting in increased dust attenuation <cit.>. The dust attenuation in SFGs has been found to positively correlate with stellar mass, SFR, surface density of SFR, UV continuum slope, and gas-phase metallicity <cit.>. However, investigations of local SFGs have revealed that the dependence of dust attenuation on SFR (as well as on galaxy inclination angle) weakens with decreasing metallicity <cit.>. Similarly, <cit.> found distinct scaling relations of dust attenuation with SFR at different stellar masses: in the high-mass regime (M_∗>10^10.2 M_⊙), dust attenuation increases with SFR, while in the low-mass regime (M_∗<10^10.2 M_⊙), it decreases with SFR. <cit.> found that the relation between galaxy stellar mass and attenuation evolves moderately at M_∗>10^10.5 M_⊙ over 0<z<2.5, but displays little or no evolution in the low-mass regime over the examined redshift range. Several studies using different indicators of attenuation have reported this puzzling `no evolution' phenomenon in the stellar mass–dust attenuation relation for SFGs <cit.>. These correlations between dust attenuation and galaxy properties in SFGs demonstrate that dust attenuation is a complex process with multiple factors at play.

In the study by <cit.>, we utilized a large sample of local SFGs to investigate the correlation between IRX and other galaxy parameters. By separating these parameters and determining independent correlations, we obtained several key findings: (1) Our analysis contradicted the previous understanding that galaxy stellar mass is the primary factor influencing IRX. We discovered that dust obscuration is not correlated with galaxy stellar mass, once controlling for other, more fundamental drivers. (2) We found that dust attenuation, as indicated by IRX, is determined by a combination of various parameters including SFR (or the indicator of infrared luminosity, L_IR), half-light radius (R_e), gas-phase metallicity (Z), and galaxy inclination (axial ratio b/a).
The relationship between these parameters and IRX can be described by a power-law function, IRX = 10^α L_IR^β R_e^-γ (b/a)^-δ, with the power-law slopes β, γ, and δ decreasing (i.e. getting closer to zero) as the metallicity decreases. Furthermore, we verified that the empirical relation of dust attenuation obtained from local SFGs also holds out to z ∼ 2, demonstrating its universality. This universal IRX relation provides insights into the evolution of galaxies in terms of star formation, chemical enrichment, and morphological structure. However, the mechanism underlying how these three physical processes interact to make galaxies follow this relation remains to be explored.

The determination of IRX is influenced by the dust content, star formation activity, and the geometry of dust and stars within a galaxy. It is now evident that the geometry of dust and stars is crucial for understanding the differences between various attenuation measurements and for quantifying the relative contributions that determine the overall dust attenuation of a galaxy. In the past three decades, much effort has been devoted to studying galaxy dust attenuation using star-dust geometry models <cit.>. Two commonly considered geometries are a uniform foreground dust screen and a homogeneous mixture of dust and emitting sources. However, in most cases, these simple geometries fail to adequately describe the observed attenuation patterns in real galaxies. <cit.> investigated the spatially resolved Balmer decrement distribution and its relation to other local and global properties, and concluded that neither the foreground screen nor the homogeneous mixture geometry can accurately represent the observed attenuation patterns <cit.>. In addition, <cit.> pointed out that while the simple homogeneous geometry can reproduce the IRX relations at Solar metallicity, it fails to reproduce the flat slopes observed at lower metallicities. This suggests that there is no universal star-dust geometry that applies to all types of galaxies. Instead, a more complex and flexible star-dust geometry model is required to better match the observations.

One approach to achieving this is by allowing the relative sizes of the dust and stellar discs to vary freely in the geometry model <cit.>. If the stars are more centrally concentrated than the dust disc, they will suffer an enhanced attenuation, since the optical depth of the dust disc decreases with galactocentric radius. Another way to address the complexity of star-dust geometry is by considering a `two-component' dust geometry, where the ISM of galaxies consists of dense birth clouds (BCs; associated with short-lived stars) and the surrounding diffuse ISM <cit.>. This two-component geometry successfully explains the difference in attenuation between nebulae (birth clouds) and old stars in galaxies <cit.>. The BCs in a galaxy follow certain distributions of ages and initial masses. If these distributions are similar for each galaxy, it would be expected that the average attenuation of BCs does not depend on either SFR or inclination. On the other hand, IRX measures the attenuation of the total UV light from young and intermediate-age populations. The former reside in the BCs while the latter are no longer surrounded by birth clouds due to stellar feedback.
The steepness of the IRX correlations might then be influenced by the relative importance of dust attenuation caused by BC and diffuse dust.

In this work, we aim to build a new geometry model based on the two-component prescription with flexible model parameters, to reproduce the observational trends of the IRX correlations highlighted in <cit.>. We focus on the scaling relations of dust attenuation, in particular on the systematic trends of the steepness of the IRX scaling relations with galaxy metallicity. In Section <ref>, we outline the details of our model as well as the impact of model parameters on dust attenuation. In Section <ref>, we describe how to fit the observed data with our geometry model. We show the best-fitting results in Section <ref> and discuss the implications in Section <ref>. Finally, we summarize our main results in Section <ref>.

§ THE STAR-DUST GEOMETRY MODEL

§.§ The distribution of dust and stars, and dust opacity

While star formation does occur in various types of galaxies, including irregular galaxies and elliptical galaxies, the highest rates of ongoing star formation are typically observed in the most abundant disc galaxies, which provide better conditions for the birth of stars. For example, ellipticals were found to contribute only 10-13% of the total ongoing star-formation budget <cit.>. We thus focus on disc SFGs and address how dust attenuation is affected by different quantities/processes. We aim to examine the scaling relations of dust attenuation among galaxy populations. It is reasonable to assume that stars and dust are well mixed in the ISM, and that the size distribution of dust grains and their chemical compositions are identical everywhere across a galaxy.

The ISM in a galaxy can be briefly described with two components: the dense birth clouds (BCs, or star-forming regions) with short-lived young stars, as well as more extended large-scale distributions of diffuse ISM, dust and old stars <cit.>. The large-scale distribution of diffuse dust plays a major role in mediating the propagation of photons in galaxy discs and, at least in nearby galaxies, dominates the bolometric output of dust emission. To simplify, the total dust optical depth can be considered as the sum of the optical depth caused by the diffuse dust (diff) plus that of the dense BC dust (dense): τ = τ^diff + τ^dense. This decomposition relies on the assumption that the BCs and young stars share the same geometric distribution; no overlapping effects between BCs along the line of sight are taken into account in our model.

A double exponential form is commonly used in astrophysics to capture the observed distribution of matter in galactic discs <cit.>. For the diffuse components, the distributions of diffuse dust ρ_d and stars ρ_⋆, both residing in a disc configuration, can each be described by a double exponential: ρ_d(r,h) = ρ_d(0,0) exp(-r/R_d - |h|/H_d), ρ_⋆(r,h) = ρ_⋆(0,0) exp(-r/R_⋆ - |h|/H_⋆), where r and h are the radius and height in cylindrical coordinates, while R and H are the scale-length and scale-height of the dust or stellar disc, respectively. We use the subscript ⋆ to represent the stellar disc and the subscript d to represent the diffuse dust disc hereafter. We note that the subscript d always refers to the diffuse dust disc; to avoid confusion, we add the superscript `tot' when denoting the total dust disc (including also BCs). Then the integrated diffuse dust mass is given as M_d = ∫_0^∞ ∫_-∞^∞ ρ_d(r,h) 2π r dr dh = ∫_0^∞ 2π r dr ∫_-∞^∞ ρ_d(0,0) exp(-r/R_d - |h|/H_d) dh = 4π ρ_d(0,0) R_d^2 H_d.
Similarly, the integrated intrinsic luminosity of the stellar emission is L_⋆,int = ∫_0^∞ ∫_-∞^∞ ρ_⋆(r,h) 2π r dr dh = ∫_0^∞ 2π r dr ∫_-∞^∞ ρ_⋆(0,0) exp(-r/R_⋆ - |h|/H_⋆) dh = 4π ρ_⋆(0,0) R_⋆^2 H_⋆. The observed luminosity of the stellar emission, after dust attenuation, is L_⋆,obs = ∫_0^∞ 2π r dr ∫_-∞^∞ ρ_⋆(0,0) exp[-r/R_⋆ - |h|/H_⋆ - τ_rh] dh, where τ_rh is the diffuse optical depth along the line of sight towards a given (r, h). For a galaxy observed under a face-on orientation, τ_rh = ∫_h^∞ κ ρ_d(0,0) exp(-r/R_d - |h|/H_d) dh = (κΣ_d/4) e^-r/R_d (2 - e^h/H_d) if h<0, and (κΣ_d/4) e^-r/R_d e^-h/H_d if h>0, where Σ_d is the diffuse dust surface density defined as M_d/(π R_d^2), and κ is the dust mass extinction coefficient that converts the dust surface/column density into a dust optical depth. κ is wavelength dependent because different wavelengths of light have different absorption and scattering cross sections for the same dust grain; e.g. UV light has a higher κ than optical. We omit the subscript λ to avoid cluttering the notation hereafter.

By definition, the effective optical depth caused by the diffuse dust is τ^diff = -ln(L_⋆,obs/L_⋆,int) = -ln ∫_0^∞ (r dr/2R_⋆^2) ∫_-∞^∞ exp[-r/R_⋆ - |h|/H_⋆ - τ_rh] dh/H_⋆, with τ_rh as given above. We then make the following variable substitutions: r' = r/R_⋆, h' = h/H_⋆, R̂ = R_⋆/R_d, Ĥ = H_⋆/H_d. Equation <ref> can then be rewritten as τ^diff = -ln ∫_0^∞ (1/2) r' dr' ∫_-∞^∞ exp[-r' - |h'| - τ'_rh] dh', where τ'_rh = (κΣ_d/4) e^-R̂ r' (2 - e^Ĥ h') for h'<0, and (κΣ_d/4) e^-R̂ r' e^-Ĥ h' for h'>0.

The diffuse dust attenuation τ^diff in our geometry model is thus determined only by the diffuse dust surface density Σ_d, the star-to-diffuse dust scale-length ratio R̂, the star-to-diffuse dust scale-height ratio Ĥ, and the dust mass extinction coefficient κ. We notice that the thicknesses of the dust and stellar discs themselves (i.e., H_d/R_d and H_⋆/R_⋆) have dropped out. The above equations allow translating a given diffuse dust mass and geometry into the effective optical depth seen by a face-on observer. Under an inclined viewing angle, sightlines to individual disc regions will cross dust located at different radii, but as we will demonstrate in Appendix <ref>, the associated net attenuation of the galaxy light can still be approximated adequately using Equations <ref>-<ref>, simply by boosting Σ_d by a factor 1/(b/a).
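The expression for τ^diff is a two-dimensional integral with a piecewise line-of-sight opacity and has no closed form, but it is straightforward to evaluate numerically. Below is a minimal sketch of one possible implementation using numpy/scipy; the function and variable names are ours, and truncating the semi-infinite integration ranges at 50 scale lengths/heights is an assumption that is harmless given the exponential profiles.

```python
import numpy as np
from scipy.integrate import dblquad

def tau_diffuse(kappa_sigma_d: float, R_hat: float, H_hat: float) -> float:
    """Effective face-on diffuse optical depth; kappa_sigma_d = kappa * Sigma_d.
    For an inclined disc, boost Sigma_d by 1/(b/a) as described in the text."""
    def integrand(h, r):
        # Piecewise line-of-sight opacity tau'_rh above the point (r', h')
        pref = 0.25 * kappa_sigma_d * np.exp(-R_hat * r)
        tau_rh = pref * (2.0 - np.exp(H_hat * h)) if h < 0 else pref * np.exp(-H_hat * h)
        # Emissivity-weighted escape-fraction integrand of the tau^diff equation
        return 0.5 * r * np.exp(-r - abs(h) - tau_rh)
    flux, _ = dblquad(integrand, 0.0, 50.0, lambda r: -50.0, lambda r: 50.0)
    return -np.log(flux)

# Sanity check: with no dust the integral is normalised to unity, i.e. tau = 0
print(tau_diffuse(0.0, 0.5, 0.5))   # ~0
print(tau_diffuse(2.0, 0.5, 0.5))   # finite-opacity case
```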
For the dense (or BC) component, we first assume that the BCs are spherical and have similar sizes; we return to this assumption in Section <ref>. Additionally, for two BCs of similar size, the more metal-rich one, having a higher dust-to-gas ratio, is expected to have a higher BC optical depth (τ_bc). The galaxy dust attenuation probed by IRX measures the attenuation of the total UV light from young and intermediate-age populations (up to a few × 10^8 yr). The young stars are surrounded by BCs, while the intermediate-age stars no longer live in BCs (due to feedback) but still contribute to the UV emission <cit.>. Given this evolutionary picture of BC dispersal, it is natural to introduce a BC covering factor C_bc, defined as the total fraction of UV emission that is subject to BC attenuation. We note that our covering factor C_bc is not the same as the clumpiness factor in the literature <cit.>; those works assume the UV light is totally obscured in BCs (i.e., τ_bc ≫ 1), whereas we do not. The optical depth caused by the dense BCs follows τ^dense = -ln[(I_⋆,int(1-C_bc) + C_bc × I_⋆,int exp(-τ_bc))/I_⋆,int] = -ln[1 - C_bc + C_bc × exp(-τ_bc)], where I_⋆,int denotes the UV light intrinsically emitted by the stellar population.

Another important parameter in our two-component model is the BC dust mass fraction F_bc, defined as the BC-to-total dust mass fraction. It controls the proportion of each component in the two-component model, and hence the evolution of the dust geometry. For example, at F_bc=1 and F_bc=0, it reverts to the pure BC-dominated and pure diffuse dust-dominated geometries, respectively. Naturally, F_bc also impacts the relative contribution of dust attenuation caused by BC and diffuse dust. Using the BC mass fraction F_bc, we are able to link the diffuse dust surface density Σ_d in Equation <ref> with the observationally more relevant total dust surface density Σ_d^tot. The latter is defined as Σ_d^tot ≡ M_d^tot/[π (R_d^tot)^2], where M_d^tot is the total dust mass and R_d^tot is the scale-length of the total dust disc. Then we have Σ_d = (1-F_bc) M_d^tot/(π R_d^2) = (1-F_bc) M_d^tot/[π (R_d^tot)^2] × (R_d^tot/R_d)^2 = (1-F_bc) Σ_d^tot × (R_d^tot/R_d)^2. In Appendix <ref>, we explain how this size ratio of the total to diffuse dust disc can be written as a function of R̂ and F_bc. The total dust optical depth can then be obtained with τ^tot = τ^diff(κΣ_d^tot, F_bc, R̂, Ĥ) + τ^dense(τ_bc, C_bc).

In summary, the effective dust optical depth in our two-component model depends on the dust extinction coefficient κ_λ, the total dust surface density Σ_d^tot, the BC-to-total dust mass fraction F_bc, the star-to-diffuse dust disc scale-length ratio R̂ and scale-height ratio Ĥ, the BC optical depth τ_bc, and the BC covering fraction C_bc. All other parameters have vanished during the derivation of the above formulae.

Although the derivation of the above equations is based on young/intermediate-age stellar discs, it also holds for stellar discs of different ages (or a nebular disc) with appropriate changes to the model parameters. The old stellar disc emits at longer wavelengths than young stars and therefore sees a smaller dust opacity (smaller κ and τ_bc). On the other hand, old stars are not expected to be surrounded by BCs, so their BC covering fraction C_bc should be set to 0. Conversely, the nebular emission traces <10^7 yr stellar populations, younger than those traced by the UV (or IRX), and is expected to have a higher C_bc. While in this paper we focus on the IRX diagnostic of attenuation, the framework outlined in this section will thus also be applicable to the interpretation of, for example, Balmer decrement measurements, and how they contrast with IRX <cit.>.
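The pieces of this section can be collected into a compact sketch of the total optical depth (reusing tau_diffuse from the earlier snippet; the function names and signatures are ours, and size_ratio_tot_diff stands for the total-to-diffuse dust disc scale-length ratio R_d^tot/R_d):

```python
import numpy as np

def tau_dense(tau_bc: float, C_bc: float) -> float:
    # A fraction C_bc of the intrinsic UV light is dimmed by e^{-tau_bc};
    # the remaining (1 - C_bc) escapes the birth clouds unattenuated.
    return -np.log(1.0 - C_bc + C_bc * np.exp(-tau_bc))

def tau_total(kappa_sigma_tot, F_bc, R_hat, H_hat, tau_bc, C_bc, size_ratio_tot_diff):
    # Diffuse dust carries a fraction (1 - F_bc) of the total dust mass,
    # rescaled onto the diffuse disc area via the size ratio squared.
    kappa_sigma_diff = (1.0 - F_bc) * kappa_sigma_tot * size_ratio_tot_diff**2
    return tau_diffuse(kappa_sigma_diff, R_hat, H_hat) + tau_dense(tau_bc, C_bc)
```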
§.§ Dependence of IRX on model parameters

The equations derived in Section <ref> give the optical depth τ^tot of a galaxy, while the attenuation indicator of infrared excess (IRX ≡ L_IR/L_UV), defined as the ratio between infrared and UV luminosity, is used in our observational analysis. The intrinsic UV emission is dominated by the young (<10 Myr) and intermediate-age (10–500 Myr) stellar populations. For a constant star formation history (SFH), the young stellar populations contribute 56% of the total UV luminosity (1216 Å–3000 Å), a further 19% comes from stars at ages of 10-30 Myr, and the remaining 25% stems mainly from stellar populations of ages 30–500 Myr. Note that dust heating by old stars can contribute an increasing fraction to the far-IR luminosity (i.e., cold dust emission) at decreasing specific SFR, but the fraction is marginal for normal SFGs <cit.>. Moreover, the total IR luminosity of the local SFGs used in our analysis is derived from the mid-IR luminosity (see Section <ref>), in which the contribution from dust heated by old stars is negligible. We thus assign the total IR luminosity to the dust-reprocessed radiation of young and intermediate-age stellar populations, and take IRX to reflect their dust attenuation.

A conversion between IRX and the FUV optical depth should be made. Following <cit.>, the IRX can be obtained by IRX = (e^τ_FUV - 1)/0.46, where τ_FUV is the dust optical depth in the FUV according to Equation <ref>. This conversion is based on the energy balance principle, in the sense that the absorbed UV-to-optical radiation is totally transformed into the IR through dust thermal emission. Throughout our analysis, we adopt a fixed κ_V of 0.62 × 10^-5 M_⊙^-1 kpc^2 following <cit.> (see Section <ref>). We further adopt the <cit.> attenuation curve, which has τ_FUV/τ_V = 2.5. Here the dust optical depths are associated with the projected dust surface density and are thus dependent on viewing angle. Since our IRX estimates are based on the computed τ_FUV, they too will tightly correlate with the projected dust column density Σ_dust,proj (see also Appendix <ref>). We caution that in detail, the application of the energy balance principle is only warranted when comparing the UV-to-optical emission versus the IR emission as integrated over 4π steradian. For individual viewing angles, deviations from energy balance may occur, as the IR radiation tends to be emitted isotropically whereas the emerging UV light is not. The latter effect is not captured in our modelling. For disc SFGs, the average attenuation over 4π steradian equals that of an inclined SFG with projected axial ratio b/a = 0.6 (see <cit.>). Accounting for the fact that the IR emission is anticipated to emerge isotropically would modestly increase (decrease) IRX for galaxies with higher (lower) b/a than 0.6, compared to the estimates obtained with the approach adopted in this work. Variations in the effective attenuation law, which in practice may arise from star-dust geometry and viewing-angle effects, are also not captured. For instance, for the fixed κ_V adopted in this paper, a greyer (i.e., flatter) attenuation law would give rise to a lower τ_FUV and thus a reduced IRX. We return to this point in Section <ref>.
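In code, this conversion is a one-liner; a minimal sketch (constant names ours, values as quoted in the text):

```python
import numpy as np

TAU_FUV_OVER_TAU_V = 2.5   # attenuation-curve ratio adopted above
KAPPA_V = 0.62e-5          # V-band dust mass extinction coefficient [Msun^-1 kpc^2]

def irx_from_tau_fuv(tau_fuv):
    """Energy-balance conversion IRX = (e^{tau_FUV} - 1)/0.46."""
    return (np.exp(tau_fuv) - 1.0) / 0.46

# Note: the effective FUV depth should be computed with FUV *input* opacities
# (kappa_FUV = 2.5 * KAPPA_V, tau_bc,FUV = 2.5 * tau_bc,V); because effective
# depths combine non-linearly, one cannot simply rescale tau_V^eff by 2.5.
```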
Fig. <ref> shows the model-predicted IRX as a function of Σ_dust for a set of model parameters. If τ_bc = 0, i.e., in the case of diffuse dust only, IRX increases monotonically with dust surface density. This is consistent with the prediction of a homogeneous mixture star-dust geometry given by <cit.>, although the quantitative form of the relation shows modest differences due to the non-uniform density distribution of dust and stars (see also figure 1 in <cit.>). Once BC attenuation is considered (where for illustrative purposes we adopt a V-band BC optical depth of τ_bc,V = 0.5), the Σ_dust–IRX relation flattens at the low-Σ_dust end. This is because the BC attenuation is assumed to be constant and independent of the opacity of the diffuse dust. From Panel (b) we can see that the attenuation caused by cloud dust depends not only on the adopted BC opacity but also on the BC covering fraction C_bc. The latter controls the fraction of UV emission that arises from within BCs: the more UV-emitting stars are located in BCs, the stronger the dust attenuation. The attenuation caused by diffuse dust is determined by F_bc, R̂, and Ĥ, but is independent of C_bc at a given total dust surface density Σ_dust. F_bc controls the remaining fraction of diffuse dust: the higher F_bc, the less diffuse dust and hence the lower the overall dust attenuation. R̂ and Ĥ control the relative spatial distribution between the UV-emitting stars and dust in the galaxy. With decreasing R̂ and Ĥ, the UV emission becomes more centrally concentrated within the dust disc, both radially and vertically, and for an exponential dust disc the central region always has a higher dust column density. We note that R̂ and Ĥ have a very similar impact on the resulting Σ_dust–IRX relation, and consequently might not be well distinguishable in the subsequent model fitting; we will impose R̂ = Ĥ in our model, as discussed in Section <ref>.

In conclusion, in the BC-dominated regime, IRX increases with τ_bc and C_bc, and the Σ_dust–IRX relation is flat; in the diffuse dust-dominated regime, IRX decreases with F_bc, R̂ and Ĥ, and the Σ_dust–IRX relation is steep. The slope of the Σ_dust–IRX relation is thus not constant over the full range of dust surface densities, but is controlled by the relative importance of BC versus diffuse dust attenuation. We notice that the (projected) dust surface density is expected to correlate with the metallicity, SFR, R_e and b/a <cit.>. This hints that the systematic trends of the power-law slopes of IRX as a function of L_IR or SFR, R_e and b/a with gas-phase metallicity, as shown in <cit.>, may arise from systematic changes in the relative importance of BC attenuation.

§ FITTING THE LOCAL SFGS WITH THE GEOMETRY MODEL

§.§ The local SFGs and universal IRX relation

We carry out our investigation of dust attenuation using the sample and data from <cit.>, which consists of 32,354 local SFGs. More details about the sample selection and data extraction can be found in <cit.>. We note that in <cit.> the IR luminosities (8–1000 μm) were estimated from the single WISE W4 band. However, we find that the total L_IR estimated from the combination of the W3 and W4 bands has a smaller dispersion than that from the single W4 band (see Appendix <ref>). In this work, we therefore update the total IR luminosity by combining the WISE W3 and W4 bands. The IR-to-UV luminosity ratio is referred to as IRX, and the SFR is also estimated from the combination of IR and UV luminosities following <cit.>.

The original universal IRX relation states that IRX is determined by a combination of the infrared luminosity L_IR, half-light radius (R_e), gas-phase metallicity (Z) and galaxy inclination (axial ratio b/a). In this work, we replace the observational parameter of IR luminosity with the physically more relevant parameter of star formation rate (SFR); the two parameters are tightly correlated with each other (see figure 2 of <cit.>). The best-fitting relation of IRX as a function of Z, SFR, R_e and b/a obeys IRX = 10^α (SFR/M_⊙ yr^-1)^β (R_e/kpc)^-γ (b/a)^-δ, where α = 1.46 log(Z/Z_⊙) + 0.92, β = 0.67 log(Z/Z_⊙) + 0.55, γ = 0.95 log(Z/Z_⊙) + 0.80, δ = 1.52 log(Z/Z_⊙) + 1.00. Here, log(Z/Z_⊙) = 12 + log(O/H) − 8.69, i.e. metallicity is expressed via the gas-phase oxygen abundance, with Z_⊙ referring to Solar metallicity. The power-law slopes decrease with decreasing metallicity. Moreover, galaxies at z ∼ 2 also follow this empirical relation for dust obscuration, supporting its universality.
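For reference, the empirical relation is straightforward to evaluate; a minimal sketch (function name ours):

```python
def irx_universal(sfr, r_e_kpc, b_over_a, log_z_zsun):
    """Empirical universal IRX relation; log_z_zsun = 12 + log(O/H) - 8.69."""
    alpha = 1.46 * log_z_zsun + 0.92
    beta = 0.67 * log_z_zsun + 0.55
    gamma = 0.95 * log_z_zsun + 0.80
    delta = 1.52 * log_z_zsun + 1.00
    return 10.0**alpha * sfr**beta * r_e_kpc**(-gamma) * b_over_a**(-delta)

# e.g. a Solar-metallicity disc with SFR = 1 Msun/yr, R_e = 3 kpc, b/a = 0.6:
print(irx_universal(1.0, 3.0, 0.6, 0.0))
```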
§.§ Determining projected Σ_dust from metallicity, SFR, R_e and b/a

We have shown that IRX in the model is controlled by the dust surface density and the other model parameters, whereas our empirical universal IRX relation shows that IRX depends on metallicity, L_IR or SFR, size and axial ratio. We therefore infer the dust surface density from Z, SFR, R_e, and b/a in the following steps. First, the dust surface density (Σ_dust = M_dust/(π R_e^2)) can be inferred from the gas surface density via the dust-to-gas ratio (DGR) as Σ_dust = Σ_gas × DGR. The gas surface density is tightly correlated with the SFR surface density, known as the Kennicutt-Schmidt Law: Σ_SFR = ϕ Σ_gas^n, where n ∼ 1.4 as suggested by <cit.>. Here we do not use a fixed value for the K-S slope but let it vary freely in our model. We notice that the `surface density' presented here is defined using the half-light radius,[Here we do not distinguish strictly between the half-light radius and the half-mass radius. The SDSS sample only provides optical measurements of the half-light radii, tracing the stellar emission albeit subject to potential light-weighting effects <cit.>.] while the equations in Section <ref> use the disc scale-length instead. The two radii relate as R_e = 1.678 R_⋆ for an exponential disc, and when applying Equation <ref> we account for this difference in the definition of surface density. Motivated by the power-law relation between stellar and dust radii discussed by <cit.>, we parametrize R_e,dust = ψ R_e,star^p. On the other hand, the dust-to-gas ratio is found to correlate with gas-phase metallicity in the form of a power law <cit.>: DGR = ϵ Z^q. <cit.> found that the Z–DGR relation follows a linear relationship, corresponding to a constant dust-to-metal ratio, at high metallicity (12+log(O/H) ≳ 8), but breaks down in the lower-metallicity regime where dust growth becomes inefficient <cit.>. Given that our sample does not contain galaxies with 12+log(O/H) < 8, a single power-law Z–DGR relation is used in the following analysis; one should be careful when extrapolating our model to the low-metallicity regime with 12+log(O/H) < 8. Again, the power-law slope q is also a free parameter in our model.

Putting Equations <ref>, <ref>, <ref>, and <ref> together, we have Σ_dust = (SFR/(π ϕ ψ^2 R_e^2p))^1/n × ϵ Z^q = μ [SFR/M_⊙ yr^-1 (R_e/3 kpc)^-2p]^1/n × (Z/Z_⊙)^q, where μ ≡ ϵ/[(π ϕ ψ^2)^1/n] can be treated as a single new normalization parameter, since these sub-parameters are degenerate and indistinguishable. Finally, we interpret the axial ratio as a projection effect in determining the projected dust surface density, i.e., Σ_dust^proj = μ [SFR/M_⊙ yr^-1 (R_e/3 kpc)^-2p]^1/n (Z/Z_⊙)^q / (b/a). Here we do not distinguish between the axial ratios of the stellar and dust discs, as the difference between the two is not significant. We notice that there is a possible risk in interpreting the axial ratio as a simple projection effect if the galaxy is not an ideal thin disc; we will show in Appendix <ref> that this does not significantly affect our results. For brevity, we omit the `proj' superscript hereafter.
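A direct transcription of this mapping (a sketch; variable names ours, with the default μ anticipating the DustPedia calibration μ ≈ 10^5.16 M_⊙ kpc^-2 given in the next subsection):

```python
def sigma_dust_proj(sfr, r_e_kpc, b_over_a, z_zsun, n, p, q, mu=10**5.16):
    """Projected dust surface density [Msun/kpc^2] from Z, SFR, R_e and b/a.
    sfr in Msun/yr, r_e_kpc the stellar half-light radius, z_zsun = Z/Zsun."""
    sigma_face_on = mu * (sfr * (r_e_kpc / 3.0) ** (-2.0 * p)) ** (1.0 / n) * z_zsun**q
    return sigma_face_on / b_over_a   # inclination treated as pure projection

# e.g. with the best-fitting slopes reported later in the Results section:
print(sigma_dust_proj(1.0, 3.0, 0.6, 1.0, n=1.84, p=0.69, q=1.0))
```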
In summary, the (projected) Σ_dust of the galaxies in our observed sample is calculated from four observed parameters, Z, SFR, R_e and b/a, the same parameters used to define the universal dust attenuation relation in <cit.>. This calculation involves four free parameters: μ, p, n and q.

§.§ Metallicity dependence of τ_bc, F_bc, and C_bc

Metallicity is the most important parameter in our attenuation model, as it is believed to affect not only the dust surface density but also the star-dust geometry. First, the BCs in this model are assumed to be identical, i.e., to have the same radii and gas masses. Their dust properties are determined by the dust-to-gas ratio of the BC, which is tightly correlated with the gas-phase metallicity. Following Equation <ref>, the optical depth τ_bc can be written as τ_bc = τ_bc,Z_⊙ × (Z/Z_⊙)^q.

On the other hand, how to set the forms of F_bc and C_bc as a function of metallicity is a non-trivial task because there are no direct observational constraints to guide us. Some hints can be obtained from the scaling relations of other observed quantities. The specific SFR (sSFR = SFR/M_∗) is found to be anti-correlated with gas-phase metallicity in the form of a power law <cit.>. Generally speaking, SFR traces recent star formation in the dense ISM (associated with BCs), while M_∗ traces old stars in the diffuse ISM (associated with diffuse dust). Therefore, we can assume the BC-to-diffuse dust ratio follows a power law, F'_bc = M_d^bc/M_d^diff = ζ × (Z/Z_⊙)^η, and by definition, the BC-to-total mass ratio is then F_bc = 1/(1 + 1/F'_bc) = 1/(1 + [ζ × (Z/Z_⊙)^η]^-1). At Solar metallicity, F_bc,Z_⊙ = 1/(1 + 1/ζ). We rewrite Equation <ref> by replacing ζ with F_bc,Z_⊙ as F_bc = 1/(1 + (1/F_bc,Z_⊙ - 1) × (Z/Z_⊙)^-η).

On the other hand, C_bc describes the fraction of UV light emitted from within BCs (relative to the total UV emission). The stellar populations in a galaxy that contribute UV emission can be divided into two categories: the young stars (Y) that form in dense BCs, and the intermediate-age stars (M) that are no longer surrounded by BCs (due to stellar feedback) but still contribute to the UV radiation. We use Y and M to represent the UV emission from the young and intermediate-age stars, respectively. Similar to F_bc, we also assume a power-law relation between C'_bc = Y/M and metallicity, and by definition, C_bc = Y/(Y + M) = 1/(1 + [ξ × (Z/Z_⊙)^ν]^-1). Similarly, we have C_bc,Z_⊙ = 1/(1 + 1/ξ) at Solar metallicity and thus C_bc = 1/(1 + (1/C_bc,Z_⊙ - 1) × (Z/Z_⊙)^-ν).
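Both F_bc(Z) and C_bc(Z) take the same functional form, so a single helper covers both; a sketch (names ours; the Solar-metallicity anchors quoted as defaults are the best-fitting values reported in the Results section, and the illustrative slope in the example is merely chosen to be consistent with the F_bc values quoted there, not an output we take from the fit):

```python
def tau_bc(z_zsun, tau_bc_zsun=0.31, q=1.0):
    # BC opacity scales with the dust-to-gas ratio, hence with (Z/Zsun)^q
    return tau_bc_zsun * z_zsun**q

def fraction_vs_z(z_zsun, f_zsun, slope):
    """Shared form for F_bc(Z) and C_bc(Z): a power law in the odds ratio
    f/(1-f), anchored to the Solar-metallicity value f_zsun."""
    return 1.0 / (1.0 + (1.0 / f_zsun - 1.0) * z_zsun ** (-slope))

# e.g. F_bc at 1/4 Solar metallicity, for an illustrative slope eta = -1.4:
print(fraction_vs_z(0.25, f_zsun=0.14, slope=-1.4))   # ~0.53, cf. the Results
```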
§.§ MCMC fitting and reducing the model parameters

In summary, our geometry model determines IRX from metallicity, SFR, R_e and b/a in two steps: in obtaining the dust surface density from the observed metallicity, SFR, R_e, and b/a, we have the model parameters μ, n, p, and q; in obtaining IRX from Σ_dust, we have the parameters κ, τ_bc,Z_⊙, q, C_bc,Z_⊙, η, F_bc,Z_⊙, ν, R̂, and Ĥ. The definitions, units and ranges of these model parameters are summarized in Table <ref>. Of them, κ_V is fixed to 0.62 × 10^-5 M_⊙^-1 kpc^2 according to <cit.>. <cit.> convert the dust mass surface density to A_V by assuming the observed Milky Way ratio of visual extinction to hydrogen column, and a fixed dust-to-gas mass ratio. They obtain A_V/Σ_dust = 0.67 × 10^-5 M_⊙^-1 kpc^2, corresponding to κ_V = 0.62 × 10^-5 M_⊙^-1 kpc^2.[The small numerical difference comes from A_V = 1.086 τ_V.] Again, a <cit.> attenuation curve is adopted, with τ_FUV/τ_V = 2.5.

The normalization parameter μ in determining Σ_dust from metallicity, SFR, R_e and b/a can be constrained by observations. The DustPedia project used the Herschel Space Telescope to make far-infrared, high-resolution observations of 810 nearby galaxies, which constitute an ideal sample for calibrating Σ_dust observationally. By definition, in Equation <ref> the normalization parameter μ is the Σ_dust at SFR = 1 M_⊙ yr^-1, R_e,star = 3 kpc, and Z = Z_⊙. We select a sub-sample with a limited range of R_e around 3 kpc (from 2 to 4 kpc) and 12+log(O/H) around 8.69 (from 8.6 to 8.85), and fit the sample to obtain the best-fitting power-law relation between Σ_dust and SFR. At SFR = 1 M_⊙ yr^-1, we finally obtain the normalization parameter μ ≈ 10^5.16 M_⊙ kpc^-2. This value is fixed in our model fitting.

We have two structural parameters in the model: the star-to-diffuse dust scale-length ratio R̂ and scale-height ratio Ĥ. By analysing the dust profiles of edge-on galaxy discs, <cit.> found that both the 500-to-100 μm scale-height ratio (H_500/H_100) and scale-length ratio (R_500/R_100) are on average larger than unity. Given that the 100 μm and 500 μm emission roughly trace SFR and dust mass, respectively, this suggests that the galaxy star-to-diffuse dust scale-height ratio Ĥ < 1 and scale-length ratio R̂ < 1. For simplicity, we let Ĥ ≡ R̂ in our model fitting.

Then, our model has nine remaining free parameters to be constrained: n, p, q, τ_bc,Z_⊙, C_bc,Z_⊙, η, F_bc,Z_⊙, ν, and R̂. We make use of the Markov chain Monte Carlo (MCMC) sampling method to explore the 9-dimensional parameter space and efficiently characterize the uncertainties associated with the derived parameters. In detail, we consider a Gaussian likelihood function and assume flat priors for our model parameters in the following ranges: n = [0, 3], p = [0, 3], q = [0, 3], τ_bc,Z_⊙ = [0, 3], F_bc,Z_⊙ = [0, 1], η = [-9, 9], C_bc,Z_⊙ = [0, 1], ν = [-9, 9], and R̂ = [0.1, 1.1]. With the large sample of local SFGs, we can place considerable constraints on the geometry model with these free parameters. The Python package emcee[<https://emcee.readthedocs.io/en/stable/>] <cit.> is used to perform the MCMC sampling, using the affine invariant algorithm proposed by <cit.>. The algorithm behind emcee has several advantages over traditional MCMC sampling methods and excellent performance as measured by the autocorrelation time. The sampler is initialised around the position in parameter space that gives the best fit to the data, obtained with a maximum likelihood method.
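The structure of such a fit is simple to sketch. The snippet below illustrates the emcee setup with the flat priors above; the forward model is replaced by a toy stand-in (in the real fit it would chain sigma_dust_proj → tau_total → irx_from_tau_fuv from the earlier sketches), and all data are mock, so this illustrates the machinery rather than reproducing the actual analysis.

```python
import numpy as np
import emcee

# Free parameters, in order: n, p, q, tau_bc_zsun, F_bc_zsun, eta, C_bc_zsun, nu, R_hat
BOUNDS = np.array([(0, 3), (0, 3), (0, 3), (0, 3), (0, 1),
                   (-9, 9), (0, 1), (-9, 9), (0.1, 1.1)])

def model_log_irx(theta, sfr):
    # Toy stand-in for the full forward model, kept trivial so the snippet
    # runs end to end; only the K-S slope n = theta[0] is used here.
    return np.log10(sfr) / theta[0]

def log_prob(theta, sfr, log_irx_obs, sigma=0.19):
    if np.any(theta <= BOUNDS[:, 0]) or np.any(theta >= BOUNDS[:, 1]):
        return -np.inf                 # flat priors within the quoted ranges
    resid = log_irx_obs - model_log_irx(theta, sfr)
    return -0.5 * np.sum((resid / sigma) ** 2)   # Gaussian likelihood

rng = np.random.default_rng(0)
sfr = rng.lognormal(0.0, 0.5, size=200)                        # mock sample
log_irx_obs = np.log10(sfr) / 1.84 + rng.normal(0, 0.19, 200)  # mock "data"

ndim, nwalkers = 9, 32
start = np.array([1.8, 0.7, 1.0, 0.3, 0.15, -1.0, 0.85, -4.0, 0.5])
walkers = start + 1e-3 * rng.standard_normal((nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(sfr, log_irx_obs))
sampler.run_mcmc(walkers, 2000, progress=False)
chain = sampler.get_chain(discard=500, flat=True)
print(np.median(chain, axis=0))        # medians of the posterior draws
```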
As shown in Fig. <ref>, we obtain an excellent match to the observed IRX. The 1σ dispersion of the IRX residuals is ∼0.19 dex, consistent with the dispersion of the empirical power-law fitting applied by <cit.>. Moreover, this relation derived from local SFGs also holds for galaxies up to z = 3. Fig. <ref> also shows the corner plot of the MCMC fit, allowing us to examine the robustness of the best-fitting parameters. We can see that, although there are some degeneracies (e.g. a lower C_bc tends to couple with a higher τ_bc in the fitting, and vice versa), the model parameters are well constrained. Even in the worst case (e.g. the anti-correlation between the inferred C_bc,Z_⊙ and τ_bc,Z_⊙), the relative error does not exceed 30%. We point out that our sample includes 37 local SFGs with Z ≲ 1/3 Z_⊙ to provide reasonable constraints on the BC model parameters, although the vast majority of our sample galaxies cover the high-metallicity regime, where these parameters are hardly constrained. The estimated values of each parameter are listed in Table <ref>.

§ RESULTS

We show in this section the results of fitting the IRX of local SFGs with our geometry model. We first consider the best-fitting power-law indices of the ISM scaling relations in Section <ref> and the best-fitting geometry parameters as a function of metallicity in Section <ref>. Then, in Section <ref>, we show how our geometry model reproduces the universal IRX relation presented in <cit.>.

§.§ The best-fitting scaling relations in determining Σ_dust

We have three free parameters, n, p, and q, in determining Σ_dust based on the galaxy scaling relations. The best-fitting K-S Law power-law slope n is 1.84, slightly steeper than the value of 1.4 by <cit.> or the revised value of 1.5 by <cit.>. However, the precise K-S Law slope depends on the sample selection and the methods used to measure the physical quantities. For example, <cit.> found that the slope increases to 1.92 if a metallicity-dependent X_CO (following the prescription of <cit.>) is adopted, consistent with our best-fitting result. Using a somewhat different but physically motivated prescription of X_CO, <cit.> also found a similarly steep slope of 1.95. In a modelling of observed Balmer decrements using simplified dust geometries, sharing several ingredients in common with our approach, <cit.> likewise concluded that a significantly superlinear K-S slope (of ∼1.5) was required to reproduce the observed Hα/Hβ ratios.

The best-fitting relation between dust and stellar (r-band) radius follows R_e,dust ∝ R_e,star^p with the power-law slope p = 0.69. Using the carefully selected sample from the DustPedia project, we find that the best-fitting relation between dust and stellar radius follows R_e,dust ∝ R_e,star^0.8, very close to our model prediction. This indicates that the growth of the dust disc is slower than that of the stellar disc. In an inside-out disc growth scenario, the materials that build a stellar disc are primarily accreted from the surrounding environment. In this sense, the properties of a galaxy would be, to a large extent, shaped by the properties of the accreted material (such as its specific angular momentum, accretion rate, etc.). <cit.> found that more gas-rich galaxies have a higher HI-to-stellar size ratio; our low-mass, gas-rich galaxies have a higher dust-to-stellar size ratio.

A near-linear Z–DGR relation (i.e., q ∼ 1) is suggested by our model fitting. This is consistent with the observational result that the DGR is well represented by a power law with a slope of ∼1 <cit.>, and with theoretical expectations from dust grain growth <cit.>. We notice that the linear Z–DGR relation may break down below a critical metallicity <cit.>. <cit.> defined a critical metallicity regime of 12+log(O/H) = [8, 8.3]. We recall that our sample has metallicities ranging from 12+log(O/H) = 8.3 to 8.9, above this critical regime. On the other hand, <cit.> suggested that a single super-linear relation (slope ∼[1.78, 2.45], depending on the adopted metallicity calibration) is sufficient to describe the observed Z–DGR relation at all metallicities.
This conclusion is based on the chi-square (χ^2) of a linear regression to the sample and does not necessarily imply that the true relationship follows this functional form. If we focus only on the high-metallicity regime, as shown in figure 10 of their paper, the dust-to-metal ratio more or less flattens with metallicity, indicating that a linear Z–DGR relation holds in the high-metallicity regime.

§.§ The best-fitting geometry parameters

For the geometry parameters, including F_bc, τ_bc,Z_⊙, C_bc, and R̂^tot, we show them as a function of metallicity in Fig. <ref>. We find a birth-cloud optical depth of τ_bc,V = 0.31 at Solar metallicity, comparable to the value of 0.33 at the same metallicity given in <cit.>. τ_bc,V increases with metallicity following τ_bc ∝ Z, i.e., a linear relation. This follows from the best-fitting linear Z–DGR relation, since τ_bc in our model is determined solely by the dust-to-gas ratio.

The BC mass fraction F_bc increases with decreasing metallicity. It equals 0.14 at Solar metallicity and increases to 0.53 at 1/4 Solar metallicity. This is consistent with lower-mass/metal-poor galaxies having younger stellar populations and a more clumpy geometry <cit.>. By modelling the inclination dependence of stellar and nebular attenuation together with a dust geometry model, <cit.> suggested that the typical BC mass fraction is 0.3 for an MW-like galaxy. Determining F_bc in observations is a non-trivial task, as it is not a directly observed quantity. The observed dust SED can be divided into two parts: the hot dust around young stars and the diffusely distributed cold dust <cit.>, very similar to our definitions of BC and diffuse dust, respectively. By decomposing the infrared SEDs of nearby galaxies, <cit.> found that the mass fraction of hot dust is in the range of 0-6%. On the other hand, molecular gas is often associated with BCs and atomic gas with the diffuse ISM; the molecular gas fraction (∼ F_bc) is ∼20% for our local galaxies <cit.>. We do not intend to equate our F_bc with either the hot dust fraction or the molecular gas fraction, but note that they span a similar range to our best-fitting F_bc.

The BC covering fraction also decreases with metallicity, with values close to unity at lower metallicity, dropping to 0.84 at Solar metallicity. Massive and metal-rich galaxies are relatively older and have a higher fraction of intermediate-age stars that still contribute to the UV emission but have had their surrounding BCs dispersed by stellar feedback. As for F_bc, there are no direct observational constraints on C_bc. Galaxies at low metallicity not only have a higher fraction of their dust comprised in BCs, but also a higher fraction of UV stars embedded in BCs; in contrast, if there is no BC dust (i.e., F_bc = 0), there are no UV stars surrounded by BCs (i.e., C_bc = 0). The radiation and stellar winds from newly-born massive stars may clean up the leftover gas in BCs over a timescale of only a few Myr, before the occurrence of supernova (SN) explosions from these stars <cit.>. It is reasonable to expect that intermediate-age stellar populations could exist in metal-poor galaxies, although the best-fitting C_bc ∼ 1 at low metallicity suggests the contribution from intermediate-age populations may be negligible.
Rather, star formation across a galaxy can reasonably be seen as a roughly constant process over the most recent 500 Myr, possibly independent of the metallicity of the ISM in the galaxy, which would give a metallicity dependence of ν ∼ 0, instead of the obtained best-fitting ν = -3.9, which is quite steep. We argue that the best-fitting C_bc and ν at low metallicity should not be thought of as a measure of the UV radiation fraction from within BCs; instead, they may encode physics that impacts what fraction of the UV light will be subject to BC attenuation. Alternatively, the expelling of gas and dust driven by stellar feedback might become inefficient in metal-poor BCs, and the expelled clouds could remain clumpy and act similarly to BCs in attenuating the UV light. We notice that at low metallicity these model parameters might also be biased by our assumption of R̂ = Ĥ (see Section <ref> for more details).

The best-fit star-to-diffuse dust radius ratio R̂ is ∼0.53 in our model, which suggests that the stellar disc has a smaller radius than the diffuse dust disc. Star formation often takes place in the more central regions that feature a higher dust/gas density, while the diffuse dust extends further into the galaxy outskirts <cit.>. We notice that the observed dust emission consists of both dense/BC and diffuse dust. To compare with observations, we calculate the star-to-total dust radius ratio R̂^tot based on R̂ and F_bc (see Appendix <ref>). We can see that the star-to-total dust radius ratio R̂^tot decreases with increasing metallicity. In extremely metal-poor environments (below the metallicity coverage of our sample), F_bc is expected to be close to unity and the `total dust' is dominated by BC dust, which traces the stars; in other words, the BC dust and total dust discs are then one and the same, i.e., R̂^tot = 1. Conversely, in the high-metallicity limit, F_bc approaches zero, diffuse dust plays the dominant role, and therefore R̂^tot = R̂ = 0.52. By examining the radial distribution of dust, stars, gas, and SFR in a sub-sample of 18 face-on spiral galaxies extracted from the DustPedia sample, <cit.> found the average R̂^tot = R_SFR/R_dust^tot to be ∼0.57 (see their table 7). The average metallicity of their sample is 8.6, at which our model predicts a consistent R̂^tot of 0.53.[Here the gas-phase metallicity is taken from <cit.> with a consistent PP04-N2 calibration.]

§.§ Comparing with the universal IRX relation presented in <cit.>

The main goal of this paper is to understand the physical origins of the universal IRX relation presented in <cit.>, particularly the systematic flattening of the power-law slopes towards lower metallicity. Given a set of best-fitting model parameters, we have IRX(Z, SFR, R_e, b/a). The IRX–Z relation can then be predicted with our model at given SFR, R_e and b/a. As shown in <cit.>, besides the metallicity term, the power-law indices of SFR, R_e and b/a in the power-law relations also contain a dependence on metallicity. One way to eliminate the metallicity dependence of the other parameters and obtain a `pure' IRX–Z relation is to normalise SFR, R_e and b/a to unity. For consistency, we apply the same normalisation of SFR, R_e and b/a to unity in our model. We compare the model-predicted IRX–Z relation thus obtained against the relation given by <cit.> (who similarly normalised SFR, R_e and b/a to unity) in the first panel of Fig. <ref>. We can see that our model perfectly reproduces the IRX–Z relation given by <cit.>.
A power-law relationship adequately describes the observed IRX–Z relation. Similarly, the SFR–IRX relation can be obtained from the best-fitting model by fixing metallicity, R_e and b/a. We notice that the slope of the model SFR–IRX relation is not constant but decreases with decreasing SFR.[Keeping in mind that the Σ_dust in our model is determined by the combination of Z, SFR, R_e and b/a in the form of power laws.] One way to reduce this to a single measure of slope is to adopt the slope of the line tangent to the curve at some representative SFR. We divide our sample galaxies into continuous metallicity bins with a constant bin width of 0.1 dex. In each metallicity bin, we calculate the average metallicity, SFR, R_e and b/a. With these averaged values, we obtain a model-predicted SFR–IRX relation and then measure the slope of the tangent line to the curve at the average SFR (i.e., β). Repeating the calculation for all metallicity bins, we obtain β as a function of metallicity (top-right panel of Fig. <ref>). With a similar approach, the power-law slopes of the R_e–IRX relation (γ) and the b/a–IRX relation (δ) as a function of metallicity can also be determined. As shown in the remaining panels of Fig. <ref>, all the power-law slopes β, γ, and δ derived from our best-fitting two-component geometry model agree perfectly with those in <cit.> as a function of metallicity.

We notice that the dispersion of the IRX residuals decreases slightly, from 0.188 dex in the empirical power-law fitting to 0.185 dex in the geometry model fitting. That is, the IRX relations are described slightly better by our geometry model than by the power-law forms; the latter should be regarded as an approximate description over a certain dynamic range of the observed parameters.
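In practice, the tangent-slope measurement just described is a central finite difference of log IRX in the corresponding log coordinate. A minimal sketch (reusing irx_universal from the earlier snippet; names ours), which for the pure power-law form trivially recovers the index:

```python
import numpy as np

def tangent_slope(f_log, x0_log, eps=0.01):
    # Central finite difference of log10(IRX) with respect to log10(x)
    return (f_log(x0_log + eps) - f_log(x0_log - eps)) / (2.0 * eps)

# beta at SFR = 1 Msun/yr for the Solar-metallicity relation, with
# R_e = 3 kpc and b/a = 0.6 held fixed:
beta = tangent_slope(
    lambda ls: np.log10(irx_universal(10.0**ls, 3.0, 0.6, 0.0)), 0.0)
print(round(beta, 2))   # 0.55, the power-law index beta at Z = Zsun
```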
§ DISCUSSION

§.§ Constant τ_bc at a given metallicity

While the IRX caused by diffuse dust increases monotonically with Σ_dust, adding a BC component with constant optical depth can be very effective in flattening the slope of the IRX relations, as shown in Fig. <ref>. In other words, the key assumption for capturing the systematic change in the slopes of the IRX relations is that the optical depth of BCs is uniquely set by metallicity and is independent of other galaxy properties, such as SFR, R_e and b/a. It is natural to ask whether the average τ_bc might instead depend on galaxy properties other than metallicity.

A galaxy contains numerous (of the order of hundreds to millions of) BCs, which follow certain distributions of ages and initial masses (determined by the star-forming process). If these distributions are universal, then the average optical depth of the BCs should be similar in each galaxy. If, in contrast, the BC mass function and stellar IMF depend on the global properties of galaxies, such as their SFR or gas surface density, then one might expect the average dust opacity to be correlated with such global properties. <cit.> argued that it is reasonable to assume that the optical depth of BCs (which they call `clumps') is decided only by the metallicity or dust-to-gas ratio, since the average electron density and size distribution of H II regions (associated with BCs) are similar for all types of local SFGs <cit.>. However, <cit.> found that the power-law slope of the Hα luminosity function of extragalactic H II regions decreases with the galaxy star formation rate surface density Σ_SFR. This would correspond to an enhanced clustering of young stars at high gas surface densities. If the star formation efficiency is similar among the BCs, galaxies with higher SFR surface density would then have more massive BCs on average. We note that more massive BCs do not necessarily have a higher column density or optical depth. The masses and sizes of giant molecular clouds (GMCs) in our Milky Way are found to follow a relation of M_GMC ∝ R^2, i.e., a constant surface/column density <cit.>. Mapping CO emission at cloud-scale resolution for ten nearby galaxies, <cit.> likewise found extragalactic GMCs to follow a mass-size relation of M_GMC ∝ R^2. Using the data on the average Σ_GMC of nearby galaxies given by <cit.>, we find that, within uncertainties, the average Σ_GMC changes little with either global SFR or gas surface density. In brief, there is no evidence that the average τ_bc of nearby galaxies should be correlated with galaxy properties, particularly the SFR or gas surface density. It is a good approximation that the optical depth of BCs is determined only by metallicity, without additional dependencies on other galaxy properties <cit.>. Exceptions may occur in the more extreme environments of local ultra-luminous infrared galaxies (ULIRGs; <cit.>) or higher-redshift galaxies, which are substantially more gas-rich and surface dense in a galaxy-averaged sense, and also feature enhanced local densities <cit.>.

§.§ Physical origins of the universal IRX relation

One main goal of this work is to explore the physical origins of the universal IRX relation, particularly the systematic changes of the slopes of IRX as functions of SFR, R_e, and b/a with metallicity. To do so, we built a new `two-component' dust geometry model with flexible parameters and fitted it to the observed data. Our model fitting shows results consistent with the power-law fitting presented in <cit.>, including the systematic changes in the power-law slopes of IRX with metallicity, SFR, R_e and b/a (see Fig. <ref>).

We present a schematic diagram in Fig. <ref> to illustrate how the dust geometry evolves with metallicity in our picture. First, the BC attenuation is taken to be constant (determined only by metallicity), while the diffuse dust attenuation increases with Σ_dust (see the bottom panels). The latter is determined by the combination of Z, SFR, R_e, and b/a in the form of power laws. The changes in the slopes of the total Σ_dust–IRX relation are controlled by the competition between the BC and diffuse dust attenuation, which is regulated by the changes in metallicity. At low metallicity, there is less diffuse dust and the BC attenuation dominates. In this case, one can expect that the total dust attenuation is controlled only by the gas-phase metallicity (through the changing dust-to-gas ratio) and is independent of SFR, disc size and inclination. This is why we see a flat Σ_dust (SFR, R_e, b/a)–IRX relation at lower metallicity in the bottom-left panel of Fig. <ref>. At high metallicity, there is a large amount of diffuse dust, both due to an increasing total amount of dust and due to a higher diffuse dust fraction. A fraction of the UV stars lose their surrounding BCs due to stellar feedback (the empty circles), and the attenuation caused by the BCs is somewhat reduced. Although the BC dust opacity moderately increases due to the increased dust-to-gas ratio at higher metallicity, the diffuse dust attenuation still plays the dominant role.
The total dust attenuation now becomes sensitive to changes in the global dust surface density, which is set by SFR, metallicity, size and inclination. This is why we see a steep Σ_dust(SFR, R_e, b/a)–IRX relation at high metallicity in the bottom-right panel of Fig. <ref>. Unlike SFR, R_e, and b/a, variations in galaxy metallicity not only change the galaxy-averaged (projected) dust surface density through the associated dust-to-gas ratio, but also alter the dust geometry in galaxies. Low-metallicity galaxies have a lower dust surface density and a BC-dominated or clumpier dust geometry. The diffuse starlight embedded in a clumpy geometry will more easily escape, and the BC clouds themselves feature a lower opacity, altogether leading to a smaller net attenuation. In other words, the positive Z–IRX relation in observations is shaped by the combination of a positive Z–Σ_dust relation as well as an anti-correlation between metallicity and the clumpiness of the geometry. While galaxy stellar mass is a crucial physical parameter governing the galaxy formation process, gas-phase metallicity directly measures the gaseous metal abundance and indirectly probes the dust abundance of the ISM, where star formation and dust obscuration are intimately connected. Therefore, a measure of dust attenuation such as IRX is more closely linked to the specific property of gas-phase metallicity than to the absolute amount of stellar mass. As mentioned in Section <ref>, the universal IRX relation involves SFR, R_e, Z/Z_⊙, and b/a, but not M_∗. The dependence on b/a is solely attributed to the inclination effect, while SFR, R_e, and Z/Z_⊙ are all related to the ISM of SFGs. Here, R_e is measured from the stellar light profile but is used as a proxy for the spatial distribution of the ISM and young stellar populations (see for more discussion). It is important to emphasize that the dust attenuation parameter IRX is derived from the ratio between the absorbed and unabsorbed bolometric luminosity of the young and intermediate-age stellar populations that are tightly associated with the ISM, rather than the older stars that dominate the stellar mass in SFGs. The geometry parameters in our model, such as the star-to-total dust scale-length ratio R̂^tot, are set to be related to the R_e of a galaxy and are thus connected with stellar mass. Our model fitting results show that R̂^tot moderately decreases with metallicity. Given that metallicity is correlated with stellar mass, the best-fitting R̂^tot as a function of metallicity might also reflect its dependence on stellar mass. For general populations of disc-dominated SFGs, the ISM is globally associated with older stellar populations in terms of its geometry. In conclusion, our study shows that our model parameters as a function of metallicity are able to unveil the processes that regulate the universal IRX relation.

§.§ Explaining the non-evolution of the mass–attenuation relation

Recent studies found that the relation between dust attenuation and stellar mass evolves only mildly with cosmic time up to z∼3 <cit.>. However, this introduces a `non-evolution puzzle', since the SFR, ISM content, size, and metallicity all evolve significantly with redshift at fixed stellar mass. Quantitatively, <cit.> found the dust surface density (estimated either via direct or indirect methods) to increase by a factor of ∼4 from z=0 to z=2.3 at given M_∗ while the dust attenuation remains unchanged.
This means that the dust attenuation per unit Σ_dust (hereafter `dust attenuation efficiency') should decrease towards higher redshifts. They propose several potential origins at z∼2.3: 1) a smaller dust-to-metal ratio (a steeper Z–DGR relation); 2) a dust distribution that is relatively more extended compared to the stars; 3) clumpier dust distributions; and 4) a lower dust mass extinction coefficient κ_λ. Of these, 2) is excluded first, since high-z galaxies usually have smaller far-IR sizes than their local counterparts <cit.>.[<cit.> note that a low galaxy-integrated attenuation can also be achieved by adopting a more compact (rather than a more extended) distribution of dust relative to the stars, thus reducing the column to all but the most central stars. They evaluate, however, that the factor by which dust discs ought to be smaller is unrealistically large in such a scenario, and therefore favour the clumpy scenario too.] In our geometry model, both the Z–DGR relation and κ are fixed, i.e., 1) and 4) are not included in our model. In other words, through the modelling study in this work, only the clumpier geometry at high-z is favoured. High-z galaxies usually have lower metallicity than local galaxies, and they are expected to have a clumpier geometry as shown in Fig. <ref>. In this case, more dust is in the form of BCs rather than smoothly distributed in the ISM. The decrease of diffuse dust causes a decrease in dust attenuation, while the increasing BC dust only increases the number of BCs but does not affect the global dust attenuation. On the other hand, if the BC covering fraction is not 100%, a clumpier geometry means that a fraction of starlight will escape directly from the galaxy without encountering clumps and suffer little dust attenuation. Both effects lead to a smaller dust attenuation at a given dust surface density. The interpretation in terms of a more clumpy (i.e., BC-dominated) dusty geometry is also in line with the reduced (or even absent) inclination dependence of attenuation in high-z SFGs, as observed out to cosmic noon by <cit.> and <cit.>, and beyond cosmic noon by <cit.>. Motivated by the universality of our IRX relation, we predict the mass–Σ_dust relation and mass–attenuation relation at different redshifts based on the best-fitting model as well as the evolution of galaxy scaling relations, including the mass–SFR relation <cit.>, the mass–metallicity relation <cit.>, and the mass–size relation <cit.>. The results are shown in Fig. <ref>. We remind the reader that such predictions are based on the assumption that the K–S Law, the Z–DGR relation, the R_d–R_⋆ relation and the metallicity dependence of the dust geometry all change only mildly across cosmic time. We can see that, although a significant evolution of Σ_dust at given M_∗ is present, the dust attenuation evolves mildly at low and intermediate masses. This indicates that a clumpy geometry at high-z is sufficient to explain the puzzle of significant evolution in Σ_dust but no evolution in dust attenuation <cit.>. Besides, our model predicts a moderate evolution of dust attenuation at M_∗>10^10.3 M_⊙. This is consistent with <cit.>, who found that IRX evolves mildly at M_∗<10^10.5 M_⊙ but moderately at higher masses. However, <cit.> did not find a moderate evolution of dust attenuation traced by the Balmer decrement at the high-mass end. We notice that the sample used by <cit.> is not `very' massive, and the majority of its galaxies have M_∗ less than ∼10^10.5 M_⊙ <cit.>.
Another possible reason might be the difference between the Balmer decrement and IRX indicators: the former traces nebular emission attenuation, while the latter traces stellar emission attenuation <cit.>. More effort is needed to give a self-consistent explanation; such an analysis, however, goes beyond the scope of this work. Given that both distant and nearby SFGs conform to the universal IRX relation, it is intriguing to explore the evolutionary trajectory of a galaxy from high redshift to the present day. Extracting such an evolutionary track from the IRX–mass–redshift relation depicted in the middle panel of Fig. <ref> requires connecting data points representing low-mass SFGs at z=3 with more massive SFGs at lower redshifts. Following the methodology outlined by <cit.> to retrieve the evolutionary path of a Milky-Way-like galaxy from z=3 to the present day, we show in Fig. <ref> their derived histories of star formation rate (SFR), stellar mass (M_∗), effective radius (R_e), and metallicity (Z/Z_⊙). Additionally, for comparison, we present the corresponding best-fitting parameters (τ_bc,V, F_bc and C_bc) of our two-component star–dust geometry model, as well as the model-predicted IRX. It can be observed that, in general, the IRX curve initially follows the SFR curve, with deviations likely resulting from metallicity- and structure-related processes. A striking transition occurs roughly around z∼1.5, from a metal-poor and compact progenitor to a metal-rich and extended disky galaxy.

§.§ Caveats

For simplicity, we have utilized the Calzetti Law as the fixed galaxy dust attenuation curve in our modelling. Additionally, we have assumed that the star-to-diffuse dust disc scale-length and scale-height ratios follow each other, i.e., R̂=Ĥ. The Calzetti Law has been successfully applied in our modelling of the universal IRX relation for the overall population of SFGs. It is important to note that the dust attenuation curve may differ between galaxy populations. For example, the Milky Way, the Large Magellanic Cloud, and the Small Magellanic Cloud exhibit different curves. Previous studies have reported variations in the slope of the dust attenuation curves for different galaxy samples, which can be either steeper than the Calzetti Law <cit.> or shallower <cit.>. The slope of the dust attenuation curves and the dust attenuation were reported to be anti-correlated <cit.>, likely influenced by degeneracies in galaxy spectral energy distribution (SED) fitting <cit.>. Furthermore, the inclination of galaxies has been found to affect the shape of the observed attenuation curve, with edge-on galaxies exhibiting a greyer attenuation curve compared to face-on galaxies <cit.>. These findings suggest that incorporating flexibility in the dust attenuation curve could further improve the realism with which observations are reproduced by our modelling, and could provide insights into the factors influencing the slope of the effective dust attenuation curve. A greyer curve, as reported for highly-inclined (i.e., edge-on) SFGs <cit.>, may result in a lower τ_FUV at a given IRX, as a relatively larger portion of the dust-absorbed energy that is reprocessed into the IR would originate from stellar emission at wavelengths longward of the FUV regime. In such cases, the use of the Calzetti Law in our modelling could lead to an overestimation of τ_FUV.
However, we anticipate the model parameters best-fitting the observed IRX, metallicity, SFR, and R_e to remain largely unchanged. We note that highly-inclined galaxies represent only a small fraction (2.5% for SFGs with b/a<0.2) of our sample of SFGs, leaving the results of our model fitting unaffected. We acknowledge that our model does not have the capability to determine the slope of the dust attenuation curve, as enabled for example by the radiative transfer approaches employed by <cit.>, but note that this also falls beyond the scope of this work. Our assumption that R̂=Ĥ, meaning that the stellar disc and the dust disc mirror each other in shape up to a scaling factor, ensures that the axial ratios of the stellar and dust components are identical. This assumption simplifies the calculation of inclination-related effects and is a crucial ingredient of our two-component star–dust geometry model. Our results demonstrate that this assumption holds well on a global level. However, in real galaxies, the values of R̂ and Ĥ can vary within a range and may not necessarily mirror each other on an individual basis. For example, massive disc galaxies like NGC 891 often exhibit a thick stellar disc and a thin dust disc, resulting in an apparent dust lane when viewed edge-on. Such variations in R̂ and Ĥ can introduce scatter in the dust surface density (Σ_dust) relative to star formation and, consequently, in dust attenuation. For metal-poor galaxies, their morphologies tend to be spherical or rounder compared to regular discs, resulting in a higher Ĥ relative to R̂. Such star–dust geometries significantly reduce the inclination-dependent effect on dust attenuation. Metal-poor galaxies may thus require distinct model parameters R̂ and Ĥ compared to those determined predominantly by the metal-rich galaxies in our sample. The fixed R̂=Ĥ in our model fitting may bias C_bc towards unity in order to dramatically reduce the inclination-dependent effect at low metallicity. On the other hand, the dust attenuation indicator IRX is primarily influenced by the radiation from young and intermediate-age stellar populations, which are closely associated with the dust disc; IRX is less sensitive to the contribution from old stellar populations. It is important to note that a comprehensive analysis of dust attenuation should consider the geometric differences between stars and dust, accounting for possible variations in both R̂ and Ĥ. Dust attenuation in our model accounts for the absorption of light and does not take into consideration the effect of scattering. Light scattering would introduce additional light for galaxies seen face-on and less for galaxies seen edge-on. We believe this effect plays a secondary role in affecting the relationship between IRX and effective dust surface density. In contrast, the orientation effect of increasing projected dust columns with increasing inclination, which can be accurately quantified, has a more significant influence on this relationship.

§ SUMMARY

In this work, we have developed a new two-component dust model to describe the universal dust attenuation relation in local galaxies. We fit our model to the same SFG data exploited in , using Bayesian MCMC sampling. Our main findings are summarized as follows:

* Our model produces a good fit to the observational data, and successfully reproduces the systematic changes in IRX scaling relations over the entire metallicity range as presented in .
* We give constraints on three galaxy scaling relations through model fitting, including the star-formation law of Σ_SFR ∝ Σ_gas^1.84, the dust–stellar radius relation of R_e,dust ∝ R_e,star^0.69, and the metallicity–dust/gas relation of DGR ∝ Z. All of these relations are consistent with observational studies.

* The evolution of dust geometry as a function of gas-phase metallicity is also constrained quantitatively. As metallicity increases from 1/3 Solar to Solar, the star-to-total dust disc scale-length ratio decreases from 0.69 to 0.53, the optical depth of birth clouds increases from 0.11 to 0.34, the birth cloud mass fraction decreases from 0.42 to 0.14, and the BC covering fraction of UV light decreases from ∼1 to 0.75. Low-metallicity galaxies not only have a smaller dust surface density but also a clumpier geometry.

* We find that the variations in the slopes of IRX with SFR, R_e and b/a stem from the competition between BC and diffuse dust attenuation in galaxies, which is controlled by the galaxy metallicity. When a galaxy is metal-rich, there is a large amount of diffuse dust in its ISM, and IRX increases with the global dust surface density; at low metallicity, the birth cloud attenuation becomes important and even dominant, and IRX becomes insensitive to changes in either SFR, R_e or b/a.

§ ACKNOWLEDGMENTS

We are grateful to the anonymous referee for valuable comments and suggestions that improved this paper. This work is supported by the National Science Foundation of China (12073078, 12233005, 12173088 and 12033004), the science research grants from the China Manned Space Project with No. CMS-CSST-2021-A02, CMS-CSST-2021-A04 and CMS-CSST-2021-A07, and the Jiangsu Funding Program for Excellent Postdoctoral Talents (2022ZB473). XZZ thanks the CAS South America Centre for Astronomy (CASSACA) for the hospitality of a three-month visit. S.W. acknowledges support from the Chinese Academy of Sciences President's International Fellowship Initiative (grant no. 2022VMB0004). A.K. has been supported by the 100 talents program of Sun Yat-sen University. V.G. gratefully acknowledges support by the ANID BASAL project FB210003 and from ANID FONDECYT Regular 1221310.

§ DATA AVAILABILITY

The data underlying this article will be shared on reasonable request to the corresponding author.

[Ahn et al.2014]Ahn2014 Ahn C. P., et al., 2014, ApJS, 211, 17 [Aoyama et al.2017]Aoyama2017 Aoyama S., Hou K.-C., Shimizu I., Hirashita H., Todoroki K., Choi J.-H., Nagamine K., 2017, MNRAS, 466, 105 [Asano et al.2013]Asano2013 Asano R. S., Takeuchi T. T., Hirashita H., Nozawa T., 2013, MNRAS, 432, 637 [Baes et al.2020]Baes2020 Baes M., et al., 2020, A&A, 641, A119 [Baldwin, Phillips, & Terlevich1981]Baldwin1981 Baldwin J. A., Phillips M. M., Terlevich R., 1981, PASP, 93, 5 [Ballesteros-Paredes, D'Alessio, & Hartmann2012]Ballesteros-Paredes2012 Ballesteros-Paredes J., D'Alessio P., Hartmann L., 2012, MNRAS, 427, 2562 [Barro et al.2017]Barro2017 Barro G., et al., 2017, ApJ, 840, 47 [Battisti, Calzetti, & Chary2016]Battisti2016 Battisti A. J., Calzetti D., Chary R.-R., 2016, ApJ, 818, 13 [Bell et al.2005]Bell2005 Bell E. F., et al., 2005, ApJ, 625, 23 [Bolatto, Wolfire, & Leroy2013]Bolatto2013 Bolatto A. D., Wolfire M., Leroy A. K., 2013, ARA&A, 51, 207 [Calabrò et al.2017]Calabro2017 Calabrò A., et al., 2017, A&A, 601, A95 [Calzetti, Kinney, & Storchi-Bergmann1994]Calzetti1994 Calzetti D., Kinney A.
L., Storchi-Bergmann T., 1994, ApJ, 429, 582 [Calzetti et al.2000]Calzetti2000 Calzetti D., Armus L., Bohlin R. C., Kinney A. L., Koornneef J., Storchi-Bergmann T., 2000, ApJ, 533, 682 [Casasola et al.2017]Casasola2017 Casasola V., et al., 2017, A&A, 605, A18 [Catinella et al.2018]Catinella2018 Catinella B., et al., 2018, MNRAS, 476, 875 [Charlot & Fall2000]Charlot2000 Charlot S., Fall S. M., 2000, ApJ, 539, 718 [Chen et al.2020]Chen2020 Chen B.-Q., et al., 2020, MNRAS, 493, 351 [Chiang et al.2023]Chiang2023 Chiang I.-D., et al., 2023, MNRAS, 520, 5506 [Clark et al.2018]Clark2018 Clark C. J. R., et al., 2018, A&A, 609, A37 [Conroy2013]Conroy2013 Conroy C., 2013, ARA&A, 51, 393 [Curti et al.2023]Curti2023 Curti M., et al., 2023, arXiv, arXiv:2304.08516 [da Cunha et al.2010]da Cunha2010 da Cunha E., Eminian C., Charlot S., Blaizot J., 2010, MNRAS, 403, 1894 [Dale & Helou2002]Dale2002 Dale D. A., Helou G., 2002, ApJ, 576, 159 [Davies et al.2017]Davies2017 Davies J. I., et al., 2017, PASP, 129, 044102 [Davies et al.2021]Davies2021 Davies R. L., et al., 2021, ApJ, 909, 78 [De Vis et al.2019]De Vis2019 De Vis P., et al., 2019, A&A, 623, A5 [Draine & Li2007]Draine2007a Draine B. T., Li A., 2007, ApJ, 657, 810 [Draine et al.2007]Draine2007b Draine B. T., et al., 2007, ApJ, 663, 866 [Elbaz et al.2011]Elbaz2011 Elbaz D., et al., 2011, A&A, 533, A119 [Erb et al.2006]Erb2006 Erb D. K., Shapley A. E., Pettini M., Steidel C. C., Reddy N. A., Adelberger K. L., 2006, ApJ, 644, 813 [Feldmann2015]Feldmann2015 Feldmann R., 2015, MNRAS, 449, 3274 [Foreman-Mackey et al.2013]Foreman-Mackey2013 Foreman-Mackey D., Hogg D. W., Lang D., Goodman J., 2013, PASP, 125, 306 [Förster Schreiber & Wuyts2020]ForsterSchreiber2020 Förster Schreiber N. M., Wuyts S., 2020, ARA&A, 58, 661[Fujimoto et al.2017]Fujimoto2017 Fujimoto S., Ouchi M., Shibuya T., Nagai H., 2017, ApJ, 850, 83 [Galliano, Galametz, & Jones2018]Galliano2018 Galliano F., Galametz M., Jones A. P., 2018, ARA&A, 56, 673 [Garn & Best2010]Garn2010 Garn T., Best P. N., 2010, MNRAS, 409, 421 [Giannetti et al.2017]Giannetti2017 Giannetti A., et al., 2017, A&A, 606, L12 [Goodman & Weare2010]Goodman2010 Goodman J., Weare J., 2010, CAMCS, 5, 65 [Guo, Zheng, & Fu2013]Guo2013 Guo K., Zheng X. Z., Fu H., 2013, ApJ, 778, 23 [Gómez-Guijarro et al.2022]Gomez-Guijarro2022 Gómez-Guijarro C., et al., 2022, A&A, 658, A43[Gómez-Guijarro et al.2023]Gomez-Guijarro2023 Gómez-Guijarro C., Magnelli B., Elbaz D., Wuyts S., Daddi E., Le Bail A., Giavalisco M., et al., 2023, A&A, 677, A34 [Hahn et al.2022]Hahn2022 Hahn C., et al., 2022, ApJ, 926, 122 [Hao et al.2011]Hao2011 Hao C.-N., Kennicutt R. C., Johnson B. D., Calzetti D., Dale D. A., Moustakas J., 2011, ApJ, 741, 124 [Issa, MacLaren, & Wolfendale1990]Issa1990 Issa M. R., MacLaren I., Wolfendale A. W., 1990, A&A, 236, 237 [James et al.2002]James2002 James A., Dunne L., Eales S., Edmunds M. G., 2002, MNRAS, 335, 753 [Johnson et al.2007]Johnson2007 Johnson B. D., et al., 2007, ApJS, 173, 392 [Kaviraj2014]Kaviraj2014 Kaviraj S., 2014, MNRAS, 437, L41[Karim et al.2011]Karim2011 Karim A., et al., 2011, ApJ, 730, 61 [Katsianis et al.2020]Katsianis2020 Katsianis A., et al., 2020, MNRAS, 492, 5592 [Kennicutt & Evans2012]Kennicutt2012 Kennicutt R. C., Evans N. J., 2012, ARA&A, 50, 531 [Kennicutt1998]Kennicutt1998 Kennicutt R. C., 1998, ApJ, 498, 541 [Kennicutt & De Los Reyes2021]Kennicutt2021 Kennicutt R. C., De Los Reyes M. A. C., 2021, ApJ, 908, 61 [Kewley, Jansen, & Geller2005]Kewley2005 Kewley L. J., Jansen R. A., Geller M. 
J., 2005, PASP, 117, 227 [Kong et al.2004]Kong2004 Kong X., Charlot S., Brinchmann J., Fall S. M., 2004, MNRAS, 349, 769 [Kouroumpatzakis et al.2023]Kouroumpatzakis2023 Kouroumpatzakis K., Zezas A., Kyritsis E., Salim S., Svoboda J., 2023, A&A, 673, A16 [Koyama et al.2019]Koyama2019 Koyama Y., Shimakawa R., Yamamura I., Kodama T., Hayashi M., 2019, PASJ, 71, 8 [Kreckel et al.2013]Kreckel2013 Kreckel K., et al., 2013, ApJ, 771, 62 [Kruijssen et al.2019]Kruijssen2019 Kruijssen J. M. D., Schruba A., Chevance M., Longmore S. N., Hygate A. P. S., Haydon D. T., McLeod A. F., et al., 2019, Natur, 569, 519[Krumholz & McKee2005]Krumholz2005 Krumholz M. R., McKee C. F., 2005, ApJ, 630, 250 [Lada & Dame2020]Lada2020 Lada C. J., Dame T. M., 2020, ApJ, 898, 3 [Lara-Lopez et al.2013]Lara-Lopez2013 Lara-Lopez M. A., et al., 2013, MNRAS, 433, L35 [Larson1981]Larson1981 Larson R. B., 1981, MNRAS, 194, 809 [Leja et al.2022]Leja2022 Leja J., et al., 2022, ApJ, 936, 165 [Leroy et al.2011]Leroy2011 Leroy A. K., et al., 2011, ApJ, 737, 12 [Leroy et al.2021]Leroy2021 Leroy A. K., et al., 2021, ApJS, 257, 43 [Li et al.2019]Li2019 Li H., Wuyts S., Lei H., Lin L., Lam M. I., Boquien M., Andrews B. H., Schneider D. P., 2019, ApJ, 872, 63 [Lisenfeld & Ferrara1998]Lisenfeld1998 Lisenfeld U., Ferrara A., 1998, ApJ, 496, 145 [Liu et al.2013]Liu2013 Liu G., et al., 2013, ApJL, 778, L41 [Lofthouse et al.2017]Lofthouse2017 Lofthouse E. K., Kaviraj S., Conselice C. J., Mortlock A., Hartley W., 2017, MNRAS, 465, 2895[Lombardi, Alves, & Lada2010]Lombardi2010 Lombardi M., Alves J., Lada C. J., 2010, A&A, 519, L7 [Lorenz et al.2023]Lorenz2023 Lorenz B., et al., 2023, ApJ, 951, 29 [Lu et al.2023]Lu2023 Lu J., Shen S., Yuan F.-T., Zeng Q., 2023, ApJL, 946, L7 [Lu et al.2022]Lu2022 Lu J., Shen S., Yuan F.-T., Shao Z., Hou J., Zheng X., 2022, ApJ, 938, 139 [Maiolino & Mannucci2019]Maiolino2019 Maiolino R., Mannucci F., 2019, A&ARv, 27, 3 [Mannucci et al.2010]Mannucci2010 Mannucci F., Cresci G., Maiolino R., Marconi A., Gnerucci A., 2010, MNRAS, 408, 2115 [Martin et al.2005]Martin2005 Martin D. C., et al., 2005, ApJL, 619, L59 [McKinnon et al.2018]McKinnon2018 McKinnon R., Vogelsberger M., Torrey P., Marinacci F., Kannan R., 2018, MNRAS, 478, 2851 [McLure et al.2018]McLure2018 McLure R. J., et al., 2018, MNRAS, 476, 3991 [Misiriotis et al.2000]Misiriotis2000 Misiriotis A., Kylafis N. D., Papamastorakis J., Xilouris E. M., 2000, A&A, 353, 117[Mosenkov et al.2019]Mosenkov2019 Mosenkov A. V., et al., 2019, A&A, 622, A132 [Mosenkov et al.2022]Mosenkov2022 Mosenkov A. V., et al., 2022, MNRAS, 515, 5698 [Mowla et al.2019]Mowla2019 Mowla L. A., et al., 2019, ApJ, 880, 57 [Naddaf & Czerny2022]Naddaf2022 Naddaf M. H., Czerny B., 2022, A&A, 663, A77[Nakajima et al.2023]Nakajima2023 Nakajima K., Ouchi M., Isobe Y., Harikane Y., Zhang Y., Ono Y., Umeda H., et al., 2023, ApJS, 269, 33[Narayanan et al.2012]Narayanan2012 Narayanan D., Krumholz M. R., Ostriker E. C., Hernquist L., 2012, MNRAS, 421, 3127 [Nedkova et al.2021]Nedkova2021 Nedkova K. V., et al., 2021, MNRAS, 506, 928 [Nelson et al.2013]Nelson2013 Nelson E. J., et al., 2013, ApJL, 763, L16 [Nersesian et al.2019]Nersesian2019 Nersesian A., et al., 2019, A&A, 624, A80 [Noeske et al.2007]Noeske2007 Noeske K. G., et al., 2007, ApJL, 660, L43 [Oey et al.2003]Oey2003 Oey M. S., Parker J. S., Mikles V. 
J., Zhang X., 2003, AJ, 126, 2317 [Ono et al.2023]Ono2023 Ono Y., et al., 2023, ApJ, 951, 72 [Pan et al.2021]Pan2021 Pan Z., Wang J., Zheng X., Kong X., 2021, ApJ, 922, 235 [Pettini & Pagel2004]Pettini2004 Pettini M., Pagel B. E. J., 2004, MNRAS, 348, L59 [Popesso et al.2023]Popesso2023 Popesso P., et al., 2023, MNRAS, 519, 1526 [Price et al.2014]Price2014 Price S. H., et al., 2014, ApJ, 788, 86 [Qin et al.2019a]Qin2019a Qin J., Zheng X. Z., Wuyts S., Pan Z., Ren J., 2019, MNRAS, 485, 5733 [Qin et al.2019b]Qin2019b Qin J., Zheng X. Z., Wuyts S., Pan Z., Ren J., 2019, ApJ, 886, 28 [Qin et al.2022]Qin2022 Qin J., Zheng X. Z., Fang M., Pan Z., Wuyts S., Shi Y., Peng Y., et al., 2022, MNRAS, 511, 765[Rieke et al.2009]Rieke2009 Rieke G. H., Alonso-Herrero A., Weiner B. J., Pérez-González P. G., Blaylock M., Donley J. L., Marcillac D., 2009, ApJ, 692, 556 [Rodighiero et al.2014]Rodighiero2014 Rodighiero G., et al., 2014, MNRAS, 443, 19 [Rosolowsky et al.2021]Rosolowsky2021 Rosolowsky E., et al., 2021, MNRAS, 502, 1218 [Rémy-Ruyer et al.2014]Remy-Ruyer2014 Rémy-Ruyer A., et al., 2014, A&A, 563, A31 [Salim et al.2016]Salim2016 Salim S., et al., 2016, ApJS, 227, 2 [Salim & Narayanan2020]Salim2020 Salim S., Narayanan D., 2020, ARA&A, 58, 529 [Sanders et al.2023]Sanders2023 Sanders R. L., et al., 2023, ApJ, 942, 24 [Sanders et al.2021]Sanders2021 Sanders R. L., et al., 2021, ApJ, 914, 19 [Sandstrom et al.2013]Sandstrom2013 Sandstrom K. M., et al., 2013, ApJ, 777, 5 [Santoro et al.2022]Santoro2022 Santoro F., et al., 2022, A&A, 658, A188 [Schreiber et al.2015]Schreiber2015 Schreiber C., et al., 2015, A&A, 575, A74 [Schreiber et al.2018]Schreiber2018 Schreiber C., Elbaz D., Pannella M., Ciesla L., Wang T., Franco M., 2018, A&A, 609, A30 [Shapley et al.2022]Shapley2022 Shapley A. E., et al., 2022, ApJ, 926, 145[Shapley et al.2023]Shapley2023 Shapley A. E., Sanders R. L., Reddy N. A., Topping M. W., Brammer G. B., 2023, ApJ, 954, 157[Shivaei et al.2020]Shivaei2020 Shivaei I., et al., 2020, ApJ, 899, 117 [Simard et al.2011]Simard2011 Simard L., Mendel J. T., Patton D. R., Ellison S. L., McConnachie A. W., 2011, ApJS, 196, 11 [Smith et al.2016]Smith2016 Smith M. W. L., et al., 2016, MNRAS, 462, 331 [Smith et al.2015]Smith2015 Smith R., Flynn C., Candlish G. N., Fellhauer M., Gibson B. K., 2015, MNRAS, 448, 2934[Speagle et al.2014]Speagle2014 Speagle J. S., Steinhardt C. L., Capak P. L., Silverman J. D., 2014, ApJS, 214, 15 [Suess et al.2019]Suess2019 Suess K. A., Kriek M., Price S. H., Barro G., 2019, ApJ, 877, 103 [Tadaki et al.2020]Tadaki2020 Tadaki K.-. ichi ., et al., 2020, ApJ, 901, 74 [Thompson et al.2015]Thompson2015 Thompson T. A., Fabian A. C., Quataert E., Murray N., 2015, MNRAS, 449, 147 [Thorne et al.2021]Thorne2021 Thorne J. E., et al., 2021, MNRAS, 505, 540 [Tremonti et al.2004]Tremonti2004 Tremonti C. A., et al., 2004, ApJ, 613, 898 [Tuffs et al.2004]Tuffs2004 Tuffs R. J., Popescu C. C., Völk H. J., Kylafis N. D., Dopita M. A., 2004, A&A, 419, 821 [van der Giessen et al.2022]vanderGiessen2022 van der Giessen S. A., Leslie S. K., Groves B., Hodge J. A., Popescu C. C., Sargent M. T., Schinnerer E., Tuffs R. J., 2022, A&A, 662, A26 [van der Wel et al.2014a]van der Wel2014a van der Wel A., et al., 2014, ApJL, 792, L6 [van der Wel et al.2014b]van der Wel2014b van der Wel A., et al., 2014, ApJ, 788, 28[van der Wel et al.2024]van der Wel2023 van der Wel A., Martorano M., Häußler B., Nedkova K. V., Miller T. B., Brammer G. 
B., van de Ven G., et al., 2024, ApJ, 960, 53 [Whitaker et al.2012]Whitaker2012 Whitaker K. E., van Dokkum P. G., Brammer G., Franx M., 2012, ApJL, 754, L29 [Whitaker et al.2014]Whitaker2014 Whitaker K. E., et al., 2014, ApJ, 795, 104 [Whitaker et al.2017]Whitaker2017 Whitaker K. E., Pope A., Cybulski R., Casey C. M., Popping G., Yun M. S., 2017, ApJ, 850, 208 [Wild et al.2011]Wild2011 Wild V., Charlot S., Brinchmann J., Heckman T., Vince O., Pacifici C., Chevallard J., 2011, MNRAS, 417, 1760 [Wright et al.2010]Wright2010 Wright E. L., et al., 2010, AJ, 140, 1868 [Wuyts et al.2011]Wuyts2011 Wuyts S., et al., 2011, ApJ, 738, 106 [Wuyts et al.2012]Wuyts2012 Wuyts S., et al., 2012, ApJ, 753, 114 [Xiao et al.2012]Xiao2012 Xiao T., Wang T., Wang H., Zhou H., Lu H., Dong X., 2012, MNRAS, 421, 486 [Zahid et al.2012]Zahid2012 Zahid H. J., Dima G. I., Kewley L. J., Erb D. K., Davé R., 2012, ApJ, 757, 54 [Zahid et al.2013]Zahid2013 Zahid H. J., Yates R. M., Kewley L. J., Kudritzki R. P., 2013, ApJ, 763, 92 [Zahid et al.2014]Zahid2014 Zahid H. J., et al., 2014, ApJ, 792, 75 [Zahid et al.2017]Zahid2017 Zahid H. J., Kudritzki R.-P., Conroy C., Andrews B., Ho I.-T., 2017, ApJ, 847, 18 [Zhang et al.2023]Zhang2023 Zhang J., et al., 2023, MNRAS, 524, 4128 [Zheng et al.2009]Zheng2009 Zheng X. Z., et al., 2009, ApJ, 707, 1566 [Zhukovska2014]Zhukovska2014 Zhukovska S., 2014, A&A, 562, A76

§ THE INFLUENCE OF THE DISC THICKNESS

As mentioned in Section <ref>, the derivation of our geometry model equations is based on the assumption that galaxies are oriented face-on. We include the axial ratio in our model by interpreting it as a simple projection effect, i.e., an edge-on galaxy (axial ratio of b/a) with an observed (i.e., projected) dust surface density of Σ_dust is in our model treated as a face-on galaxy with a dust surface density of Σ_dust÷(b/a). This simplified approach ignores the fact that, for an inclined observer, the sightline to stars at any galactocentric radius will pierce through dust located at a range of galactocentric radii, an effect that will be more pronounced for thicker discs and/or closer to edge-on viewing angles. In other words, disc thickness may bias the projection effect. Does it affect our results?

We start by considering the τ_rh in Equation <ref>. At first, we assume a galaxy has a certain inclination θ with respect to the observer, defined such that θ = 0^∘ corresponds to a face-on viewing angle. We further treat the galaxy as an axisymmetric system. Rewriting τ_rh in Equation <ref> as τ_xyhθ, and introducing l as the integration variable capturing distance along the line of sight, we obtain

τ_xyhθ = ∫_l^∞ κρ_0 exp(-√(x^2+y^2)/R_d - |h|/H_d) dl = κΣ_d^diff/(4H_d) ∫_z^∞ exp(-√(x^2+(y+(h^'-h)tanθ)^2)/R_d - |h^'|/H_d) dh^'/cosθ,

where h'=h/H_d and Σ_d^diff refers to the galaxy-averaged surface density of the diffuse dust disc. The expression for the effective optical depth due to diffuse dust (Equation <ref>) should likewise be rewritten as

τ^diff = -ln ∫_-∞^∞ dx ∫_-∞^∞ dy ∫_-∞^∞ 1/(4π R_⋆^2 H_⋆) exp[-√(x^2+y^2)/R_⋆ - |h|/H_⋆ - τ_xyhθ] dh.

We can see that τ^diff is a function of κ, Σ_dust, R_⋆, H_⋆, R_d, H_d and θ. We note that we mainly focus on the influence of the relative thickness and do not care about the absolute value of the galaxy radius. Here we let R_d=5 kpc; we have checked that using another value of the radius does not alter our conclusions. According to our best-fitting model, we let R̂=Ĥ≈ 0.5, i.e., R_⋆=2.5 kpc, H_⋆=H_d/R_d×2.5 kpc.
The κ is fixed as 0.68× 10^-5 M_⊙^-1 kpc^2 in the V-band, following Section <ref>. Now, the optical depth of diffuse dust τ_diff depends on the face-on Σ_dust, the thickness H_d/R_d, and the inclination θ. We integrate the equations numerically to obtain the dust opacity at different inclinations. Besides, we project the three-dimensional galaxy disc onto a plane with a viewing angle of θ for a set of different thicknesses. We then use the Python package photutils[<https://photutils.readthedocs.io/en/stable/index.html>] to fit the projected ellipse to obtain the axial ratio.

The top-left panel of Fig. <ref> shows IRX as a function of Σ_dust for a disc-like galaxy with a thickness of 0.1. We can see that, at a given Σ_dust, more inclined galaxies have higher IRX. If the thickness increases to 1 (H_d=R_d, the top-right panel), the shape of the galaxy is close to a sphere, and therefore the dust opacity is no longer sensitive to changes in inclination. The bottom panels show IRX as a function of the projected dust surface density, determined by dividing Σ_dust by b/a. Surprisingly, at a given Σ_dust,proj, the dependence of IRX on inclination disappears for all thicknesses. Although our derivation of the model formulas is based on the face-on galaxy orientation, including the axial ratio via a simple projection approach thus ensures that our results are not affected.

§ DETERMINING THE NUMERICAL SOLUTION R̂^TOT(R̂, F_BC)

Here we derive the relation between the size ratio of the star-forming (SFR) disc and the dust disc when accounting for all dust (R̂^tot) or only diffuse dust (R̂). As we will demonstrate, the relation between the two varies with changing birth cloud dominance (F_bc). Again, we use no superscript, superscript bc, and superscript tot to denote the diffuse dust, BC dust and total dust components, respectively. We assume both the BC and dust discs follow exponential profiles,

ρ_d^bc(r) = F_bc/R_d^bc exp(-r/R_d^bc), ρ_d(r) = (1-F_bc)/R_d exp(-r/R_d),

such that the integrated masses of the BC and diffuse dust are F_bc and 1-F_bc, respectively (i.e., for simplicity we use a total dust mass of unity in our derivation). Then the total dust (BC+diffuse) profile is

ρ_d^tot(r) = F_bc/R_d^bc exp(-r/R_d^bc) + (1-F_bc)/R_d exp(-r/R_d).

Approximating the total profile as an exponential disc with ρ^tot(r) = 1/R_d^tot exp(-r/R_d^tot) (total dust mass is unity), we then have

1/R_d^tot exp(-r/R_d^tot) = F_bc/R_d^bc exp(-r/R_d^bc) + (1-F_bc)/R_d exp(-r/R_d).

Multiplying both sides of Equation <ref> by R_d and letting r^'=r/R_d, Equation <ref> can be rewritten as

R_d/R_d^tot exp(-r^' R_d/R_d^tot) = F_bc R_d/R_d^bc exp(-r^' R_d/R_d^bc) + (1-F_bc) exp(-r^').

Given that the BC dust disc is associated with the UV-emitting stellar disc, we can therefore write R̂=R_⋆/R_d=R_d^bc/R_d and R̂^tot=R_d^bc/R_d^tot. This allows Equation <ref> to be rewritten as

(R̂^tot/R̂) exp(-r^' R̂^tot/R̂) = (F_bc/R̂) exp(-r^'/R̂) + (1-F_bc) exp(-r^').

For a given F_bc and R̂, we thus have a profile for the total dust disc, and R̂^tot can be obtained by fitting this profile (within r^'<3). In other words, R̂^tot is a function of R̂ and F_bc. We show the numerically derived R̂^tot as a function of R̂ at different F_bc in Fig. <ref>. We can see that R̂^tot increases with R̂ and decreases with F_bc. When F_bc=1, i.e., there is no diffuse dust, R̂^tot=1; when F_bc=0, i.e., there is no BC dust, R̂^tot=R̂. It is worth noting that the total dust disc can be well described by an exponential profile in most cases; a minimal numerical sketch of this fitting step is given below.
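The fit can be reproduced in a few lines of code. The following is a minimal sketch of this numerical solution, assuming an unweighted least-squares fit on a linear grid over r^'<3 (the fit weighting is not specified above, and the function and variable names are ours):

```python
import numpy as np
from scipy.optimize import curve_fit

def r_hat_tot(r_hat, f_bc, r_max=3.0, n=300):
    """Numerically solve R^tot_hat(R_hat, F_bc) by fitting a single
    exponential t * exp(-t * r') to the combined BC + diffuse profile,
    where t = R^tot_hat / R_hat (see the equations above)."""
    rp = np.linspace(1e-3, r_max, n)                      # r' = r / R_d
    rho = f_bc / r_hat * np.exp(-rp / r_hat) + (1.0 - f_bc) * np.exp(-rp)
    (t,), _ = curve_fit(lambda r, t: t * np.exp(-t * r), rp, rho, p0=[1.0])
    return r_hat * t

# Limiting cases quoted in the text:
print(r_hat_tot(0.5, 1.0))   # no diffuse dust -> ~1.0
print(r_hat_tot(0.5, 0.0))   # no BC dust      -> ~0.5
```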
In some extreme cases, e.g., at R̂<0.3, the combined profile slightly deviates from an exponential form. This does not affect our results, since our model fitting suggests R̂≈0.5.

§ CALIBRATING THE TOTAL INFRARED LUMINOSITY FROM WISE MID-IR BANDS

We calibrate the total infrared luminosity by making use of the high-quality data from the DustPedia survey.[<http://www.dustpedia.astro.noa.gr/>] The total IR luminosities are taken from <cit.>, in which CIGALE was adopted to perform the energy-balance fitting and the THEMIS model was used to derive the dust properties. Galaxies that have large errors on either stellar mass, SFR or IR luminosity (i.e., >0.3 dex) were excluded from our analysis. To better sample the IR SED range and obtain reliable L_IR measurements, we require detections with a signal-to-noise ratio S/N>3 in at least three Herschel bands (70–500 μm). Our final sample contains 324 SFGs with secure detections in multiple bands. We give a new calibration of the total infrared luminosity [L_IR (8–1000 μm)] as a function of the combination of WISE 12 μm and 22 μm luminosities,

log L_IR = 1.19 + 0.97 × ( log L_22 + 0.7 ( log L_12 - log L_22 ) ),

where L_12 and L_22 are the monochromatic luminosities given by L_12=ν L_ν(12 μm) and L_22=ν L_ν(22 μm), in units of Solar luminosity. Fig. <ref> demonstrates that the scatter significantly decreases if two mid-IR bands are used to constrain the total IR luminosity.
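For convenience, the calibration can be applied directly in code; below is a minimal sketch (the function name is ours):

```python
def log_lir_from_wise(log_l12, log_l22):
    """log L_IR (8-1000 um) from the WISE 12 and 22 um monochromatic
    luminosities, log(nu L_nu) in Solar units, per the calibration above."""
    return 1.19 + 0.97 * (log_l22 + 0.7 * (log_l12 - log_l22))

# Example: log L_12 = 9.5 and log L_22 = 9.3 give log L_IR ~ 10.35.
print(log_lir_from_wise(9.5, 9.3))
```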
"authors": [
"J. Qin",
"X. Z. Zheng",
"S. Wuyts",
"Z. Lv",
"M. Qiao",
"J. -S. Huang",
"F. S. Liu",
"A. Katsianis",
"V. Gonzalez",
"F. Bian",
"H. Xu",
"Z. Pan",
"W. Liu",
"Q. -H. Tan",
"F. X. An",
"D. D. Shi",
"Y. Zhang",
"R. Wen",
"S. Liu",
"C. Yang"
],
"categories": [
"astro-ph.GA",
"astro-ph.CO"
],
"primary_category": "astro-ph.GA",
"published": "20231227194156",
"title": "Understanding the Universal Dust Attenuation Scaling Relation of Star-Forming Galaxies"
} |
Few-shot segmentation aims to accurately segment novel target objects within query images using only a limited number of annotated support images. Recent works exploit the support background as well as its foreground to precisely compute the dense correlations between query and support. However, they overlook the characteristic of the background that it generally contains various types of objects. In this paper, we highlight this characteristic of the background, which can bring problematic cases as follows: (1) when the query and support backgrounds are dissimilar, and (2) when objects in the support background are similar to the target object in the query. Without any consideration of the above cases, adopting the entire support background leads to a misprediction of the query foreground as background. To address this issue, we propose Task-disruptive Background Suppression (TBS), a module to suppress those disruptive support background features based on two spatial-wise scores: query-relevant and target-relevant scores. The former aims to mitigate the impact of unshared features solely existing in the support background, while the latter aims to reduce the influence of target-similar support background features. Based on these two scores, we define a query background-relevant score that captures the similarity between the backgrounds of the query and the support, and utilize it to scale support background features to adaptively restrict the impact of disruptive support backgrounds. Our proposed method achieves state-of-the-art performance on the PASCAL-5^i and COCO-20^i datasets for 1-shot segmentation. Our official code is available at github.com/SuhoPark0706/TBSNet.

§ INTRODUCTION

With the advance of deep learning, semantic segmentation <cit.> has achieved remarkable performance. However, its performance relies on abundant data for the target classes and degrades noticeably with insufficient data. To resolve this, Few-Shot Segmentation (FSS) has been proposed <cit.> to build a model adaptable to novel classes with only a small amount of labeled data. Briefly, FSS aims to learn a novel class from a small number of labeled images, called a support set, in order to segment an unlabeled image, called a query set. In Few-Shot Segmentation, affinity learning is a mainstream technique <cit.>, learning pixel-wise correlations. Early methods <cit.> utilize only the support foreground (SF) features to compute the correlations with the query features (Q). However, they overlook that the support background (SB) features also contain contextual information helpful for distinguishing between the query foreground (QF) and background (QB). For instance, the sky can be particularly useful for segmenting airplanes. Therefore, the most recent techniques <cit.> leverage the entire set of support features (S), encompassing both foreground (SF) and background (SB). However, it is important to note that not every SB feature is beneficial for distinguishing between the QF and QB.
Since a background can contain diverse objects, dissimilarity between the QB and SB can disrupt the segmentation. Moreover, the SB may include objects similar to the target object in the query image. In such cases, the high similarity between the SB and QF may mislead the model into predicting QF pixels as QB. Therefore, we need to filter out these harmful SB pixels. In this context, we propose Task-disruptive Background Suppression (TBS), a module designed to suppress harmful background features within a support set to enhance the segmentation of the query. Specifically, to determine the utility of background features, TBS defines two spatial-wise scores: query-relevant and target-relevant scores. The query-relevant score measures the similarity between each SB feature and the entire set of Q features, calculated using a cross-attention module <cit.>. This score allows us to identify SB pixels relevant to the Q image while filtering out irrelevant ones. However, SB features similar to the QF may still persist, potentially leading to a misprediction of the QF mask. To address this, we introduce the target-relevant score for each SB feature, indicating its degree of similarity with the target object features in the support. Similar to the query-relevant score, the target-relevant score is derived through cross-attention between each SB feature and the SF features. By combining these two scores, we define a query background-relevant score: the relevance of each SB feature to the QB. As a result, we suppress task-disruptive SB features that are irrelevant to a given query or similar to the target object class. This is achieved by multiplying these scores with the SB features. To sum up, our contributions are summarized as follows:

* For the first time, we define the usefulness of support background features based on the relation between the background regions of query and support images.
* We propose a novel module, Task-disruptive Background Suppression (TBS), that restricts background features within the support set that do not contribute to precise segmentation of the query image.
* Our method achieves state-of-the-art performance over baselines, and its effectiveness is validated by various ablation studies.

§ RELATED WORK

§.§ Few-Shot Segmentation

The methods of Few-Shot Segmentation (FSS) can be categorized into two main streams: prototype-based and affinity learning methods. Prototype-based methods <cit.> represent the foreground objects in the support set as single or multiple prototypes. They classify the pixels of the query image into foreground and background based on their similarity to the prototypes. However, these methods may lead to deteriorated segmentation results, since they lose information about the support objects while summarizing the images with only a few representative features. On the other hand, affinity learning methods <cit.> leverage pixel-level dense correlations between the object features of the support set and the query features. Moreover, recent techniques <cit.> have found that the background features of the support set are also useful for distinguishing between the foreground and background of the query. Therefore, they compute correlations with query features using not only the object features but also the background features. Although they show impressive performance by exploiting whole support features, they still overlook that some background features are not useful for classifying the foreground and background of the query image.
In contrast, our method assigns low weights to those task-disruptive background features to prevent pixels of the query image from being misclassified.

§.§ Feature Suppression in FSS

The cross-attention module in few-shot segmentation is generally employed to constrain support features that are disruptive for distinguishing between the query foreground and background <cit.>. For example, CyCTR <cit.> utilizes bidirectional cross-attention to identify the most similar support feature for each query feature, and vice versa. Consequently, when the category of a support feature (the starting feature) differs from the class of the most similar support feature (the ending feature) reached via the corresponding query feature, the starting feature is identified as potentially disruptive and is mitigated. Similarly, ABCNet <cit.> suppresses unhelpful support features by indirectly comparing the support and query features through cross-attention with a reference pool. However, these methods may struggle to demonstrate effectiveness, especially when the support background is similar to the query foreground. This limitation arises because they do not consider the categorically derived relationship between features. Although our method also restrains disruptive support features based on cross-attention like existing methods, we utilize the relationship between the support foreground and background for the first time. By incorporating this novel relationship in addition to the relation between the support and query, we can discover task-disruptive support features while considering only the similarity between the support background and the query background.

§ OUR METHOD

The overall architecture of our method is illustrated in <ref>. Affinity learning methods in FSS require precise pairing between the query and support features, especially based on their binary class (foreground and background). However, as noted in the Introduction, the backgrounds of different images typically consist of various objects and may not share common characteristics, in contrast to the foregrounds. Thus, in this paper, we aim to refine the support background features with two conditions in mind. First, support background features similar to the query background features are preferred. Second, the support background features should be distinguishable from the query object features. In other words, these two conditions intend to reduce the gap between the backgrounds of support and query, and simultaneously enhance the disparity between background and foreground, encouraging the query and support features to be well-clustered according to their binary class. To meet these requirements, we introduce two representativeness scores: a query-relevant score R^Q and a target-relevant score R^T, which are pixel-level importance scores for support background features. Specifically, R^Q signifies whether the support background features can describe the query features. Thus, support background features well represented by the query features will have a high R^Q. However, since the query features used in computing R^Q contain both the query background and foreground, it cannot be guaranteed that R^Q is derived only from the background. To resolve this, we define R^T, which is highly activated when the support background features are similar to the target object features in the support set.
By subtracting R^T from R^Q, we filter out the scores in R^Q that are activated by the query target object. We define the filtered scores as the background-relevant score R^B, which restrains task-disruptive support background features containing information about the target object. We utilize R^B to selectively suppress the influence of the support background features. In the following sections, we provide the problem definition and a detailed explanation of the aforementioned scores for refining the support background features.

§.§ Problem Definition

As a standard formulation of the few-shot segmentation problem, we have two disjoint datasets: D_train for training a model and D_test for evaluating the learned model. The datasets consist of distinct object classes C_train and C_test, respectively, without any overlap (C_train∩ C_test=∅). Generally, the training and testing of few-shot segmentation are composed of several episodes. Each episode consists of K labeled images and an unlabeled image, i.e., a K-shot episode, where all images contain objects of the same category c randomly sampled from the dataset. Specifically, the labeled images are called a support set S={(I^S_j,M^S_j)}^K_j=1, and the unlabeled image is named a query set Q=(I^Q,M^Q). Here, I is an image and M denotes a corresponding ground-truth binary mask, containing a value of 1 for foreground object regions belonging to category c, and 0 for the others. The goal of few-shot segmentation is to predict M^Q based on S.

§.§ Spatial-wise Representativeness Scores

For the features of the j-th support image, F^S_j, computed by a feature extractor, we spatially divide F^S_j into support object features F^O_j and support background features F^B_j by the ground-truth mask M^S_j, as follows:

F^O_j = {f^S_j,h,w | m^S_j,h,w = 1}, F^B_j = {f^S_j,h,w | m^S_j,h,w = 0},

where f^S_j,h,w and m^S_j,h,w denote the values spatially located at (h,w) of F^S_j and M^S_j, respectively. Of these two kinds of features, F^O_j can be treated as valuable for the task, since it contains only the categorical information of the target objects that we need to detect in the query set Q. In contrast, F^B_j may include both useful and unuseful features, so it is crucial to prevent the unuseful features within F^B_j from being utilized in the segmentation process. Therefore, we introduce a query-relevant score R^Q measuring how well each pixel represents query-relevant information. To measure the relevance of each pixel, we utilize the cross-attention module <cit.>. However, unlike standard cross-attention, we reconstruct the support features F^S_j based on the query features F^Q. Specifically, the reconstructed support features F̅^S_j are computed as follows:

F̅^S_j = 𝕍(F^Q) Softmax(ℚ(F^S_j) 𝕂(F^Q)^T/√(d)),

where d is the channel dimension of the projection, and ℚ, 𝕂, and 𝕍 are linear heads for query, key, and value, respectively. Since the reconstruction quality of F^S_j will be high when a support feature is similar to the query features, we define the similarity between F̅^S_j and 𝕍(F^S_j), after an additional linear projection, as R^Q_j, the query-relevance score of the j-th support features:

R^Q_j = CS(L(F̅^S_j), L(𝕍(F^S_j))),

where CS(·,·) is the cosine-similarity function and L is a linear projection layer. A minimal sketch of this score computation is given below. Although R^Q estimates how relevant each pixel of the support features is to the query features, it is not solely derived from its similarity with the query background.
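The following is a minimal single-head PyTorch sketch of this computation over flattened spatial tokens; the tensor shapes and names are our own conventions, and q, k, v, l stand for the linear heads ℚ, 𝕂, 𝕍 and L:

```python
import torch
import torch.nn.functional as F

def relevance_score(f_s, f_ref, q, k, v, l, d):
    """Reconstruct support features from reference features via
    cross-attention, then score each support pixel by the cosine
    similarity between its reconstruction and its own projection.
    f_s: (N_s, C) support features; f_ref: (N_ref, C) reference features."""
    attn = torch.softmax(q(f_s) @ k(f_ref).T / d ** 0.5, dim=-1)  # (N_s, N_ref)
    f_rec = attn @ v(f_ref)                 # reconstructed support features
    return F.cosine_similarity(l(f_rec), l(v(f_s)), dim=-1)      # (N_s,)
```

Setting f_ref to the query features F^Q yields R^Q; as described next, the same routine with the shared heads yields R^T when f_ref is replaced by the support object features.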
Relying on the query-relevance score alone can therefore be problematic: high similarity between the support background and the query object can induce pixels of the query objects to be predicted as background. To resolve this issue, we define a complementary score, the target-relevant score R^T, which is highly activated when a support background feature is similar to the target object features. We first compute reconstructed support features F̂^S_j from the support object features F^O_j:

F̂^S_j = 𝕍(F^O_j) Softmax(ℚ(F^S_j) 𝕂(F^O_j)^T/√(d)).

Then, R^T_j is determined as the similarity between F̂^S_j and 𝕍(F^S_j), as follows:

R^T_j = CS(L(F̂^S_j), L(𝕍(F^S_j))).

Note that the parameters of the projection heads L, ℚ, 𝕂, and 𝕍 are shared with Eq. <ref>, so R^T_j can be computed with the same routine as sketched above by replacing the query features with F^O_j.

§.§ Task-disruptive Background Suppression

We first define a query background-relevant score R^B through the subtraction of R^T from R^Q, and multiply it element-wise with the support background mask 1 - M^S. Since the subtraction removes target-object relevance from query relevance, the score R^B reflects the similarity of each pixel in the support background features to the query background features. We convert this score into spatial-wise weights to suppress disruptive support background features:

R̃^B_j = b(R^B_j),

where b is a shallow convolution block for refinement, whose architecture is explained in <ref>. Recall that our goal is to refine the support background features to resolve the undesirable background-matching problem. However, directly multiplying the score map R̃^B_j with the support features F^S_j would result in incomplete preservation of the foreground features. To prevent this, we replace the values of R̃^B_j corresponding to M^O_j with 1, denoting the result as Ṙ^B_j. Consequently, we obtain adaptive support features A^S_j by multiplying Ṙ^B_j with F^S_j, and we utilize A^S_j in the subsequent segmentation process instead of F^S_j. A compact sketch of this suppression step is provided in the implementation details below.

§ EXPERIMENTS

§.§ Datasets and Evaluation Metrics

We utilize PASCAL-5^i <cit.> and COCO-20^i <cit.>, following prior works <cit.>. PASCAL-5^i combines data from PASCAL VOC 2012 <cit.> and SDS <cit.>, comprising 20 categories. In contrast, COCO-20^i is a subset of COCO <cit.> and comprises 80 categories. Each dataset is divided into four folds with equal numbers of non-overlapping categories; hence, each fold of PASCAL-5^i and COCO-20^i has 5 and 20 classes, respectively. To evaluate the model's adaptability to novel classes, we adopt a cross-validation scheme where each fold is selected in turn as D_test and the others are used as D_train. We then evaluate the model with mean intersection over union (mIoU) and foreground-background intersection over union (FB-IoU) on 1000 episodes randomly sampled from D_test.

§.§ Implementation Details

To verify the high adaptability of TBS, we apply it to two baseline models: DCAMA <cit.> and CyCTR <cit.>. For fair comparisons with the baselines, we adopt ResNet-101 pretrained on ImageNet and Swin Transformer pretrained on ImageNet-1K as feature extractors. In the case of DCAMA with Swin Transformer, we apply TBS at scales of 1/8, 1/16, and 1/32 to align with the scales used in DCAMA's cross-attention mechanism. However, for DCAMA with ResNet-101, we utilize TBS only at the 1/16 and 1/32 scales due to memory limitations. On the other hand, since CyCTR was verified only on ResNet, we conduct experiments with it on ResNet-101, not Swin Transformer.
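Below is a compact PyTorch sketch of the suppression step described above; the tensor layout is our assumption, and the refinement block b follows the convolution-block description given later in this section:

```python
import torch
import torch.nn as nn

class RefineBlock(nn.Module):
    """Shallow refinement block b: concat(score, LayerNorm(score)) ->
    1x1 conv (to 256 channels) -> 1x1 conv (to 1 channel) -> sigmoid."""
    def __init__(self, height, width):
        super().__init__()
        self.norm = nn.LayerNorm((height, width))
        self.convs = nn.Sequential(nn.Conv2d(2, 256, 1), nn.Conv2d(256, 1, 1))

    def forward(self, r_b):                          # r_b: (B, 1, H, W)
        x = torch.cat([r_b, self.norm(r_b)], dim=1)
        return torch.sigmoid(self.convs(x))          # weights in (0, 1)

def suppress_background(f_s, r_q, r_t, m_o, refine):
    """f_s: (B, C, H, W) support features; r_q, r_t: (B, 1, H, W) scores;
    m_o: (B, 1, H, W) binary support foreground (object) mask."""
    r_b = (r_q - r_t) * (1 - m_o)                    # R^B on the background
    w = refine(r_b)                                  # \tilde{R}^B
    w = torch.where(m_o.bool(), torch.ones_like(w), w)   # \dot{R}^B
    return f_s * w                                   # adaptive features A^S
```

For instance, with features at the 1/16 scale one would build refine = RefineBlock(h, w) once per scale and apply suppress_background to each support image's features before the segmentation head.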
Unlike DCAMA, which adopts multi-level features, CyCTR utilizes single-level features generated by combining the features from the 3rd and 4th blocks. Therefore, we suppress only those combined features with TBS. Many hyper-parameters, e.g., the optimizer, learning rate, and batch size, are the same as in the baselines. We now describe the flow of the convolution block for converting the query background-relevant score into the spatial-wise weight. It first concatenates the input score map with a layer-normalized copy of the score map, to reference the spatial-wise distribution of scores. Subsequently, two consecutive 1×1 convolution layers without any non-linear activation are employed; they project the input score maps into 256 channels and then squeeze them back into 1 channel. After that, a sigmoid function maps the score map into the range between 0 and 1.

§.§ Experimental Results

Quantitative Results. We evaluate our proposed method by comparing it with previous techniques designed for few-shot segmentation. As illustrated in <ref>, recent affinity learning models, specifically CyCTR and DCAMA, already exhibit comparable performance. Upon incorporating TBS into these approaches, a consistent improvement over the baseline performance is observed, resulting in state-of-the-art scores. This improvement remains consistent across various evaluation metrics and different quantities of labeled images on the PASCAL-5^i dataset. Similar trends are observed in the 1-shot scenario of COCO-20^i. As demonstrated in <ref>, TBS consistently enhances DCAMA's performance across all folds, providing the best performance. While its impact is less pronounced in the 5-shot scenario compared to the 1-shot scenario, where it shows substantial effectiveness, TBS still succeeds in improving the average mIoU of DCAMA. As a result, TBS surpasses the existing state-of-the-art performance in three out of four quantitative benchmark scenarios in the context of few-shot segmentation. This verifies the effectiveness of suppressing disruptive support features, particularly in situations of extreme data scarcity.

Qualitative Results. In addition to the quantitative results, we report qualitative results to intuitively show the effectiveness of TBS. Compared with DCAMA, our results include fewer mispredicted pixels regardless of the number of support images, as shown in <ref>. In particular, when objects in the support background are not present in the query background, our model outperforms DCAMA. This validates that our method appropriately suppresses unnecessary support background. An additional in-depth analysis is provided in <ref>.

§ FURTHER ANALYSIS

In this section, we conduct ablation studies and provide an in-depth analysis of our method. For most ablation studies, we use the PASCAL-5^i dataset in the 1-shot scenario with Swin-B as the backbone network, except for <ref>. Additionally, mIoU is adopted as the metric, as it is one of the most standard metrics in few-shot segmentation.

§.§ Ablation Study on Main Components

<ref> presents the effects of the query- and target-relevant scores on fold 1. As shown in the second row, the query-relevant score alone reduces mIoU by 0.4%. We argue that the query-relevant score can inherently have a negative effect by emphasizing parts of the support background that are similar to the target object. Therefore, suppressing the support background with only the query-relevant score might be harmful to performance.
Thus, the query-relevant score should be used in conjunction with the target-relevant score.On the contrary, as shown in the third row, the target-relevant score enhances the baseline by 0.3%.This means that some support background similar to the target object resembles the query foreground, making feature suppression advantageous.Importantly, adopting both scores can further boost performance, affirming their complementary nature. §.§ Varying K for K-shot We validated the merits of our method using the standard evaluation protocol with 1 and 5 labeled images (i.e., K=1 and 5). We also conducted experiments with varying numbers of labeled images, and the results are provided in <ref>. As reported, the benefits of our method are especially highlighted at low shots in terms of relative performance improvements.This confirms that our method is notably more effective in scenarios with severe data scarcity, aligning well with the requirements of few-shot learning. §.§ Quality of Feature Matching Following our motivation, TBS suppresses the task-disruptive features, enhancing the similarity between query and support foreground (QF and SF), and also between query and support background (QB and SB). To analyze these changes in similarity caused by TBS, we examine the cross-attention map in the segmentation model (DCAMA).Specifically, we average the attention scores corresponding to the QF & SF pairs to capture their similarity, and perform the same computation for the QB & SB pairs as well. As shown in <ref>, we observe that the proposed method achieves a higher averaged attention score compared to the baseline. This observation implies that ours successfully improves the similarity between foreground objects. However, for the attention score of background pairs, ours achieves a lower score compared to the baseline. We suspect that unintended suppression of useful background occurs, resulting in lower attention scores of background pairs. Nevertheless, we verify the improvement when averaging the scores of both pairs and highlight the significant enhancement in attention scores of foreground pairs. §.§ Visualization of Background-relevant Score We visualize background-relevant scores under diverse conditions to verify the effectiveness of TBS in mitigating task-disruptive support background regions, as shown in <ref>. In the first scenario, where objects do not exist in the support background, as demonstrated in <ref> (a), TBS only restrains the support object boundaries that are treated as background. The next scenario is when objects within the support background are not present in the query background. As shown in <ref> (b), TBS assigns low scores to these regions, demonstrating the impact of query-relevant scoring. In the last scenario, shared objects are present in both the support and query backgrounds. These shared objects act as helpful features since they enhance the similarity between query and support backgrounds. As depicted in <ref> (c), we verify that TBS grants high scores to leverage these features for segmentation. More importantly, TBS exhibits its effectiveness even in scenarios where relevant and irrelevant objects coexist within the support background (last row of <ref> (c)). In such a case, we expect that the segmentation model may exploit only the relevant objects while suppressing the irrelevant ones to enhance background similarity.
As shown in the last row of <ref> (c), although the support backgrounds contain both people and a dog, only the dog is suppressed, as people are present in the query image while the dog is not. To sum up, TBS effectively suppresses task-disruptive support backgrounds in various conditions.Additionally, we compare background-relevant scores before and after the convolution block to analyze the effectiveness of the refinement module. Notably, scores before refinement tend to have small variations between background-relevant and -irrelevant objects, potentially hindering effective suppression due to a subtle difference in suppression power. In contrast, the refined score maps present a wider spectrum of magnitudes, decisively influencing the determination of whether pixels in the support background warrant suppression. Furthermore, without the refinement module, the score distribution across the query-support image pairs exhibits significant diversity. This disparity is evident when comparing the background score distributions from <ref> (a) and (c). Conversely, the distribution of scores after refinement attains consistency across the query-support pairs, enhancing their stability as input for the segmentation model. § CONCLUSION In this paper, we introduce a Task-Disruptive Background Suppression module designed to mitigate the problem of query-irrelevant and target-similar features in support background regions.To suppress such features, we present two types of scores.First, a query-relevant score is employed to filter out irrelevant support background pixels whose similarity to the query image is low.Second, a target-relevant score is used for detecting support backgrounds that are similar to the support foreground. Based on these two score maps, we can suppress task-disruptive backgrounds in the support set. Finally, experiments conducted on standard benchmarks show the effectiveness of our model.§ ACKNOWLEDGMENTS This work was supported in part by MSIT/IITP (No. 2022-0-00680, 2019-0-00421, 2020-0-01821, 2021-0-02068), and MSIT&KNPA/KIPoT (Police Lab 2.0, No. 210121M06). | http://arxiv.org/abs/2312.15894v1 | {
"authors": [
"Suho Park",
"SuBeen Lee",
"Sangeek Hyun",
"Hyun Seok Seong",
"Jae-Pil Heo"
],
"categories": [
"cs.CV"
],
"primary_category": "cs.CV",
"published": "20231226055536",
"title": "Task-Disruptive Background Suppression for Few-Shot Segmentation"
} |
1Shanghai Jiao Tong University, 2Xiaohongshu Inc., 3Beijing University Of Posts And Telecommunications, 4Carnegie Mellon University, 5ShanghaiTech University [1]Work done during internship at Xiaohongshu Inc. [2]Equal contribution. [3]Corresponding author.Recent advancements in subject-driven image generation have led to zero-shot generation, yet precise selection and focus on crucial subject representations remain challenging. Addressing this, we introduce the SSR-Encoder, a novel architecture designed for selectively capturing any subject from single or multiple reference images. It responds to various query modalities including text and masks, without necessitating test-time fine-tuning. The SSR-Encoder combines a Token-to-Patch Aligner that aligns query inputs with image patches and a Detail-Preserving Subject Encoder for extracting and preserving fine features of the subjects, thereby generating subject embeddings. These embeddings, used in conjunction with original text embeddings, condition the generation process. Characterized by its model generalizability and efficiency, the SSR-Encoder adapts to a range of custom models and control modules. Enhanced by the Embedding Consistency Regularization Loss for improved training, our extensive experiments demonstrate its effectiveness in versatile and high-quality image generation, indicating its broad applicability. Project page: <ssr-encoder.github.io>§ INTRODUCTION Recent advancements in image generation, especially with the advent of text-to-image diffusion models trained on extensive datasets, have revolutionized this field. A prime example is Stable Diffusion, an open-source model <cit.>, which allows a broad user base to easily generate images from textual prompts. A growing area of interest is subject-driven generation, where the focus shifts from creating a generic subject, like “a cat”, to generating a specific instance, such as “the cat”. However, crafting the perfect text prompt to generate the desired subject content poses a significant challenge. Consequently, researchers are exploring various strategies for effective subject-driven generation.Subject-driven image generation aims to learn subjects from reference images and generate images aligning with specific concepts like identity and style. Currently, one prominent approach involves test-time fine-tuning <cit.>, which, while effective, requires substantial computational resources to learn each new subject. Another approach <cit.> encodes the reference image into an image embedding to bypass the fine-tuning cost. However, these encoder-based models typically require joint training with the base diffusion model, limiting their generalizability. A concurrent work, IP-adapter <cit.>, tackles both fine-tuning costs and generalizability by learning a projection to inject image information into the U-Net, avoiding the need to fine-tune the base text-to-image model, thereby broadening its application in personalized models.Despite these advancements, a critical aspect often overlooked is the extraction of the most informative representation of a subject.
With images being a complex mixture of subjects, backgrounds, and styles, it is vital to focus on the most crucial elements to represent a subject effectively. To address this, we introduce the SSR-Encoder, an image encoder that generates Selective Subject Representations for subject-driven image generation. Our SSR-Encoder first aligns patch-level visual embeddings with texts in a learnable manner, capturing detailed subject embeddings guided by token-to-patch attention maps. Furthermore, we propose subject-conditioned generation, which utilizes trainable copies of cross-attention layers to inject multi-scale subject information. A novel Embedding Consistency Regularization Loss is proposed to enhance the alignment between text queries and visual representations in our subject embedding space during training. This approach not only ensures effective token-to-patch alignment but also allows for flexible subject selection through text and mask queries during inference. Our SSR-Encoder can be seamlessly integrated into any customized stable diffusion models without extensive fine-tuning. Moreover, the SSR-Encoder is adaptable for controllable generation with various additional controls, as illustrated in Fig. <ref>.We summarize our main contributions as follows: * We propose a novel framework, termed SSR-Encoder, for selective subject-driven image generation. It allows selective single- or multiple-subject generation, fully compatible with ControlNets (e.g. canny, OpenPose, etc.), and customized stable diffusion models without extra test-time training. * A Token-to-Patch Aligner and a Detail-Preserving Subject Encoder are proposed in our SSR-Encoder to learn selective subject embeddings. We also present an Embedding Consistency Regularization Loss to enhance token-to-patch text-image alignment in the subject embedding space. * Our extensive experiments have validated the robustness and flexibility of our approach, showcasing its capability to deliver state-of-the-art (SOTA) results among finetuning-free methods. Impressively, it also demonstrates competitive performance when compared with finetuning-based methods. § RELATED WORK Text-to-image diffusion models. In recent years, text-to-image generation <cit.> has made remarkable progress, particularly with the advent of diffusion models, which have propelled text-to-image generation to large-scale commercialization. DALLE <cit.> first achieved stunning image generation results using an autoregressive model. Subsequently, DALLE2 <cit.> employed a diffusion model as the generative model, further enhancing text-to-image synthesis ability. Imagen <cit.> and Stable Diffusion <cit.> trained diffusion models on larger datasets, further advancing the development of diffusion models and becoming the mainstream for large-scale image generation models. DeepFloyd IF <cit.> utilized a triple-cascade diffusion model, significantly enhancing the text-to-image generation capability, and even generating correct fonts. Stable Diffusion XL <cit.>, a two-stage cascade diffusion model, is the latest optimized version of Stable Diffusion, greatly improving the generation of high-frequency details, small object features, and overall image color. Controllable image generation.Current diffusion models can incorporate additional modules, enabling image generation guided by multimodal image information such as edges, depth maps, and segmentation maps. These multimodal inputs significantly enhance the controllability of the diffusion model's image generation process.
Methods like ControlNet <cit.> utilize a duplicate U-Net structure with trainable parameters while keeping the original U-Net parameters static, facilitating controllable generation with other modal information. T2I-adapter <cit.> employs a lightweight adapter for controlling layout and style using different modal images. Uni-ControlNet <cit.> differentiates between local and global control conditions, employing separate modules for injecting these control inputs. Paint by Example <cit.> allows for specific region editing based on reference images. Other methods <cit.> manipulate the attention layers in the diffusion model's denoising U-Net to direct the generation process. P2P <cit.> and Null Text Inversion <cit.> adjust cross-attention maps to preserve image layout under varying text prompts.Subject-driven image generation. Subject-driven image generation methods generally fall into two categories: those requiring test-time finetuning and those that do not. The differences in characteristics of these methods are illustrated in Table <ref>. Test-time finetuning methods <cit.> often optimize additional text embeddings or directly fine-tune the model to fit the desired subject. For instance, Textual Inversion <cit.> optimizes additional text embeddings, whereas DreamBooth <cit.> adjusts the entire U-Net in the diffusion model. Other methods like Custom Diffusion <cit.> and SVDiff <cit.> minimize the parameters needing finetuning, reducing computational demands. Finetuning-free methods <cit.> typically train an additional structure to encode the reference image into embeddings or image prompts without additional finetuning. ELITE <cit.> proposes global and local mapping training schemes to generate subject-driven images but lacks fidelity. InstantBooth <cit.> proposes an adapter structure inserted in the U-Net and trained on domain-specific data to achieve domain-specific subject-driven image generation without finetuning. IP-adapter <cit.> encodes images into prompts for subject-driven generation. BLIP-Diffusion <cit.> enables efficient finetuning or zero-shot setups. However, many of these methods either utilize all information from a single image, leading to ambiguous subject representation, or require finetuning, limiting generalizability and increasing time consumption. In contrast, our SSR-Encoder is both generalizable and efficient, guiding any customized diffusion model to generate images based on the representations selected by query inputs without any test-time finetuning.§ THE PROPOSED METHOD Selective subject-driven image generation aims to generate target subjects in a reference image with high fidelity and creative editability, guided by the user's specific queries (text or mask). To tackle this, we propose our SSR-Encoder, a specialized framework designed to integrate with any custom diffusion model without necessitating test-time fine-tuning.Formally, for a given reference image 𝐼 and a user query 𝑞, the SSR-Encoder effectively captures subject-specific information and generates multi-scale subject embeddings 𝑐_𝑠. These multi-scale subject embeddings 𝑐_𝑠 are subsequently integrated into the U-Net model with trainable copies of cross-attention layers. The generation process, conditioned on both subject embeddings c_s and text embedding c_t, allows for the production of desired subjects with high fidelity and creative editability. The overall methodology is illustrated in Fig.
<ref>.In general, SSR-Encoder is built on text-to-image diffusion models <cit.>[Reviewed in the Supplementary.]. It comprises two key components: the token-to-patch aligner and the detail-preserving subject encoder (Sec. <ref>). The subject-conditioned generation process is detailed in Sec. <ref>. Lastly, training strategies and loss functions are presented in Sec. <ref>. §.§ Selective Subject Representation Encoder Our Selective Subject Representation Encoder (SSR-Encoder) is composed of two integral parts: the Token-to-Patch Aligner and the Detail-Preserving Subject Encoder. The details of each component are as follows. Token-to-patch aligner. Several works <cit.> have pointed out that CLIP tends to prioritize background regions over foreground subjects when identifying target categories. Therefore, relying solely on text-image similarity may not adequately capture subject-specific information. To address this issue, we propose the Token-to-Patch (T2P) Aligner, which implements two trainable linear projections to align image patch features with given text token features. Mathematically, given a query text-image pair (𝑞, 𝐼), we employ pre-trained CLIP encoders to encode the text query and the reference image into a query embedding 𝑧_𝑞∈ℝ^N_q × D_q and a semantic visual embedding 𝑧_0∈ℝ^N_i × D_i from the last CLIP layer, respectively, where N_(·) and D_(·) represent the number of tokens and dimensions for query and image features, respectively. We then use the trainable projection layers 𝐖^𝐐 and 𝐖^𝐊 to transform them into a well-aligned space. The alignment is illustrated as follows: 𝑄 =𝐖^𝐐·𝑧_𝑞,𝐾 =𝐖^𝐊·𝑧_0,𝐴_𝑡2𝑝=Softmax(𝑄𝐾^⊤/√(d)),where 𝐴_𝑡2𝑝∈ℝ^N_t × N_i represents the token-to-patch attention map. Furthermore, the 𝐴_𝑡2𝑝 matrix serves a dual purpose: similarity identification and region selection. Consequently, our aligner naturally supports mask-based queries. In practice, we can manually assign a mask 𝑀 to 𝐴_𝑡2𝑝 for mask-guided generation with null-text query inputs. Following Eq. (<ref>), we can proceed to reweight 𝐴_𝑡2𝑝 using the predefined mask 𝑀 to highlight selected regions, ensuring our SSR-Encoder focuses solely on the selected valid regions of reference images. Detail-preserving subject encoder.Following most of the preceding methods <cit.>, we employ a pre-trained CLIP visual backbone to extract image representations from reference images. However, the conventional practice of extracting visual embeddings z_0 from the last CLIP layer does not align with our objective of preserving fine details to the maximum extent. Our preliminary experiments[Detailed in the supplementary.] have identified a notable loss of fine-grained details in the semantic image features z_0. Addressing this, we introduce the detail-preserving subject encoder, which extracts features across various layers to preserve more fine-grained details. Formally, the visual backbone processes an image 𝐼 to produce multi-scale detailed image features 𝑧_𝐼 = {𝑧_𝑘}_k=0^K, where 𝑧_0 represents the semantic visual embedding used in the T2P aligner, 𝑧_𝑘 represents the other detailed visual embeddings at scale k in the CLIP visual backbone, and K refers to the number of target scales. We set K to 6 in all experimental settings.To fully leverage the benefits of multi-scale representation, we adopt separate linear projections 𝐖_𝐤^𝐕 for image features 𝑧_𝑘 at different scales. Combining with the token-to-patch attention map 𝐴_𝑡2𝑝, the subject embeddings 𝑐_𝑠={𝑐_𝑠^𝑘}^K_k=0 are computed as per Eq.
(<ref>):𝑉_𝑘=𝐖^𝐕_𝐤·𝑧_𝑘,𝑐_𝑠^𝑘=𝐴_𝑡2𝑝𝑉^⊤_𝑘,where 𝑐^𝑘_𝑠 denotes the subject embedding at scale k. Our SSR-Encoder is now able to capture the multi-scale subject representation 𝑐_𝑠={𝑐_𝑠^𝑘}_k=0^K, which is subsequently used for subject-driven image generation via the subject-conditioned generation process.§.§ Subject Conditioned Generation In our approach, 𝑐_𝑠 is strategically projected into the cross-attention layers of the U-Net. This is achieved through newly added parallel subject cross-attention layers, each corresponding to a text cross-attention layer in the original U-Net. Rather than disturbing the text embedding 𝑐_𝑡, these new layers independently aggregate subject embeddings 𝑐_𝑠. Inspired by works like <cit.>, we employ trainable copies of the text cross-attention layers to preserve the efficacy of the original model. The key and value projection layers are then adapted to train specifically for a subject-conditioned generation.To fully exploit both the global and local subject representations, we concatenate all 𝑐_𝑠^𝑘 at the token dimension before projection, i.e. 𝑐_𝑠^'=concat( 𝑐_𝑠^𝑘, dim=0), where 𝑐_𝑠^𝑘∈ℝ^N_q× D_i represents the subject representation at scale k.The output value 𝑂 of the attention layer is formulated as follows:𝑂 =CrossAttention(𝐐, 𝐊, 𝐕, 𝑐_𝑡, 𝑥_𝑡)_text condition+λCrossAttention(𝐐, 𝐊_𝐒, 𝐕_𝐒, 𝑐_𝑠^', 𝑥_𝑡)_subject condition,where 𝑐_𝑡 represents the text embedding and 𝑥_𝑡 represents the latent. 𝐐, 𝐊, 𝐕 represent the query, key, and value projection layers in the original text branch, respectively, while 𝐊_𝐒, 𝐕_𝐒 represent the trainable copies of the key and value projection layers for the concatenated subject embedding 𝑐_𝑠. We set λ to 1 in all experimental settings if not otherwise specified.Through our subject-conditioned generation, text-to-image diffusion models can generate target subjects conditioned on both text embeddings and subject embeddings.§.§ Model Training and Inference During the training phase, our model processes paired images and texts from multimodal datasets. The trainable components include the token-to-patch aligner and the subject cross-attention layers.In contrast to CLIP, which aligns global image features with global text features, our token-to-patch aligner demands a more granular token-to-patch alignment. To achieve this, we introduce an Embedding Consistency Regularization Loss L_reg. This loss is designed to enhance the similarity between the subject embeddings 𝑐_𝑠 and the corresponding query text embedding 𝑧_𝑞, employing a cosine similarity function as demonstrated in Eq. (<ref>): 𝑐_𝑠 = Mean( 𝑐_𝑠^0, 𝑐_𝑠^1, ..., 𝑐_𝑠^𝐾),ℒ_reg = Cos(𝑐_𝑠, 𝑧_𝑞) = 1 - 𝑐_𝑠·𝑧_𝑞/|𝑐_𝑠||𝑧_𝑞|,where 𝑐_𝑠 is the mean of the subject embeddings and 𝑧_𝑞 represents the query text embedding.As illustrated in Fig. <ref>, our T2P Aligner, trained on a large corpus of image-text pairs, can effectively align query text with corresponding image regions. This capability is a key aspect of selective subject-driven generation.Similar to the original Stable Diffusion model, our training objective also includes the same ℒ_LDM loss, as outlined in Eq. (<ref>):ℒ_LDM(θ)=𝔼_𝑥_0, t, ϵ[ϵ-ϵ _θ(𝑥_𝑡, t, 𝑐_𝑡, 𝑐_𝑠)_2^2],where 𝐱_𝐭 is the noisy latent at time step t, ϵ is the ground-truth latent noise, and ϵ_θ is the noise prediction model with parameters θ.Thus, our total loss function is formulated as:ℒ_total=ℒ_LDM + τℒ_reg,where τ is set as a constant with a value of 0.01. As depicted in Fig. <ref> (in the last column), the inclusion of ℒ_reg significantly enhances the text-image alignment capabilities of the SSR-Encoder.
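To make the combined objective concrete, the following is a minimal PyTorch sketch of ℒ_total = ℒ_LDM + τℒ_reg; the function names, shapes, and batching here are illustrative assumptions rather than the actual implementation.

```python
import torch
import torch.nn.functional as F

def total_loss(eps_pred, eps, subj_embeds, z_q, tau=0.01):
    # L_total = L_LDM + tau * L_reg (Eqs. above); names are hypothetical.
    l_ldm = F.mse_loss(eps_pred, eps)                    # noise-regression term
    c_bar = torch.stack(subj_embeds, dim=0).mean(dim=0)  # mean over scales k
    l_reg = 1.0 - F.cosine_similarity(c_bar.flatten(), z_q.flatten(), dim=0)
    return l_ldm + tau * l_reg
```

Here `subj_embeds` stands for the list of per-scale subject embeddings {𝑐_𝑠^𝑘} and `z_q` for the query text embedding; flattening both before the cosine similarity is one simple way to realize the scalar regularizer of Eq. (<ref>).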
The improvement is evident in the generated images, which consistently align with both the subject prompt and the details of the reference image.During inference, our method has the ability to decompose different subjects from a single image or multiple images. By extracting separate subject embeddings for each image and concatenating them together, our SSR-Encoder can seamlessly blend elements from multiple scenes. This flexibility allows for the creation of composite images with high fidelity and creative versatility. § EXPERIMENT §.§ Experimental Setup Training data. Our model utilizes the Laion 5B dataset, selecting images with aesthetic scores above 6.0. The text prompts are re-captioned using BLIP2. The dataset comprises 10 million high-quality image-text pairs, with 5,000 images reserved for testing and the remainder for training.Implementation details. We employed Stable Diffusion V1-5 as the pre-trained diffusion model, complemented by the pre-trained CLIP text encoder. For training, images are resized to ensure the shortest side is 512 pixels, followed by a center crop to achieve a 512×512 resolution, and sent to Stable Diffusion. The same image is resized to 224×224 and sent to the SSR-Encoder. The model underwent 1,000,000 iterations of training on 8 H800 GPUs, with a batch size of 16 per GPU and a learning rate of 1e-4. Inference was performed using DDIM as the sampler, with 30 steps and a guidance scale set to 7.5.§.§ Evaluation Metrics To evaluate our model, we employ several metrics and datasets: * Multi-subject bench: We created a benchmark with 100 images, each containing 2-3 subjects.* DreamBench datasets <cit.>: This dataset includes 30 subjects, each represented by 4-7 images. For a comprehensive comparison with state-of-the-art (SOTA) methods, we employed the following metrics: DINO Scores <cit.> and DINO-M Scores to assess subject alignment, CLIP-T <cit.> for evaluating image-text alignment, CLIP Exclusive Score (CLIP-ES) to measure the exclusivity of subject representation, and the Aesthetic Score <cit.> to gauge the overall quality of the generated images.Notably, CLIP-ES is calculated by generating an image I using prompts for subject A from a reference image and evaluating the CLIP-T score with a different subject B and I. A lower CLIP-ES score indicates higher exclusivity. The DINO-M score, specifically designed for multiple subjects, evaluates identity similarity between masked versions of input and generated images, as detailed in <cit.>. Both CLIP-ES and DINO-M scores are evaluated on the Multi-Subject Bench.§.§ Comparison Methods For a comprehensive evaluation of our method, we benchmarked it against a range of state-of-the-art (SOTA) techniques. The methods we compared are categorized based on their approach to fine-tuning. In the fine-tuning-based category, we include Textual Inversion <cit.>, Dreambooth <cit.>, and Break-a-Scene <cit.>. For fine-tuning-free methods, our comparison encompassed Reference Only <cit.>, Elite <cit.>, IP-adapter <cit.>, and BLIPDiffusion <cit.>. This selection of methods provides a diverse range of approaches for a thorough comparative analysis with our SSR-Encoder. §.§ Experiment Results Quantitative comparison.Table <ref> presents our quantitative evaluation across two benchmarks: the Multi-Subject Bench and DreamBench. Overall, the SSR-Encoder clearly outperforms previous SOTA finetuning-free methods on all of the metrics, including subject alignment, image-text alignment, subject exclusivity, and overall quality.
Remarkably, it also outperforms fine-tuning-based methods in image quality and image-text alignment within both benchmarks. Particularly in the Multi-Subject Benchmark, the SSR-Encoder demonstrates outstanding performance in subject exclusivity, markedly outperforming competing methods. This highlights the efficacy of its selective representation capability and editability. While Dreambooth excels in subject alignment within the DreamBench dataset, the SSR-Encoder and Break-A-Scene show comparable performance on the Multi-Subject Bench. This suggests that although Dreambooth is highly effective in capturing detailed subject information, SSR-Encoder achieves a balanced and competitive performance in subject representation. Qualitative comparison.Fig. <ref> displays the high-fidelity outcomes produced by the SSR-Encoder using diverse query inputs, affirming its robustness and zero-shot generative capabilities. The SSR-Encoder demonstrates proficiency in recognizing and focusing on common concepts, ensuring an accurate representation of the selected image subjects. Its seamless integration with other customized models and control modules further solidifies its significant role in the stable diffusion ecosystem.In qualitative comparisons, as depicted in Fig. <ref>, Textual Inversion and Reference Only encounter difficulties in maintaining subject identity. Dreambooth, IP-adapter, and BLIP-Diffusion, although advanced, exhibit limitations in effectively disentangling intertwined subjects. Break-A-Scene achieves commendable subject preservation but at the cost of extensive fine-tuning. ELITE, with its focus on local aspects through masks, also faces challenges in consistent identity preservation.In contrast, our SSR-Encoder method stands out for its fast generation of selected subjects while adeptly preserving their identities. This capability highlights the method's superior performance in generating precise and high-quality subject-driven images, thereby addressing key challenges faced by other current methods. Ablation study. Our ablation study begins with visualizing the attention maps generated by our Token-to-Patch Aligner, as shown in Fig. <ref>. These maps demonstrate how different text tokens align with corresponding patches in the reference image, evidencing the Aligner's effectiveness.To evaluate the significance of various components, we conducted experiments by systematically removing them and observing the outcomes. Initially, we removed the subject condition, relying solely on the text condition for image generation, to determine if the subject details could be implicitly recalled by the base model. Subsequently, we trained a model without the embedding consistency regularization loss (L_reg) to assess its criticality. We also substituted our multi-scale visual embedding with a conventional last-layer visual embedding. The results of these experiments are depicted in Fig. <ref>. Our observations reveal that without subject conditioning, the generated subjects failed to correspond with the reference image. Omitting the multi-scale image feature resulted in a loss of detailed information, as evidenced by a significant drop in the DINO-M score. Discarding the embedding consistency regularization loss led to challenges in generating specific subjects from coexisting subjects, adversely affecting the CLIP-ES score. 
In contrast, the full implementation of our method demonstrated enhanced expressiveness and precision.Quantitative comparisons, as shown in Table <ref>, also indicate that our complete method achieves the best results across subject exclusivity and subject alignment. It slightly trails the original Stable Diffusion (SD) model only in text-image alignment.Substituting the multi-scale visual embedding significantly impacts image consistency, while excluding the embedding consistency regularization loss hampers text-image consistency.§ CONCLUSION In this paper, we introduced the SSR-Encoder, a groundbreaking finetuning-free approach for selective subject-driven image generation. This method marks a significant advancement in the field, offering capabilities previously unattainable in selective subject representation. At its core, the SSR-Encoder consists of two pivotal components: the token-to-patch aligner and the detail-preserving subject encoder. The token-to-patch aligner effectively aligns query input tokens with corresponding patches in the reference image, while the subject encoder is adept at extracting multi-scale subject embeddings, capturing fine details across different scales. Additionally, the incorporation of a newly proposed embedding consistency regularization loss further enhances the overall performance of the system. Our extensive experiments validate the SSR-Encoder's robustness and versatility across a diverse array of scenarios. The results clearly demonstrate the encoder's efficacy in generating high-quality, subject-specific images, underscoring its potential as a valuable tool in the open-source ecosystem. § SUPPLEMENTARY § PRELIMINARIES §.§ Preliminary for Diffusion Models Diffusion Model (DM) <cit.> belongs to the category of generative models that denoise from a Gaussian prior 𝐱_𝐓 to a target data distribution 𝐱_0 by means of an iterative denoising procedure. The common loss used in DM is:L_simple(θ) := 𝔼_𝐱_0, t, ϵ[ϵ-ϵ_θ(𝐱_𝐭, t)_2^2],where 𝐱_𝐭 is a noisy image constructed by adding noise ϵ∈𝒩(0,1) to the natural image 𝐱_0 and the network ϵ_θ(·) is trained to predict the added noise. At inference time, data samples can be generated from Gaussian noise ϵ∈𝒩(0,1) using the predicted noise ϵ_θ(𝐱_𝐭, t) at each timestep t with samplers like DDPM <cit.> or DDIM <cit.>.Latent Diffusion Model (LDM) <cit.> is proposed to model image representations in an autoencoder's latent space. LDM significantly speeds up the sampling process and facilitates text-to-image generation by incorporating additional text conditions. The LDM loss is:L_LDM(θ) := 𝔼_𝐱_0, t, ϵ[ϵ-ϵ_θ(𝐱_𝐭, t, τ_θ(𝐜))_2^2],where 𝐱_0 represents image latents and τ_θ(·) refers to the BERT text encoder <cit.> used to encode the text description 𝐜_𝐭.Stable Diffusion (SD) is a widely adopted text-to-image diffusion model based on LDM. Compared to LDM, SD is trained on the large LAION <cit.> dataset and replaces BERT with the pre-trained CLIP <cit.> text encoder. §.§ Preliminary for CLIP CLIP <cit.> consists of two integral components: an image encoder, represented as F(x), and a text encoder, represented as G(t). The image encoder, F(x), transforms an image x with dimensions ℝ^3 × H × W (height H and width W) into a d-dimensional image feature f_x with dimensions ℝ^N × d, where N is the number of divided patches. On the other hand, the text encoder, G(t), creates a d-dimensional text representation g_t with dimensions ℝ^M × d from natural language text t, where M is the number of text prompts.
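As a schematic of this two-tower interface (not OpenCLIP's actual API; the encoder handles and shapes below are assumptions), recognition reduces to cosine similarities between the two feature sets:

```python
import torch

def clip_scores(image_encoder, text_encoder, x, t_tokens):
    # f_x = F(x): (N, d) image features; g_t = G(t): (M, d) text features.
    f_x = image_encoder(x)
    g_t = text_encoder(t_tokens)
    f_x = f_x / f_x.norm(dim=-1, keepdim=True)   # L2-normalize rows
    g_t = g_t / g_t.norm(dim=-1, keepdim=True)
    return f_x @ g_t.T                           # (N, M) cosine similarities
```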
Both encoders are concurrently trained using a contrastive loss function that enhances the cosine similarity of matched pairs while reducing that of unmatched pairs. After training, CLIP can be applied directly for zero-shot image recognition without the need for fine-tuning the entire model.§ DESIGNING CHOICE OF IMAGE ENCODER In this section, we conduct a preliminary reconstruction experiment to demonstrate that vanilla image features fail to capture fine-grained representations of the target subject and to verify the effectiveness of our method. We first introduce our experimental setup and evaluation metrics in Sec. <ref>. Subsequently, we explain the implementation details of each setting in Sec. <ref>. Finally, we conduct qualitative and quantitative experiments in Sec. <ref> to prove the superiority of our proposed methods compared to previous works. §.§ Experimental Setup In our image reconstruction experiment, we investigate four types of image features. The details are as shown in Fig. <ref>: * Setting A: CLIP Image Features. In this setting, we employ the vanilla CLIP image encoder to encode the input image and utilize the features from the final layer as the primary representation for subsequent reconstruction.* Setting B: DINOv2 Image Features. Analogous to setting A, we replace the CLIP image encoder with the DINOv2 encoder to extract the features.* Setting C: Fine-tuned CLIP Image Features. With the goal of recovering more fine-grained details while preserving text-image alignment, we fine-tune the last-layer parameters of the CLIP image encoder using a CLIP regularization loss.* Setting D: Multi-scale CLIP Image Features. Instead of fine-tuning, we resort to using features from different scales of the CLIP backbone as the image representations. To verify the effectiveness of our methods, we employ the following metrics: Perceptual Similarity (PS) <cit.> and Peak Signal-to-Noise Ratio (PSNR) to assess the quality of reconstruction, and CLIP-T <cit.> and Zero-Shot ImageNet Accuracy (ZS) <cit.> to assess the preservation of text-image alignment in image encoder variants.As for the data used in our preliminary experiments, we utilize a subset of LAION-5B <cit.>. This dataset comprises approximately 150,000 text-image pairs for training and a further 10,000 text-image pairs designated for testing.§.§ Implementation Details We use OpenCLIP ViT-L/14 <cit.> and DINOv2 ViT-L/14 <cit.> as the image encoders, and all images are resized to 224×224 for training.The model underwent 100,000 training iterations on 4 V100 GPUs, using a batch size of 32 per GPU. We adopt the Adam optimizer <cit.> with a learning rate of 3e-4 and implement the one-cycle learning rate scheduler.To better preserve the pre-trained weights, we set the learning rate of the image encoder to 1/10 that of the other parameters if fine-tuning is required. We adopt the same architecture as the VAE decoder in LDM <cit.> with an extra upsampling block and employ nearest interpolation to obtain the final reconstruction results. We adopt an L_2 reconstruction loss in all our settings and additionally employ L_clip when fine-tuning the CLIP encoder.§.§ Experiment Results Qualitative results.To demonstrate the effectiveness of our method, we present reconstruction results in Fig. <ref>. It is observed that vanilla CLIP image features and DINOv2 features only result in rather blurry outcomes. By contrast, both fine-tuned CLIP image features and multi-scale CLIP image features manage to retain more details.
Specifically, multi-scale CLIP image features are able to generate sharp edges without obvious degradations. Consequently, we infer that multi-scale features are more competent at preserving the fine-grained details we require. Quantitative results.The quantitative results are shown in Table <ref>. In terms of reconstruction quality, it is noteworthy that both the fine-tuned CLIP image features and the multi-scale CLIP image features are adept at producing superior outcomes, exhibiting lower perceptual similarity scores and higher PSNR. This indicates that these features are more representative than either vanilla CLIP image features or DINOv2 features. However, despite the assistance from the CLIP regularization loss, fine-tuned CLIP image features still suffer significant degradation in text-image alignment, which fails to meet our requirements. Consequently, we opt for multi-scale features as our primary method for extracting subject representation. § DETAILS OF COMPARISON EXPERIMENTS §.§ Details of Compared Methods * Finetune-based Methods: * Textual Inversion <cit.>: A method to generate specific subjects by describing them using new “words" in the embedding space of pre-trained text-to-image models.* Dreambooth <cit.>: A method of personalized image generation that fine-tunes the parameters in the diffusion U-Net structure.* Break-A-Scene <cit.>: Aims to extract a distinct text token for each subject in a single image, enabling fine-grained control over the generated scenes.* Finetune-free Methods: * Reference only <cit.>: Guides the diffusion directly using images as references without training, through simple feature injection.* ELITE <cit.>: An encoder-based approach that encodes the visual concept into the textual embeddings for subject-driven image generation.* IP-adapter <cit.>: Focuses on injecting image information without fine-tuning the base model.* BLIPDiffusion <cit.>: Combines BLIP's language-image pretraining with diffusion models.These methods were chosen for their relevance and advancements in the field, providing a robust frame of reference for evaluating the performance and innovations of our SSR-Encoder.§.§ Details of Implementation In order to achieve a fair comparison, all the methods are implemented using the official open-source code based on SD v1-5 and the officially recommended parameters. For the Multi-subject bench, all the methods use a single image as input and utilize different subjects to guide the generation. We provide 6 different text prompts for each subject on each image and generate 6 images for each text prompt. For DreamBench, we follow <cit.> and generate 6 images for each text prompt provided by DreamBench.§ USER STUDY We conducted a user study to compare our method with DB, TI, Break-A-Scene, ELITE, and IP-adapter perceptually. For each evaluation, each user sees one input image with multiple concepts, two different prompts for different concepts, and 5 images generated by each prompt and each method. 60 evaluators were asked to rate each generated image from 1 (worst) to 5 (best) concerning its selectivity, text-image alignment, subject alignment, and generative quality. The results shown in Table <ref> indicate that our method outperforms the comparison methods in generative quality and better balances subject consistency and text-image alignment.§ HUMAN IMAGE GENERATION Despite the SSR-Encoder not being trained in domain-specific settings (such as human faces), it is already capable of capturing the intricate details of the subjects.
For instance, similar to the method outlined in <cit.>, we utilize face images from the OpenImages dataset <cit.> as reference images for generating human images. Fig. <ref> showcases samples of the face images we generated. To better illustrate our results, we also employ images of two celebrities as references.§ EXAMPLES OF EVALUATION SAMPLES In this section, we present more evaluation samples of our method on two different test datasets, the Multi-Subject Bench and DreamBench, in Fig. <ref>, Fig. <ref>, and Fig. <ref>. Moreover, we present more qualitative comparison results in Fig. <ref>. As illustrated in the figure, our approach is more adept at focusing on the representation of distinct subjects within a single image, utilizing a query to select the necessary representation. In contrast to other methods, our method does not result in ambiguous subject extraction, a common issue in finetune-based methods. For instance, in the Dreambooth row of Fig. <ref>, two subjects frequently appear concurrently, indicating a low level of selectivity. When considering selectivity, generative quality, and text-image alignment, our SSR-Encoder surpasses all methods and achieves the level of finetune-based methods in terms of subject alignment. § DETAILS OF OUR TRAINING DATA AND THE MULTI-SUBJECT BENCH * Details of training data.Our model utilizes the Laion 5B dataset <cit.>, selecting images with aesthetic scores above 6.0. The text prompts are re-captioned using BLIP2 <cit.>. The dataset comprises 10 million high-quality image-text pairs, with 5,000 images reserved for testing and the remainder for training. Clearly, the distribution of training data has a significant impact on our model. The more a particular type of subject appears in the training data captions, the better our performance on that type of subject. Therefore, we further analyze the word frequency in the training data captions and report the most frequent subject descriptors in Table <ref>.* Details of the multi-subject bench.The Multi-subject Bench comprises 100 images from our test data. More specifically, the data is curated based on the caption associated with each image from our test set. An image progresses to the next stage if its caption contains at least two subject descriptors. Subsequently, we verify the congruence between the caption and the image. If the image aligns with the caption and adheres to human aesthetic standards, it is shortlisted as a candidate image. Ultimately, we meticulously selected 100 images from these candidates to constitute the Multi-subject Bench. § COMPATIBILITY WITH CONTROLNET Our SSR-Encoder can be efficiently integrated into controllability modules. As demonstrated in Fig. <ref>, we present additional results of combining our SSR-Encoder with ControlNet <cit.>. Our approach can seamlessly merge with controllability modules, thereby generating controllable images that preserve consistent character identities in alignment with reference images. § COMPATIBILITY WITH ANIMATEDIFF Our SSR-Encoder is not only versatile enough to adapt to various custom models and controllability modules, but it can also be effectively applied to video generation, integrating seamlessly with video generation models. In Fig. <ref>, we demonstrate the impact of combining our SSR-Encoder with Animatediff <cit.>.
Despite not being trained on video data, our method can seamlessly combine with Animatediff to produce videos that maintain consistent character identities with reference images.§ BROADER IMPACT Our method of subject-driven image generation holds significant potential for advancing the field of text-to-image generation, particularly in creating personalized images. This technology can be applied across various domains such as personalized advertising, artistic creation, and game design, and can enhance research at the intersection of computer vision and natural language processing. However, while the technology has numerous positive applications, it also raises ethical and legal considerations. For instance, generating personalized images using others' images without appropriate permission could infringe upon their privacy and intellectual property rights. Therefore, adherence to relevant ethical and legal guidelines is crucial. Furthermore, our model may generate biased or inappropriate content if misused. We strongly advise against using our model in user-facing applications without a thorough inspection of its output and recommend proper content moderation and regulation to prevent undesirable consequences.§ LIMITATION Due to the uneven distribution of the filtered training data, we found that the fidelity is slightly worse for some concepts that are uncommon in our training data. This can be addressed by increasing the training data. We plan to address these limitations and extend our approach to 3D generation in our future work. | http://arxiv.org/abs/2312.16272v1 | {
"authors": [
"Yuxuan Zhang",
"Jiaming Liu",
"Yiren Song",
"Rui Wang",
"Hao Tang",
"Jinpeng Yu",
"Huaxia Li",
"Xu Tang",
"Yao Hu",
"Han Pan",
"Zhongliang Jing"
],
"categories": [
"cs.CV"
],
"primary_category": "cs.CV",
"published": "20231226143911",
"title": "SSR-Encoder: Encoding Selective Subject Representation for Subject-Driven Generation"
} |
Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA [Current affiliation: Centro Euro-Mediterraneo sui Cambiamenti Climatici, 40127 Bologna, Italy] Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA Deutsches Elektronen-Synchrotron DESY, Notkestr. 85, 22607 Hamburg, Germany [Corresponding author: [email protected]] Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USAThis paper introduces a novel formulation of the Particle-In-Cell (PIC) method for the modeling of relativistic plasmas, which leverages the ability of the Pseudo-Spectral Analytical Time-Domain solver (PSATD) to handle arbitrary time dependencies of the charge and current densities during one PIC cycle.The new formulation is applied to a modified set of Maxwell's equations that was proposed earlier in the context of divergence cleaning, and to recently proposed extensions of the PSATD-PIC algorithm. Detailed analysis and testing revealed that, under some conditions, the new formulation can expand the range of numerical parameters under which PIC simulations are stable and accurate when modeling relativistic plasmas such as, e.g., plasma-based particle accelerators. PIC-JR_hom: a pseudo-spectral Particle-In-Cell formulation with arbitrary charge and current densities time dependencies for the modeling of relativistic plasmas Jean-Luc Vay January 14, 2024 ======================================================================================================================================================================§ INTRODUCTION Simulations of relativistic plasmas often rely on the electromagnetic particle-in-cell (PIC) method <cit.>, with variations of the method that have been proposed and are chosen based on the application. For the modeling of plasma-based accelerators <cit.>, a variation that has gained in popularity uses the “infinite-order” (in space and time) Pseudo-Spectral Analytical Time-Domain (PSATD) method <cit.>, instead of the (almost universally adopted) second-order (in space and time) Finite-Difference Time-Domain (FDTD) “Yee” method <cit.>, to solve Maxwell's equations at discrete points in space and time. In contrast to the Yee solver, the PSATD solver offers no numerical dispersion and no Courant condition on the field solve. Extensions of the PSATD PIC method include the use of finite-order spatial stencils <cit.>, alternating nodal-staggered representations of the field quantities during one PIC loop <cit.>, time-averaging of the fields gathered onto the particles <cit.>, and integration of the equations in a Galilean frame moving at a given velocity <cit.>. The combination of the Galilean PSATD PIC (also labeled as Galilean PIC for convenience) method with the other extensions has led to stable modeling of plasma accelerators, free of the numerical Cherenkov instability (NCI) <cit.>, when using the Lorentz boosted frame method to speed up simulations <cit.>. In some cases, however, the method, which relies on the user setting a predefined Galilean velocity, can become inaccurate when it cannot be assumed that the local plasma velocity is close to that predefined velocity. As a possible remedy, this paper introduces and starts exploring a novel formulation of the PIC algorithm where the standard assumption that the current density that is produced by the particles is constant over a time step is relaxed. The remainder of the paper is organized as follows.
The formulation of the novel algorithm is derived first, followed by the presentation of its finite-order stencil, alternating nodal-staggered and time-averaged extensions. The connection between the new algorithm and the Galilean PIC formulation is discussed next. The effectiveness of the new algorithm at mitigating the NCI is then explored theoretically and numerically on a simple uniform plasma case. Finally, the new scheme is tested in simulations of laser-plasma accelerators in a Lorentz boosted frame.§ NOVEL PIC-JRHOM ALGORITHM §.§ Presentation of the algorithm The following modified system of Maxwell's equations is considered ∂E/∂ t= c^2∇×B-J/ϵ_0+c^2∇ F , ∂B/∂ t= -∇×E , ∂ F/∂ t= ∇·E-ρ/ϵ_0 .In addition to the usual Maxwell-Faraday and Ampère-Maxwell equations, the system contains an extra equation for the scalar field F, which propagates deviations to Gauss' law. (Note that, in the case where Gauss' law is verified in the PIC simulation, Eq. (<ref>) leads to F=0, and Eqs. (<ref>),(<ref>) reduce to the standard Maxwell's equations.) These additional terms were introduced in <cit.> from the potential formulation in the Lorentz gauge and used as a propagative divergence cleaning procedure, as an alternative to the Langdon-Marder <cit.> or Marder <cit.> diffusive ones. This type of divergence cleaning was also proposed independently and analyzed more formally in <cit.>. A connection to the formulation of Eqs. (<ref>) in potential form, derived more formally than in <cit.>, is instructive and given in Appendix <ref>.While the abovementioned earlier work <cit.> considered this formulation in the context of the standard PIC method using FDTD discretization of Eqs. (<ref>), this article focuses on the PSATD <cit.> discretization of Eqs. (<ref>), where the equations are integrated analytically over one timestep, in Fourier space. The expression of (<ref>) in Fourier space reads ∂Ẽ/∂ t= ic^2𝐤×B̃-J̃/ϵ_0+ic^2𝐤F̃ , ∂B̃/∂ t= -i𝐤×Ẽ , ∂F̃/∂ t= i𝐤·Ẽ-ρ̃/ϵ_0 ,where f̃ denotes the Fourier transform of a function f. The analytical integration of Eqs. (<ref>) in time requires an assumption on the time dependency of the current and charge densities J̃ and ρ̃ over the integration interval, i.e., over a timestep that goes from t=nΔ t to t=(n+1)Δ t. In the standard PSATD algorithm <cit.>, J̃ is assumed to be constant in time, and ρ̃ is assumed to be linear in time, within a given timestep Δ t.This paper considers more general time dependencies for J̃ and ρ̃ within one timestep, which is divided into m subintervals of equal size δ t = Δ t/m. During these subintervals, J̃ and ρ̃ are considered to be either piecewise constant, piecewise linear, or piecewise quadratic in time. This is illustrated in Fig. <ref>. In the rest of this paper, the notation “PIC-JR_hom” is used, where J and R_ho (J,R_ho ∈{C (constant), L (linear), Q (quadratic)}) indicate the (piecewise) time dependency of the current density J and charge density ρ, respectively, and m is the number of subintervals. For example, “PIC-LL2” refers to the novel PIC algorithm with linear time dependency of both J and ρ and 2 subintervals. Note that, in this notation, “PIC-CL1” refers to the standard PSATD PIC algorithm <cit.>, where J is constant and ρ is linear in time over one time step.More specifically: * When ρ(t) is assumed to be piecewise constant: macroparticles deposit their charge density in the middle of each time subinterval, i.e., at t_n+(ℓ+1/2)/m≡ nΔ t + (ℓ+1/2)δ t with ℓ∈[0,m-1], and ρ is then assumed to be constant in each subinterval:ρ(t) = ρ^n+(ℓ+1/2)/m, t∈[ nΔ t + ℓδ t, nΔ t + (ℓ+1)δ t].
* When ρ(t) is assumed to be piecewise linear: macroparticles deposit their charge density at the edges of each time subinterval, i.e., at t_n+ℓ/m≡ nΔ t + ℓδ t, with ℓ∈[0,m], and ρ is then assumed to be linear in each subinterval:ρ(t) = [(ρ^n+(ℓ+1)/m-ρ^n+ℓ/m)/δ t](t-t_n+(ℓ+1/2)/m) + (ρ^n+(ℓ+1)/m+ρ^n+ℓ/m)/2, t∈[ nΔ t + ℓδ t, nΔ t + (ℓ+1)δ t]. * When ρ(t) is assumed to be piecewise quadratic: macroparticles deposit their charge density at the middle and edges of each time subinterval, i.e., at t_n+(ℓ+1/2)/m with ℓ∈[0,m-1] and t_n+ℓ/m with ℓ∈[0,m]. ρ is then assumed to be quadratic in each subinterval:ρ(t) = [2(ρ^n+(ℓ+1)/m-2ρ^n+(ℓ+1/2)/m+ρ^n+ℓ/m)/δ t^2](t-t_n+(ℓ+1/2)/m)^2 + [(ρ^n+(ℓ+1)/m-ρ^n+ℓ/m)/δ t](t-t_n+(ℓ+1/2)/m) + ρ^n+(ℓ+1/2)/m, t∈[ nΔ t + ℓδ t, nΔ t + (ℓ+1)δ t],with similar definitions for J, when J(t) is assumed to be piecewise constant, piecewise linear, or piecewise quadratic, respectively.Overall, the time dependency of J and ρ can thus be expressed, for t∈[ nΔ t + ℓδ t, nΔ t + (ℓ+1)δ t], with ℓ∈ [0, m-1], as: J(t) = (2a_J^τ/δ t^2)(t-t_n+(ℓ+1/2)/m)^2+(b_J^τ/δ t)(t-t_n+(ℓ+1/2)/m)+c_J^τ ,ρ(t) = (2a_ρ^τ/δ t^2)(t-t_n+(ℓ+1/2)/m)^2+(b_ρ^τ/δ t)(t-t_n+(ℓ+1/2)/m)+c_ρ^τ ,where the coefficients of the polynomials are given in Table <ref>.It is important to note that the particles' momenta are not updated during one time step, i.e., the proposed scheme does not involve subcycling of the macroparticles' motion. As in standard PSATD PIC, macroparticles move in a straight line from their known position at t_n=nΔ t to time t, using their known momentum at t_n+1/2:x(t) = x^n + [p^n+1/2/(m√(1+ (p^n+1/2/mc)^2))](t-t_n)where x^n and p^n+1/2 follow the standard leap-frog time stepping that is commonly used in PIC simulations. Thus, here, even though the charge and current density may be deposited several times per timestep Δ t, the macroparticles' momentum p is only updated once per timestep, and therefore the fields E and B are gathered onto macroparticles to update p only once per timestep also.Using the piecewise definitions of ρ and J given in Eqs. (<ref>), Eqs. (<ref>) can be integrated analytically over one timestep Δ t, i.e., from t=nΔ t to t=(n+1)Δ t. In practice, this is done by sequentially integrating these equations over each subinterval ℓ∈ [0,m-1]: Ẽ^n+(ℓ+1)/m= C Ẽ^n+ℓ/m+ic^2(S/ck)𝐤×B̃^n+ℓ/m+ic^2(S/ck)𝐤F̃^n+ℓ/m+[1/(ϵ_0 ck)](Y_3a_J+Y_2b_J-Sc_J)+[ic^2𝐤/(ϵ_0 c^2k^2)](Y_1a_ρ-Y_5b_ρ-Y_4c_ρ) , B̃^n+(ℓ+1)/m= C B̃^n+ℓ/m-i(S/ck)𝐤×Ẽ^n+ℓ/m-[i𝐤/(ϵ_0 c^2k^2)]×(Y_1a_J-Y_5b_J-Y_4c_J), F̃^n+(ℓ+1)/m= C F̃^n+ℓ/m+i(S/ck)𝐤·Ẽ^n+ℓ/m+[i𝐤/(ϵ_0 c^2k^2)]·(Y_1a_J-Y_5b_J-Y_4c_J)+[1/(ϵ_0 ck)](Y_3a_ρ+Y_2b_ρ-Sc_ρ)where C= cos(ckδ t), S = sin(ckδ t), Y_1 = [(1-C)(8-c^2k^2δ t^2)-4Sckδ t]/(2 c^2 k^2 δ t^2), Y_2 = [2(C-1)+ S ckδ t]/(2 ckδ t), Y_3 = [S(8- c^2k^2δ t^2 ) - 4ckδ t(1+C)]/(2c^2 k^2 δ t^2), Y_4= (1-C), Y_5 = [(1+C) ckδ t - 2S]/(2ck δ t).The steps of the novel PIC-JR_hom cycle with sub-timestepping are summarized in the diagram shown in Figure <ref>. §.§ Extensions As shown in Refs. <cit.>, the PSATD PIC algorithm can be extended to (a) arbitrary-order spatial stencils, (b) a scheme that alternates between nodal and staggered representations of the field components on the simulation grid, and (c) a scheme that averages the fields to be gathered over one timestep. Such extensions are presented in the next sections for the PSATD PIC-JR_hom algorithm.
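Before turning to these extensions, the core sub-stepped field update of Eqs. (<ref>) can be sketched compactly. The following minimal NumPy example advances one subinterval for a single Fourier mode; the variable names and the single-mode restriction are illustrative assumptions (a production implementation such as WarpX operates on full k-space arrays).

```python
import numpy as np

def advance_one_subinterval(E, B, F, a_J, b_J, c_J, a_rho, b_rho, c_rho,
                            kvec, dt_sub, c=299792458.0, eps0=8.8541878128e-12):
    # One PIC-JR_hom subinterval update (Eqs. above) for a single Fourier
    # mode kvec; E, B, a_J, b_J, c_J are complex 3-vectors, while F and the
    # rho coefficients are complex scalars.
    k = np.linalg.norm(kvec)
    w = c * k * dt_sub
    C, S = np.cos(w), np.sin(w)
    Y1 = ((1 - C) * (8 - w**2) - 4 * S * w) / (2 * w**2)
    Y2 = (2 * (C - 1) + S * w) / (2 * w)
    Y3 = (S * (8 - w**2) - 4 * w * (1 + C)) / (2 * w**2)
    Y4 = 1 - C
    Y5 = ((1 + C) * w - 2 * S) / (2 * w)
    JY = Y1 * a_J - Y5 * b_J - Y4 * c_J
    E_new = (C * E
             + 1j * c**2 * (S / (c * k)) * np.cross(kvec, B)
             + 1j * c**2 * (S / (c * k)) * kvec * F
             + (Y3 * a_J + Y2 * b_J - S * c_J) / (eps0 * c * k)
             + 1j * kvec * (Y1 * a_rho - Y5 * b_rho - Y4 * c_rho) / (eps0 * k**2))
    B_new = (C * B
             - 1j * (S / (c * k)) * np.cross(kvec, E)
             - 1j * np.cross(kvec, JY) / (eps0 * c**2 * k**2))
    F_new = (C * F
             + 1j * (S / (c * k)) * np.dot(kvec, E)
             + 1j * np.dot(kvec, JY) / (eps0 * c**2 * k**2)
             + (Y3 * a_rho + Y2 * b_rho - S * c_rho) / (eps0 * c * k))
    return E_new, B_new, F_new
```

Looping this function over the m subintervals, with the Table <ref> coefficients recomputed from the deposited ρ and J at each substep, reproduces one full PIC-JR_hom field advance.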
§.§.§ Extension to finite-order stencils When using domain decomposition to run PSATD PIC methods on parallel computers, it is advantageous to alter the wave vector in the Fourier representation of the equations so as to emulate a finite-difference approximation of the spatial derivatives at a finite order p, since this enhances the locality of the field solvers and thus reduces the required number of guard cells around each subdomain <cit.>.The modified [k^p_u] at order p along the direction u=x,y,z are then given by [k^p_u]_nodal = ∑_j=1^p/2[α_j^p]_nodal sin(k_u j Δ u)/(j Δ u), u=x,y,z, [k^p_u]_staggered = ∑_j=1^p/2[α_j^p]_staggered sin(k_u (j-1/2) Δ u)/[(j-1/2) Δ u], u=x,y,z,for a nodal and staggered representation, respectively, with the following Fornberg coefficients <cit.>:[α_j^p]_nodal = (-1)^j+1 2 [(p/2)!]^2/[(p/2-j)!(p/2+j)!], [α_j^p]_staggered = (-1)^j+1[p!/(2^p (p/2)!)]^2 4/[(2j-1)(p/2-j)!(p/2+j-1)!]. These modified wave numbers can be readily used with the PIC-JR_hom algorithm to limit the number of guard cells and enable efficient parallel simulations, just as with other flavors of PSATD PIC algorithms <cit.>. §.§.§ Extension to alternating nodal-staggered grids Just like the standard and averaged formulations of PSATD PIC, the PIC-JR_hom algorithm can readily adopt the “hybrid nodal-staggered” scheme presented in <cit.>, where the fields alternate between nodal and staggered representations on the simulation grid. More precisely, the Maxwell solve and guard cell exchanges are performed on a staggered “Yee” grid while the charge/current depositions and field gathers are performed with field quantities on a separate nodal grid. This “hybrid” alternating nodal-staggered extension makes it possible to retain the advantages of low numerical dispersion and compact stencils of the integration of Maxwell's equations on a staggered grid with the stability associated with the interpolation of fields onto the particles from a nodal grid <cit.> (esp. for NCI-prone boosted-frame simulations).The application of the “hybrid” alternating nodal-staggered scheme to PIC-JR_hom leads to the steps shown in Figure <ref>. §.§.§ Extension to the time-averaged PSATD PIC algorithm In Refs. <cit.>, an extension to PSATD PIC, named time-averaged PSATD PIC (also labeled as averaged PIC for convenience), is presented that enables stable boosted-frame simulations even when the time step is larger than the Courant condition along a given axis, e.g., cΔ t=Δ z > Δ x. With the time-averaged algorithm, the field quantities that are gathered onto the particles are given by time averages of the fields on the grid, obtained by analytically integrating the E and B fields from t=nΔ t to t=(n+2)Δ t. The time-averaged PIC-JR_hom algorithm consists of the steps shown in Figure <ref>, where the analytical averages of E and B at time t=(n+1)Δ t are ⟨Ẽ^n+1⟩ = [1/(2Δ t)]∑_ℓ=0^2m-1[ (S/ck)Ẽ^n+ℓ/m +i c^2[Y_4/(c^2k^2)]𝐤×B̃^n+ℓ/m+ i[Y_4/(2ckδ t)]𝐤F̃^n+ℓ/m + [1/(ϵ_0c^2k^2)](Y_1a_J^τ - Y_5b_J^τ- Y_4c_J^τ )- i c^2𝐤(Y_6 a_ρ^τ + Y_7 b_ρ^τ+ Y_8c_ρ^τ )],⟨B̃^n+1⟩ = [1/(2Δ t)]∑_ℓ=0^2m-1[ (S/ck)B̃^n+ℓ/m - i[Y_4/(c^2k^2)]𝐤×Ẽ^n+ℓ/m+ i𝐤× (Y_6a_J^τ + Y_7b_J^τ+ Y_8c_J^τ )]. For a detailed derivation, see Appendix <ref>. §.§ Relation to the Galilean PSATD PIC algorithm This section examines the relationship between the Galilean PIC algorithm, the standard PSATD PIC algorithm and the new PIC-JR_hom algorithm.
§.§.§ Extension to alternating nodal-staggered grids

Just like the standard and averaged formulations of PSATD PIC, the PIC-JR_hom algorithm can readily adopt the "hybrid nodal-staggered" scheme presented in <cit.>, where the fields alternate between nodal and staggered representations on the simulation grid. More precisely, the Maxwell solve and guard-cell exchanges are performed on a staggered "Yee" grid, while the charge/current depositions and field gathers are performed with field quantities on a separate nodal grid. This "hybrid" alternating nodal-staggered extension allows one to retain the advantages of low numerical dispersion and compact stencils of the integration of Maxwell's equations on a staggered grid, together with the stability associated with the interpolation of fields onto the particles from a nodal grid <cit.> (especially for NCI-prone boosted-frame simulations). The application of the "hybrid" alternating nodal-staggered scheme to PIC-JR_hom leads to the steps shown in Figure <ref>.

§.§.§ Extension to the time-averaged PSATD PIC algorithm

In Refs. <cit.>, an extension to PSATD PIC, named time-averaged PSATD PIC (also labeled averaged PIC for convenience), is presented that enables stable boosted-frame simulations even when the time step is larger than the Courant condition along a given axis, e.g., cΔt = Δz > Δx. With the time-averaged algorithm, the field quantities that are gathered onto the particles are given by time averages of the fields on the grid, obtained by analytically integrating the Ê and B̂ fields from t = nΔt to t = (n+2)Δt. The time-averaged PIC-JR_hom algorithm consists of the steps shown in Figure <ref>, where the analytical averages of Ê and B̂ at time t = (n+1)Δt are

⟨Ê⟩^{n+1} = [1/(2Δt)] Σ_{ℓ=0}^{2m−1} [ [S/(ck)] Ê^{n+ℓ/m} + i c² [Y₄/(c²k²)] k × B̂^{n+ℓ/m} + i c² [Y₄/(c²k²)] F̂^{n+ℓ/m} k + [1/(ε₀c²k²)] (Y₁ a_J^τ − Y₅ b_J^τ − Y₄ c_J^τ) − i c² k (Y₆ a_ρ^τ + Y₇ b_ρ^τ + Y₈ c_ρ^τ) ],

⟨B̂⟩^{n+1} = [1/(2Δt)] Σ_{ℓ=0}^{2m−1} [ [S/(ck)] B̂^{n+ℓ/m} − i [Y₄/(c²k²)] k × Ê^{n+ℓ/m} + i k × (Y₆ a_J^τ + Y₇ b_J^τ + Y₈ c_J^τ) ].

For a detailed derivation see Appendix <ref>.

§.§ Relation to the Galilean PSATD PIC algorithm

This section examines the relationship between the Galilean PIC algorithm, the standard PSATD PIC algorithm and the new PIC-JR_hom algorithm. To this end, it is instructive to "deconstruct" the Galilean PIC algorithm by separating it into two independent steps: (i) a shift of the quantities to recenter them on a grid moving at v_gal, (ii) the integration of the PSATD equations assuming that the current source is constant along the flow moving at the Galilean velocity v_gal.

The standard Galilean PIC scheme <cit.> can then be written so as to highlight the terms that arise from step (i), namely the factors θ and θ², and those from step (ii), namely the coefficients X₁ to X₄, in Eqs. (<ref>)-(<ref>):

B̂^{n+1} = C θ² B̂^n − i [S/ω] k × (θ² Ê^n) + i X₁ k × θ Ĵ^{n+1/2} ,
Ê^{n+1} = C θ² Ê^n + i c² [S/ω] k × (θ² B̂^n) + X₄ θ Ĵ^{n+1/2} + i (X₃ θ² ρ̂^n − X₂ ρ̂^{n+1}) k ,

where the coefficients X₁, X₂, X₃ and X₄ are defined as

X₁ := [1/(ε₀(ω² − Ω²))] (θ* − θC + iΩθS/ω),
X₂ := [c²/(θ* − θ)] (θ*χ₁/(ε₀ω²) − θ(1 − C)/(ε₀ω²)),
X₃ := [c²/(θ* − θ)] (θ*χ₁/(ε₀ω²) − θ*(1 − C)/(ε₀ω²)),
X₄ := iΩX₁ − [θ/ε₀][S/ω] ,

with χ₁ := [ω²/(ω² − Ω²)] (θ* − θC + iΩθS/ω), where Ω := k·v_gal, ω := ck, C := cos(ωΔt), S := sin(ωΔt), θ := e^{iΩΔt/2} and θ* := e^{−iΩΔt/2}.

When setting v_gal = 0, the system (<ref>)-(<ref>) converges to the standard PSATD algorithm, as expected. Step (i), which corresponds to the multiplication of some of the terms by θ or θ², is the easiest to interpret: noting that a multiplication by θ := e^{iΩΔt/2} in Fourier space corresponds to shifting the terms spatially by the distance v_galΔt/2 in real space, the terms known at time n+1/2 are multiplied by θ, hence shifted by v_galΔt/2, while the terms known at time n are multiplied by θ², hence shifted by v_galΔt. These are exactly the shifts that are needed to bring the corresponding quantities to their new grid location after one time step, when assuming a Galilean frame of reference moving at v_gal.

Understanding the terms associated with step (ii) requires a more detailed comparison of how the standard and the Galilean PIC equations are obtained. The standard PSATD algorithm is derived assuming that the current density (source term) is constant over one time step on a fixed grid. The Galilean algorithm makes the same assumption but in a Galilean frame, i.e., that the current density (source term) is constant over one time step on a Galilean grid. Following this comparison, it flows logically that step (ii) ought to correspond to an integration of the PSATD equations on a fixed grid assuming that the currents are constant along a segment of length v_galΔt. Indeed, it was verified that integrating the PSATD equations based on these assumptions leads to the system (<ref>)-(<ref>) with the θ factors replaced by 1 in (<ref>)-(<ref>).

From this, it follows that the new algorithm PIC-JR_hom is related to step (ii) of the Galilean PIC algorithm in the following way. While step (ii) of Galilean PIC provides a more accurate analytical integration of the PSATD equations over one time step for a flow that moves uniformly at v_gal, PIC-JR_hom, with its arbitrary time dependence of Ĵ and ρ and its subintervals, provides a more accurate analytical integration of the PSATD equations over one time step for an arbitrary local flow of particles. The new PIC-JR_hom algorithm can thus be viewed as a possible generalization of step (ii) of the Galilean PIC algorithm. Indeed, the numerical tests discussed below show that, like the Galilean PIC algorithm, PIC-JR_hom can lead to simulations that are very stable with regard to the numerical Cherenkov instability, and that it can also remain accurate in cases where the Galilean assumption becomes less appropriate.
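Before turning to the numerical tests, a quick consistency check on this deconstruction: the Galilean coefficients should reduce to their standard-PSATD counterparts when v_gal → 0; for instance, X₁ tends to (1 − C)/(ε₀ω²). The following sketch (illustrative code, with constants and names of our own choosing) implements X₁ literally as written above, with k taken along z so that Ω = k v_gal:

import numpy as np

eps0, c = 8.8541878128e-12, 299792458.0

def galilean_X1(k, vgal, dt):
    # Coefficient X_1 of the Galilean PSATD update for a mode k along z.
    om, Om = c * k, k * vgal
    th = np.exp(1j * Om * dt / 2)
    C, S = np.cos(om * dt), np.sin(om * dt)
    return (np.conj(th) - th * C + 1j * Om * th * S / om) / (eps0 * (om**2 - Om**2))

k, dt = 2.0e6, 1.0e-16
om = c * k
# X_1 reduces to the standard-PSATD value (1 - C)/(eps0*omega^2) as vgal -> 0:
assert np.isclose(galilean_X1(k, 1.0e-12, dt),
                  (1 - np.cos(om * dt)) / (eps0 * om**2), rtol=1e-6, atol=0.0)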
§ NUMERICAL TESTS

This section presents various physics applications to test the novel PIC-JR_hom algorithm. All simulations and results have been performed and obtained with the open-source electromagnetic PIC code WarpX <cit.>. The current implementation provides the flexibility to:

* choose an arbitrary polynomial time dependency of Ĵ and ρ among the following combinations:
  * Ĵ and ρ constant in time (CCm);
  * Ĵ constant in time and ρ linear in time (CLm);
  * Ĵ and ρ linear in time (LLm);
  * Ĵ and ρ quadratic in time (QQm);
* choose the number of subintervals m within one time step;
* turn on/off the divergence cleaning term, that is, solve Maxwell's equations (<ref>) with or without the scalar field F;
* turn on/off the time averaging of the Ê and B̂ fields gathered on the macroparticles, as in (<ref>).

To assess the stability of the novel PIC-JR_hom method theoretically, the analytical dispersion equation was derived (see appendix). This allows one to predict the growth rates of the numerical Cherenkov instability in the case of a uniform drifting plasma. Moreover, a variety of WarpX simulation tests were run to further investigate the method's stability and accuracy. These tests include 2D simulations of a uniform plasma drifting with a relativistic velocity v₀ (with/without divergence cleaning, with/without subintervals, and with small/large time steps) and 3D simulations of laser wakefield acceleration (LWFA).

§.§ Stability of a uniform plasma drifting at relativistic velocity

This section presents WarpX simulations of a uniform electron-proton plasma with density n₀ = ε₀m_ec²γ₀/e² (where ε₀ is the permittivity of free space, c is the speed of light in free space, and e and m_e are respectively the electron charge and mass), drifting along z with a relativistic velocity v₀ = (0,0,v₀), with v₀ = c√(1 − 1/γ₀²) and Lorentz factor γ₀ = 130, through a two-dimensional computational domain with x_min = z_min = −6.45 μm and x_max = z_max = 6.45 μm, periodic boundary conditions and 600 × 200 grid cells along x and z, respectively.

The simulations were performed with 4 particles per cell, per species, 1 pass of bilinear filter in the transverse direction x and 4 passes in the longitudinal direction z (the direction along which the plasma is drifting). Four cases were considered:

* PIC-JR_hom with cΔt = Δx = Δz without divergence cleaning (Figure <ref>);
* PIC-JR_hom with cΔt = Δx = Δz with divergence cleaning (Figures <ref> and <ref>);
* averaged PIC-JR_hom with cΔt = 6Δx = Δz with divergence cleaning (Figure <ref>);
* PIC-JR_hom with cΔt = Δx = Δz and averaged PIC-JR_hom with cΔt = 6Δx = Δz, with divergence cleaning and finite-order stencils (Figures <ref> and <ref>);

and are discussed below in detail.

* PIC-JR_hom with cΔt = Δx = Δz without divergence cleaning

Figure <ref> shows the total electromagnetic energy as a function of ω_{p,r}t = ω_pt/√γ₀, where ω_p = √(e²n₀/(m_eε₀)) is the plasma frequency, obtained from WarpX simulations using PIC-JR_hom with CCm, LLm, QQm, CLm and LQm, for m = 1, 2, 5, 10, without divergence cleaning.
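In all of these tests, the stability diagnostic is the total electromagnetic field energy on the grid; an exponential growth of this quantity over time signals the onset of the numerical Cherenkov instability. A minimal NumPy version of such a diagnostic (our own sketch, not WarpX code) could look as follows:

import numpy as np

eps0, mu0 = 8.8541878128e-12, 1.25663706212e-6

def em_energy(E, B, cell_volume):
    # Total EM field energy; E and B are arrays of field components
    # with the last axis of size 3, sampled on the grid cells.
    u = 0.5 * (eps0 * np.sum(E**2, axis=-1) + np.sum(B**2, axis=-1) / mu0)
    return float(np.sum(u)) * cell_volume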
In this case, increasing the order of the polynomial dependency (from C, L to Q), or the number of time-step subintervals (m > 1), helps delay the onset of the instability and lower the growth rate. When using the same time dependency for Ĵ and ρ (CC, LL and QQ), for a given number of depositions per step, it is more advantageous to increase the order of the polynomial than to increase the number of subintervals m. Conversely, when using a different time dependency for Ĵ and ρ (CL, LQ), it is more advantageous to increase the number of subintervals m than to increase the order of the polynomial. Matching the time dependency of Ĵ and ρ (as in CC, LL, QQ) also increases stability, with PIC-LL5 and PIC-QQ2 being more stable than PIC-LQ10.

* PIC-JR_hom with cΔt = Δx = Δz with divergence cleaning

Figure <ref> shows the total electromagnetic energy as a function of ω_{p,r}t obtained from WarpX simulations using PIC-JR_hom with CCm, LLm, QQm, CLm and LQm for m = 1, 2, 5, 10, with divergence cleaning. The energy history from a simulation using the Galilean PIC algorithm <cit.> is also plotted for comparison. In contrast to the previous case, when divergence cleaning is used, having the same time dependency for Ĵ and ρ leads to an extraordinary level of stability that is comparable to that of the Galilean PSATD method. Conversely, turning on the divergence cleaning degrades the stability significantly when using different time dependencies for Ĵ and ρ (CL and LQ).

The remarkable stability reported in Figure <ref> when matching the time dependencies is confirmed with a theoretical NCI analysis. Figure <ref> shows the NCI growth rates, Im(ω)/ω_{p,r}, obtained from theoretical calculations and WarpX simulations for the Galilean PIC, the standard PSATD PIC (CL1), PIC-CC1 and PIC-LL1, with excellent agreement between theory and simulations. A detailed derivation of the two-dimensional dispersion equation for the PIC-JR_hom scheme, for time dependencies of Ĵ and ρ up to quadratic, is presented in Appendix <ref>, clarifying the origin of the remarkable stability that is observed with PIC-CC1, PIC-LL1 and PIC-QQ1. As explained in the appendix, it can be shown that, under some conditions that include having the same time dependency for Ĵ and ρ, key terms cancel out in the analysis matrix, leading to stable real solutions of the determinant.

* Averaged PIC-JR_hom with cΔt = 6Δx = Δz with divergence cleaning

In this test, the transverse cell size is intentionally set to a much smaller value than the longitudinal cell size, as is typical in plasma accelerator simulations in a Lorentz boosted frame of reference <cit.> with a high Lorentz factor γ₀ <cit.>, while keeping the time step at the CFL limit of the longitudinal cell size: cΔt = Δz = 6Δx. The results from Figure <ref> show that this case is more challenging for all schemes, and even the averaged Galilean PIC scheme is not stable beyond 1000 plasma periods. Increasing the order of the polynomial and the number of subintervals m both help delay the onset and lower the growth rate of the instability, slowly for CLm but quite effectively for CCm, LLm and QQm, with increasing the number of subintervals m being the most effective strategy for a given number of depositions per time step.

* PIC^p-JR_hom with cΔt = Δx = Δz and averaged PIC^p-JR_hom with cΔt = 6Δx = Δz, with divergence cleaning and finite-order stencils

This test shows numerical (Figure <ref>) and theoretical (Figure <ref>) evidence that using a stencil at finite order p with PIC^p-LLm leads to a degradation of the stability that increases as the order p decreases. This is because the NCI resonant modes, caused by temporal and spatial aliasing, depend on the stencil order:

[k_{x,res}^p] = √( ([k_z^p]v₀/c + m_z(2π/Δz)v₀/c − 2πm_t/(cΔt))² − [k_z^p]² ),

for any m_z, m_t ∈ ℤ, where m_z is the spatial alias index and m_t is the temporal alias index <cit.>.
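This resonance condition is easy to evaluate; the short sketch below (again our own illustration) returns the resonant [k_{x,res}^p] for given aliases (m_z, m_t), taking the modified longitudinal wave number [k_z^p] as input and returning NaN where no real resonance exists:

import numpy as np

def k_x_resonant(kz_p, dz, dt, v0, m_z=0, m_t=0, c=299792458.0):
    # NCI resonant transverse wave number for aliases (m_z, m_t);
    # kz_p is the finite-order modified longitudinal wave number.
    a = kz_p * v0 / c + m_z * (2 * np.pi / dz) * v0 / c - 2 * np.pi * m_t / (c * dt)
    disc = a**2 - kz_p**2
    return np.sqrt(np.where(disc >= 0.0, disc, np.nan))

Scanning kz_p over the resolved band for the (m_z, m_t) = (0, 0) mode exhibits the relocation of the resonance discussed next.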
As the stencil order gets lower, such resonant modes relocate to lower wave numbers where the resonance is stronger, as can be seen in Figure <ref>, which shows the theoretical NCI growth rate at different spectral orders, p = 8, 16, 32. A non-zero growth rate is observed solely along the NCI resonant mode that is caused by aliasing between the temporal m_t = 0 and spatial m_z = 0 modes. These results indicate that the choice of stencil order will depend on the total duration of the simulations (as measured in plasma periods) for a given application.

§.§ Laser-plasma acceleration

This section demonstrates the extension of the stability properties observed in the uniform plasma cases to realistic 3D simulations of laser wakefield acceleration (LWFA) <cit.>. In these runs, a Gaussian laser pulse with amplitude a₀ = 1.7, duration τ = 73.3 fs and waist w₀ = 50 μm is injected at the entrance of a parabolic plasma channel with a background density n₀ = 10^18 cm^-3 on axis. The simulations are run in a Lorentz boosted frame of reference <cit.> with γ₀ = 60, using the novel PIC^16-JR_hom scheme (stencil order p = 16 in all directions) with hybrid alternating nodal-staggered grids (using field and current centering of order 16 in all directions) <cit.>. Similarly to the uniform plasma case, a bilinear filter was applied to the current and charge densities at each time step, with 4 passes in the z direction and 1 pass in the other directions. The simulations were run on the Oak Ridge Leadership Computing Facility (OLCF) supercomputer Summit using 24 nodes (144 GPUs), with domain decomposition along both x and z, using 24 guard cells in each direction. The longitudinal resolution (in the boosted frame) was set to Δz = (1+β₀)γ₀λ_lab/24 = 4.08 μm, where β₀ = √(1 − 1/γ₀²) and λ_lab = 0.8 μm is the driving laser wavelength in the laboratory frame, while the transverse resolution was Δx = 0.68 μm, so that Δz = 6Δx. Simulations were also performed with the standard and averaged Galilean PIC^16 algorithms <cit.> for reference.

Figure <ref> displays snapshots of the longitudinal electric field E_z from simulations running the Galilean PIC^16 and the PIC^16-JR_hom algorithms at time t = 28.05 ps (which corresponds to ω_{p,r}t = 84.3) with different simulation time steps: (a) cΔt = Δx = Δz/6, (b) cΔt = 3Δx = Δz/2 and (c) cΔt = 6Δx = Δz. Figure <ref> shows the corresponding lineouts at x = 0 for a selection of runs. Table <ref> compares the performance of the various runs in each case. When using the "small" time step cΔt = Δx = Δz/6, the PIC^16-CC1 algorithm is as effective as the standard Galilean PIC^16 algorithm for mitigating the NCI instability (which emerges at the end of the second stage in the simulations using PIC^16-CL1), with around 20% speedup. For larger time steps cΔt = 3Δx = Δz/2 and cΔt = 6Δx = Δz, although the averaged Galilean PIC^16 method is stable, it does not produce accurate physics results, leading to a very diminished amplitude of the electric field in the second stage.
Instead, the novel averaged PIC^16-JR_hom method is stable and produces accurate results provided that the number of depositions and the number of time-step subintervals are high enough. For cΔt = 3Δx = Δz/2, both PIC^16-CC1 and PIC^16-LL1 are stable and accurate, with respective speedups of approximately 1.9x and 1.5x as compared to the Galilean reference case with small time steps. For cΔt = 6Δx = Δz, both PIC^16-CC1 and PIC^16-LL1 are unstable, while PIC^16-CC2, PIC^16-LL2 and PIC^16-QQ1 are stable and accurate, with respective speedups of approximately 2.1x, 1.7x and 1.8x as compared to the Galilean reference case with small time steps. These results show that the PIC^p-JR_hom method is effective, efficient and versatile for controlling the numerical Cherenkov instability in plasma accelerator simulations, both in cases for which other methods (e.g., averaged Galilean PIC) also apply and in other cases that happen to be more challenging for the other methods.

§ CONCLUSION

A novel formulation of the pseudo-spectral analytical time-domain Particle-In-Cell algorithm is proposed and analyzed. The formulation includes an additional term of "hyperbolic divergence cleaning" and a relaxation of the standard assumption of a constant time dependency of the current density over one time step. Extensions of the algorithm to finite-order stencils, alternating nodal-staggered grids and time averaging over a time step were also presented. Tests and analyses revealed that assuming the same time dependency for the evolution of the charge and current densities over one time step leads to excellent stability with regard to the numerical Cherenkov instability. A detailed analysis of the dispersion relation of the new algorithm (see Appendix <ref>) provides a hint that explains this stability. The new algorithm is found to be effective, efficient and versatile for controlling the numerical Cherenkov instability in plasma accelerator simulations, both in cases for which other methods (e.g., Galilean PIC) apply and, more importantly, in other cases that happen to be more challenging for the other methods. A possible extension of the algorithm for this particular application could be to incorporate the Galilean PIC algorithm in each subinterval, which should provide enhanced stability while preserving the versatility of the new scheme.

While the application of the new algorithm to the modeling of plasma acceleration has proven successful, its application to other domains must be explored with care. For example, initial tests of the application of the method to the modeling of relativistic plasma shocks <cit.> have led to the observation of unphysical effects that have been tentatively attributed to a spurious coupling between the unphysical longitudinal electric-field waves associated with divergence cleaning (from the term F in Eq. <ref>) and the plasma. Further studies are needed to fully understand the underlying mechanisms and propose possible remedies.

§ CONNECTION BETWEEN THE MODIFIED SYSTEM OF MAXWELL'S EQUATIONS AND A POTENTIAL FORMULATION

It is instructive to derive the modified system of Maxwell's equations (<ref>) in its potential form, starting with

∂E/∂t = c² ∇×B − J/ε₀ + c² ∇F ,
∂B/∂t = −∇×E ,
∂F/∂t = ∇·E − ρ/ε₀ ,
∇·B = 0 .

Eq. (<ref>) implies that B can be derived from a potential A, B = ∇×A, which, when inserted in Eq.
(<ref>), gives ∇×(E + ∂A/∂t) = 0. This means that E + ∂A/∂t can be written as the gradient of a potential Φ, giving

E = −∇Φ − ∂A/∂t .

Plugging (<ref>) in (<ref>) and (<ref>) leads to

∇²Φ + ∂(∇·A)/∂t = −ρ/ε₀ − ∂F/∂t ,
∇²A − [1/c²] ∂²A/∂t² − ∇(∇·A + [1/c²] ∂Φ/∂t) = −μ₀J + ∇F ,

which, choosing Φ and A that verify the Lorentz gauge ∇·A + [1/c²] ∂Φ/∂t = 0, gives

∇²Φ − [1/c²] ∂²Φ/∂t² = −ρ/ε₀ − ∂F/∂t ,
∇²A − [1/c²] ∂²A/∂t² = −μ₀J + ∇F ,
∇²F − [1/c²] ∂²F/∂t² = μ₀ (∇·J + ∂ρ/∂t) .

A gauge transformation

A' = A − ∇Λ ,  ϕ' = ϕ + ∂Λ/∂t ,

with

F = (−∇² + [1/c²] ∂²/∂t²)Λ = −(∇·A' + [1/c²] ∂Φ'/∂t)

then leads to

∇²Φ' − [1/c²] ∂²Φ'/∂t² = −ρ/ε₀ ,
∇²A' − [1/c²] ∂²A'/∂t² = −μ₀J .

This is consistent with the derivation given in <cit.>, where Eqs. (<ref>)-(<ref>) were derived from Maxwell's equations in the Lorentz gauge form (i.e., the form of (<ref>)-(<ref>)) with the assumption that J = J₀ + δJ, where J₀ is the portion of J that verifies the continuity equation ∂ρ/∂t + ∇·J₀ = 0, and defining F = −∇·δA such that A' = A + δA with ∇·A' + [1/c²] ∂Φ'/∂t = 0. In addition to showing that the term F can arise from considerations other than a "divergence cleaning" term, this derivation also highlights how F relates more directly to the continuity equation via Eq. (<ref>) and to gauges via Eq. (<ref>).

§ DERIVATION OF THE PIC-JRHOM EQUATIONS

We first rewrite Eqs. (<ref>) in an equivalent second-order differential form,

∂²Ê/∂t² + c²k²Ê = −[1/ε₀] (∂Ĵ/∂t + i c² ρ̂ k) ,
∂²B̂/∂t² + c²k²B̂ = [1/ε₀] i k × Ĵ ,
∂²F̂/∂t² + c²k²F̂ = −[1/ε₀] (∂ρ̂/∂t + i k·Ĵ) ,

and then sequentially integrate them analytically over each subinterval [nΔt + ℓδt, nΔt + (ℓ+1)δt], ℓ ∈ [0,m−1] with δt = Δt/m, assuming that the current and charge densities are piecewise functions of time, given by Eqs. (<ref>)-(<ref>). Each of those equations can be expressed in the following generalized form, with a right-hand side that is a time polynomial up to order two:

(∂²/∂t² + c²k²) f = Σ_{j=0}^{2} a_{0j} t^j ,

where {a_{0j}}_{j=0}^{2} are known coefficients for any given f = Ê, B̂, F̂. The general solution of such a second-order equation with constant coefficients is

f(t) = C₁ cos(ω(t − t_{n+ℓ/m})) + C₂ sin(ω(t − t_{n+ℓ/m})) + [1/ω²] ( C₃ (t − t_{n+(ℓ+1/2)/m})² + C₄ (t − t_{n+(ℓ+1/2)/m}) + C₅ ),

where {C_k}_{k=1}^{5} are integration coefficients to be determined. The coefficients C_k with indexes k = 3, 4, 5 for any given f = Ê, B̂, F̂ can be determined by solving a system of linear equations, obtained from the substitution of Eq. (<ref>) into the corresponding Eq. (<ref>) and evaluated at the times t_{n+ℓ/m}, t_{n+(ℓ+1/2)/m} and t_{n+(ℓ+1)/m}, while the remaining coefficients C₁ and C₂ can be determined from the initial conditions f(t)|_{t_{n+ℓ/m}} and ∂_t f(t)|_{t_{n+ℓ/m}}, respectively:

C₁ = f(t_{n+ℓ/m}) − [ C₃ (δt/2)² + C₄ (δt/2) + C₅ ]/ω² ,
C₂ = ∂_t f(t_{n+ℓ/m}) − [ 2C₃ (δt/2) + C₄ ]/ω² .

The expressions of the field components f(t_{n+(ℓ+1)/m}) at the next time subinterval are then given by:

f(t_{n+(ℓ+1)/m}) = C₁ cos(ωδt) + C₂ sin(ωδt) + [1/ω²] ( C₃ (δt/2)² + C₄ (δt/2) + C₅ ).
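The structure of this particular solution can be verified symbolically. The following SymPy sketch (ours, with the source written in plain powers of t for brevity) checks that a quadratic-in-time right-hand side is solved by a quadratic polynomial shifted by −2a₂/ω⁴:

import sympy as sp

t, w = sp.symbols("t omega", positive=True)
a0, a1, a2 = sp.symbols("a0 a1 a2")

# Particular solution of f'' + w**2 * f = a0 + a1*t + a2*t**2:
f_p = (a2 * t**2 + a1 * t + a0 - 2 * a2 / w**2) / w**2
residual = sp.diff(f_p, t, 2) + w**2 * f_p - (a0 + a1 * t + a2 * t**2)
assert sp.simplify(residual) == 0

The cosine and sine terms then carry the two remaining constants fixed by the initial conditions, exactly as above.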
§ DERIVATION OF THE AVERAGED PIC-JRHOM EQUATIONS

The notation ⟨f(t)⟩^{n+1} is introduced to refer to the average of any given function f(t) over the time interval [nΔt, (n+2)Δt] as

⟨f⟩^{n+1} = [1/(2Δt)] ∫_{t_n}^{t_n+2Δt} f(t') dt' ,  where f = Ê, B̂.

For any given number of subintervals m, the integral in Eq. (<ref>) can be split into a sum of 2m integrals over [t_n + ℓδt, t_n + (ℓ+1)δt], ℓ = 0, …, 2m−1, as

⟨f⟩^{n+1} = [1/(2Δt)] Σ_{ℓ=0}^{2m−1} ∫_{t_n+ℓδt}^{t_n+(ℓ+1)δt} f(t') dt' ,  where f = Ê, B̂.

The averaged ⟨Ê⟩ and ⟨B̂⟩ fields are obtained through sequential integration of Eq. (<ref>) over each subinterval [t_n + ℓδt, t_n + (ℓ+1)δt], ℓ = 0, …, 2m−1, and then substituted into Eq. (<ref>):

∫_{t_n+ℓδt}^{t_n+(ℓ+1)δt} Ê(t') dt' = [S/(ck)] Ê^{n+ℓ/m} + i c² [Y₄/(c²k²)] k × B̂^{n+ℓ/m} + i c² [Y₄/(c²k²)] F̂^{n+ℓ/m} k + [1/(ε₀c²k²)] (Y₁ a_J − Y₅ b_J − Y₄ c_J) − i c² k (Y₆ a_ρ + Y₇ b_ρ + Y₈ c_ρ) ,

∫_{t_n+ℓδt}^{t_n+(ℓ+1)δt} B̂(t') dt' = [S/(ck)] B̂^{n+ℓ/m} − i [Y₄/(c²k²)] k × Ê^{n+ℓ/m} + i k × (Y₆ a_J + Y₇ b_J + Y₈ c_J) ,

with

Y₆ = [1/(6ε₀c⁵k⁵δt²)] ( (ckδt)³ − 3(ckδt)² S − 12 ckδt (1+C) + 24 S ),
Y₇ = [1/(2ε₀c⁴k⁴δt)] ( ckδt S + 2(C−1) ),
Y₈ = [δt/(ε₀c²k²)] ( 1 − S/(ckδt) ).

§ DISPERSION RELATION FOR THE PIC-JRHOM ALGORITHM

The 2D dispersion relation for Eqs. (<ref>) is derived to analyze the algorithm's stability with respect to the numerical Cherenkov instability (NCI), for a uniform plasma flowing through a periodic grid along the z-axis with a velocity v₀ = (0,0,v₀), where v₀ = c(1 − 1/γ₀²)^{1/2}. Following the analysis from <cit.>, we consider the discretized perturbed Vlasov equation, expressed in Fourier space:

δf̂^{n+1/2}(k_m, p) e^{i k_m·v Δt/2} − δf̂^{n−1/2}(k_m, p) e^{−i k_m·v Δt/2} + qΔt Ŝ(k_m) [Ê^n(k) + v × B̂^n(k)]·∂f₀/∂p = 0 ,

where f₀ = n₀ δ(p − mγ₀v₀) is the distribution function of the uniform plasma in a state of equilibrium, and δf is a perturbation to f₀. The discretized formulas for the deposited current and charge in Fourier space at any time ℓΔt, ℓ ∈ [n, n+1], centered around δf̂^{n+1/2}, are given by

Ĵ^ℓ(k) = Σ_m S(k_m) ∫ dp q v δf̂^{n+1/2}(k_m, p) e^{−i k_m·v (ℓ−(n+1/2))Δt} ,
ρ̂^ℓ(k) = Σ_m S(k_m) ∫ dp q δf̂^{n+1/2}(k_m, p) e^{−i k_m·v (ℓ−(n+1/2))Δt} .

Then, assuming the same e^{−iωt} time evolution for Ê, B̂, F̂, Ĵ, ρ̂ and δf̂, with the following ansatz,

Ê^n(k) = Ê(k) e^{−iωnΔt} ,  δf̂^{n+1/2}(k_m, p) = δf̂(k_m, p) e^{−iω(n+1/2)Δt} ,  Ĵ^n(k) = Ĵ(k) e^{−iωnΔt} ,  ρ̂^n(k) = ρ̂(k) e^{−iωnΔt} ,

equation (<ref>) yields

δf̂(k_m, p) = −[i q Δt/2] S(k_m) [Ê(k) + v × B̂(k)]·(∂f₀/∂p) / sin((ω − k_m·v)Δt/2) .

Substituting the Vlasov equation (<ref>) into (<ref>)-(<ref>) gives the following expressions for the deposited current Ĵ(k) and the charge ρ̂(k):

Ĵ = [i c k ε₀/T] ( ξ₀ Ẽ + (ξ·Ẽ) v₀/c ) ,  ρ̂ = [i k ε₀/T] ( ξ·Ẽ ) ,

with

Ẽ(k) = Ê(k) + v₀ × B̂(k) − v₀ (v₀·Ê(k))/c² ,
ξ₀ = [Tω_p²/(γ₀ck)] Σ_{m=−∞}^{+∞} S²(k_m) · 1/[(2/Δt) s_ω'] ,
ξ = [Tω_p²/(γ₀k)] Σ_{m=−∞}^{+∞} S²(k_m) · k_m c_ω'/[(2/Δt) s_ω']² ,

where T = Π_i [1 − sin²(k_iΔ_i/2)] is one pass of a binomial smoothing operator, ω_p = (n₀q²m_e^{−1}ε₀^{−1})^{1/2} is the plasma frequency, and S(k_m) is the particle shape factor. Still following <cit.>, Eqs. (<ref>) are then rewritten in the time-symmetrical form

(Ê^{n+(ℓ+1)/m} − Ê^{n+ℓ/m}) = [iS/((1+C)ck)] c² k × (B̂^{n+(ℓ+1)/m} + B̂^{n+ℓ/m}) + [iS/((1+C)ck)] c² (F̂^{n+(ℓ+1)/m} + F̂^{n+ℓ/m}) k + [1/(ε₀ω)] ( Y₉ a_J − 2S(1+C)^{−1} c_J ) − [ic²/(ε₀c²k²)] Y₁₀ b_ρ k ,

(B̂^{n+(ℓ+1)/m} − B̂^{n+ℓ/m}) = −[iS/((1+C)ck)] k × (Ê^{n+(ℓ+1)/m} + Ê^{n+ℓ/m}) + [i/(ε₀c²k²)] Y₁₀ k × b_J ,

(F̂^{n+(ℓ+1)/m} − F̂^{n+ℓ/m}) = [iS/((1+C)ck)] k·(Ê^{n+(ℓ+1)/m} + Ê^{n+ℓ/m}) − [i/(ε₀c²k²)] Y₁₀ k·b_J + [1/(ε₀ck)] ( Y₉ a_ρ − 2S(1+C)^{−1} c_ρ ).

Substitution of equations (<ref>) in Eqs. (<ref>)-(<ref>) gives

s_ω Ê = −t_ck c_ω k̂ × cB̂ − c_ω t_ck cF̂ k̂ + i (Y₉ ã_ω^τ/2 − t_ck c̃_ω^τ) Ĵ + (Y₁₀ b̃_ω^τ/2) ρ̂ k̂ ,
s_ω cB̂ = t_ck c_ω k̂ × Ê − (Y₁₀ b̃_ω^τ/2) k̂ × Ĵ ,
s_ω cF̂ = −t_ck c_ω k̂·Ê + k̂·Ĵ (Y₁₀ b̃_ω^τ/2) + i (Y₉ ã_ω^τ/2 − t_ck c̃_ω^τ) ρ̂ .

Projecting Eqs. (<ref>) and (<ref>) along x and z and Eq. (<ref>) along y gives the following 2D dispersion equation in matrix form:

𝐌 𝐕^T = 0 ,

where 𝐕 = (E_x, E_z, cB_y, cF, J_x/(ckε₀), J_z/(ckε₀), ρ/(kε₀)), k̂ = k/k is the normalized wave vector, and the matrix 𝐌 reads, row by row:

𝐌 = [ −s_ω , 0 , c_ω k̂_z t_ck , −c_ω k̂_x t_ck , iTχ_{τ_J} , 0 , −iT k̂_x ψ_{τ_ρ} ;
      0 , −s_ω , −c_ω k̂_x t_ck , −c_ω k̂_z t_ck , 0 , iTχ_{τ_J} , −iT k̂_z ψ_{τ_ρ} ;
      c_ω k̂_z t_ck , −c_ω k̂_x t_ck , −s_ω , 0 , iT k̂_z ψ_{τ_J} , −iT k̂_x ψ_{τ_J} , 0 ;
      −c_ω k̂_x t_ck , −c_ω k̂_z t_ck , 0 , −s_ω , −iT k̂_x ψ_{τ_J} , −iT k̂_z ψ_{τ_J} , iTχ_{τ_ρ} ;
      (i/T)ξ₀ , 0 , −(i/T)ξ₀β₀ , 0 , −1 , 0 , 0 ;
      (i/T)ξ_x β₀ , (i/T)(1−β₀²)(ξ₀ + ξ_zβ₀) , −(i/T)ξ_x β₀² , 0 , 0 , −1 , 0 ;
      (i/T)ξ_x , (i/T)ξ_z(1−β₀²) , −(i/T)ξ_x β₀ , 0 , 0 , 0 , −1 ].
The matrix coefficients in 𝐌 that depend on the time dependency of the current and charge densities Ĵ and ρ are summarized in Table <ref>. For example, the upper index τ_{J/ρ} in the coefficients ψ_{τ_{J/ρ}} and χ_{τ_{J/ρ}} indicates the time dependency of Ĵ and ρ and can be constant (C), linear (L) or quadratic (Q). The other coefficients are given by:

c_ω = cos(ωδt/2),  s_ω = sin(ωδt/2),  t_ω = s_ω/c_ω,
c_ω' = cos((ω − k_m·v) Δt/2),  s_ω' = sin((ω − k_m·v) Δt/2),
k_m = k + 2πm/Δ,  m ∈ ℤ,  t_ck = tan(ckδt/2),
Y₉ = [t_ck (8 − c²k²δt²) − 4ckδt] / [(1+C)(ckδt)²],
Y₁₀ = 1 − 2t_ck/(ckδt),
χ_τ = Y₉ ã_ω^τ − t_ck c̃_ω^τ,  ψ_τ = Y₁₀ b̃_ω^τ.

The dispersion relation is given by computing the determinant of 𝐌 using the Sarrus rule. Interestingly, when the charge and current densities have the same temporal dependency, e.g., with CC, LL or QQ, the determinant simplifies to the straightforward expression det(𝐌) = α₁α₂, where

α₁ = T³ [ ξ₀ ( β₀ k̂_z (χ_τ c_ω t_ck + ψ_τ s_ω) − (χ_τ s_ω + ψ_τ c_ω t_ck) ) + (c_ω² t_ck² − s_ω²) ],
α₂ = (c_ω² t_ck² − s_ω²) + (1 − β₀²) [ (ξ_x k̂_x + ξ_z k̂_z)(χ_τ c_ω t_ck + ψ_τ s_ω) + ψ_τ c_ω t_ck (ξ₀ + ξ_z β₀) + χ_τ (ξ_z c_ω β₀ + ξ₀ s_ω) ].

Here, such a simplification is possible due to the presence of similar terms of opposite sign that cancel each other when the charge and current densities have the same time dependency. For example, terms like (ψ_{τ_J})² k̂_x k̂_z c_ω² − ψ_{τ_J} ψ_{τ_ρ} k̂_x k̂_z c_ω² vanish, since ψ_{τ_J} = ψ_{τ_ρ} = ψ_τ (τ_J = τ_ρ = τ).

Moreover, in the asymptotic limit, assuming that (i) δω = ω − k_m·v₀ is small and (ii) considering an ultra-relativistic regime, e.g., β₀ = v₀/c = 1, the determinant equation reduces to:

ξ₀ ( k̂_z (χ_τ c_{k_mv₀} t_ck + ψ_τ s_{k_mv₀}) − (χ_τ s_{k_mv₀} + ψ_τ c_{k_mv₀} t_ck) ) + (c_{k_mv₀}² t_ck² − s_{k_mv₀}²) = 0 ,

where c_{k_mv₀} = cos(k_m·v₀ δt/2), s_{k_mv₀} = sin(k_m·v₀ δt/2), and ξ₀^τ is proportional to 1/δω and reads

ξ₀^τ = [Tω_p² S²(k_m)/(γ₀ck)] (1/δω) + [Tω_p²/(γ₀ck)] Σ_{j=−∞, j≠m}^{+∞} S²(k_j) · 1/[(2/Δt) s'_{k_jv₀}] = α_m/δω + β_m .

Finally, we obtain a first-order equation for δω with real coefficients,

δω = −α_m ( k̂_z (χ_τ c_{k_mv₀} t_ck + ψ_τ s_{k_mv₀}) − (χ_τ s_{k_mv₀} + ψ_τ c_{k_mv₀} t_ck) ) / { β_m ( k̂_z (χ_τ c_{k_mv₀} t_ck + ψ_τ s_{k_mv₀}) − (χ_τ s_{k_mv₀} + ψ_τ c_{k_mv₀} t_ck) ) + (c_{k_mv₀}² t_ck² − s_{k_mv₀}²) } .

It follows that, under the assumptions (i)-(ii), the determinant has only real coefficients, δω is real, and the algorithm is stable.

This research used the open-source particle-in-cell code WarpX (<https://github.com/ECP-WarpX/WarpX>). We acknowledge all WarpX contributors. This research was supported by the Exascale Computing Project (17-SC-20-SC), a collaborative effort of the U.S. Department of Energy Office of Science and the National Nuclear Security Administration. This research was performed in part under the auspices of the U.S. Department of Energy by Lawrence Berkeley National Laboratory under Contract DE-AC02-05CH11231. This research used resources of the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725. The data that support the findings of this study are available from the corresponding author upon reasonable request. Olga Shapoval derived and implemented the algorithms in their final forms, performed the numerical analyses and numerical tests. Edoardo Zoni contributed to the derivation, implementation and testing of the algorithms. Rémi Lehe wrote the initial implementation of the algorithm in WarpX and discussed the results.
Maxence Thévenet performed numerical tests of an early prototype implemented in the code Warp. Jean-Luc Vay proposed the concept and implemented an early prototype in the code Warp. | http://arxiv.org/abs/2312.16297v1 | {
"authors": [
"Olga Shapoval",
"Edoardo Zoni",
"Remi Lehe",
"Maxence Thevenet",
"Jean-Luc Vay"
],
"categories": [
"physics.plasm-ph",
"physics.acc-ph",
"physics.comp-ph"
],
"primary_category": "physics.plasm-ph",
"published": "20231226190142",
"title": "PIC-JR$_{ho}$m: a pseudo-spectral Particle-In-Cell formulation with arbitrary charge and current densities time dependencies for the modeling of relativistic plasmas"
} |
We make use of the cotangent complex formalism developed by Lurie to formulate Quillen cohomology of algebras over an operad in a general base category. Moreover, using the same machinery, we introduce a spectral Hochschild cohomology theory for enriched operads and algebras over them. We prove further that both the Quillen and Hochschild cohomologies of algebras over an operad can be controlled by the corresponding cohomologies of the operad itself. When passing to the category of simplicial sets, we assert that both these cohomology theories for operads, as well as their associated algebras, can be calculated in the same framework of spectrum-valued functors on the twisted arrow ∞-category of the operad of interest. This assertion allows us to establish a connection between Hochschild and Quillen cohomologies of simplicial operads. Additionally, we formulate the Quillen principle for E_n-algebras in simplicial sets, which provides a convenient fiber sequence relating the Hochschild and cotangent complexes of such objects. This determines an unstable analogue of a significant result obtained by Francis and Lurie. Our strategy introduces a novel perspective, focusing solely on the intrinsic properties of the twisted arrow ∞-categories of operads.

§ INTRODUCTION

In our prior work <cit.>, we have formulated Quillen cohomology of operads enriched over a general base category, thus expanding the work of Harpaz-Nuiten-Prasma <cit.> on Quillen cohomology of enriched categories. The aim of the present paper is to consider further the Quillen cohomology of algebras over an operad and, in addition, to introduce a spectral Hochschild cohomology theory specifically tailored for such objects. Another significant goal of the paper is to explore the cohomologies of simplicial operads and their associated algebras. More specifically, we aim to provide additional clarification while simultaneously broadening the scope of the results from [<cit.>, 6].

§.§ Classical Hochschild and Quillen cohomologies

The cohomology theories mentioned have garnered widespread attention in the literature due to their versatility and broad applications. From the deformations of various algebraic structures to the obstruction theory within homotopical algebra, through realms like representation theory and noncommutative geometry, Hochschild and Quillen cohomologies stand as pivotal tools indispensable in the study of these theories. Hochschild cohomology (cf.
<cit.>) was initially established as a fundamental cohomology theory for associative algebras. Let k be a commutative ring and let A be a k-algebra. First recall that the structure of a bimodule over A is equivalent to that of a (left) module over A⊗A^op. Conceptually, the n'th Hochschild cohomology group of A with coefficients in an A-bimodule N is given by

HH^n(A;N) := Ext^n_{A⊗A^op}(A,N).

Furthermore, the Hochschild cohomology formalism can be applied in a natural fashion to the broader framework of dg categories (cf., e.g., <cit.>). Quillen cohomology (cf. <cit.>), on the other hand, is more universal. Let X be an object living in a model category 𝒞. Consider the category Ab(𝒞_{/X}) of abelian group objects over X. Let us assume that Ab(𝒞_{/X}) comes equipped with a model structure transferred along the free-forgetful adjunction Ab : 𝒞_{/X} ⇄ Ab(𝒞_{/X}) : U. Then one defines the cotangent complex of X to be L_X := 𝕃Ab(X), the derived abelianization of X. Now the n'th Quillen cohomology group of X with coefficients in some M ∈ Ab(𝒞_{/X}) is given by the formula

HQ^n(X;M) := π₀ Map^h_{Ab(𝒞_{/X})}(L_X, M[n]),

in which M[n] signifies the n-suspension of M in Ab(𝒞_{/X}). We will refer to Ab(𝒞_{/X}) as the abelianization of 𝒞 at X.

In practice, the above notion usually leads to canonical representations of the object of interest. For example, the abelianization of the category of groups at some group G agrees with the category of G-modules. An analogue holds for commutative algebras, and for Lie algebras as well. On the other hand, the abelianization of k-algebras at A is nothing but the category of A-bimodules. More generally, if we let 𝒫 be an operad in chain complexes and let B be a 𝒫-algebra, then the abelianization of 𝒫-algebras at B is equivalent to the category of B-modules over 𝒫.

Beyond their flexibility and diverse applications, what stands out remarkably is the intimate relationship between the two cohomologies. More precisely, it was observed by Quillen <cit.> that there is a fiber sequence of A-bimodules of the form

L_A ⟶ A⊗A ⟶ A,

in which A⊗A represents the free A-bimodule generated by k and the second map is classified by the unit k → A of A. This is an important sequence because, while Quillen cohomology of A can be described via Hochschild cohomology of A, the latter is readily accessible using the well-known bar resolution.

To broaden the range of examples for these two cohomology theories, covering not only algebraic objects but also those residing within a topological setting, we will need to use the tangent category formalism, as discussed in the next subsection.
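For orientation, when A is projective as a k-module, the bar resolution mentioned above yields the familiar explicit cochain model for these groups; the following display is the standard Hochschild complex, recorded here as a reminder rather than quoted from any of the sources above:

C^n(A;N) = Hom_k(A^{⊗n}, N),  with differential

(df)(a_1 ⊗ ⋯ ⊗ a_{n+1}) = a_1 · f(a_2 ⊗ ⋯ ⊗ a_{n+1})
  + Σ_{i=1}^{n} (−1)^i f(a_1 ⊗ ⋯ ⊗ a_i a_{i+1} ⊗ ⋯ ⊗ a_{n+1})
  + (−1)^{n+1} f(a_1 ⊗ ⋯ ⊗ a_n) · a_{n+1},

so that HH^n(A;N) ≅ H^n(C^•(A;N), d).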
Additionally, according to the recent works of Harpaz-Nuiten-Prasma <cit.>, one may opt to work within the framework of model categories.§.§ Cohomologies of enriched operads and algebras over them Suppose we are given a sufficiently nice symmetric monoidal categoryequipped with a compatible homotopy theory. Letbe a C-colored operad enriched over . As per Quillen-Lurie thinking, a cohomology theory fordepends on only the choice of a homotopy theory on the category containing . What makes the situation intriguing is that 𝒫 resides in various noteworthy categories. Here are the ones of interest:∙ ∈(): the category of -enriched operads with non-fixed sets of colors,∙ ∈_C(): the category of C-colored operads in , ∙ ∈():the category of -bimodules, and∙ ∈():the category of infinitesimal -bimodules. In each case, an important invariant specific to 𝒫 will emerge, as discussed in what below.We will now endow () with the Dwyer-Kan homotopy theory, while the others are all equipped with the standard homotopy theory transferred from that on . First, we will refer to the Quillen cohomology of ∈() as the proper Quillen cohomology of . Besides that, the corresponding cohomology of ∈_C() will be called the reduced Quillen cohomology of .The next step is to decide what we mean by the (spectral) Hochschild cohomology of . Naively thinking, if we consideras a monoid(in the monoidal category _C() of C-collections) then it is natural to let ∈() represent the Hochschild complex of the operad , just as in the definition of Hochschild cohomology of associative algebras. Nevertheless, this falls short of being a good choice because the structure of operadic bimodules is not linear in general, and hence resulting in possible challenges in calculations. From this point of view, we will let the Hochschild complex of the operadbe classified by ∈(), based on the fact that infinitesimal -bimodules are exactly the linearization of the usual notion of -bimodules. Indeed, the structure of infinitesimal -bimodules is linear in the sense that the forgetful functor() _C()preserves all colimits as long as the monoidal product indistributes over colimits. Nonetheless, note that the category () remains unstable unlessis stable. Therefore, to assure that the obtained cohomology theory is abelian (i.e. the cohomology groups are all abelian), we will let the cotangent complex of ∈(), denoted by_^∈_(), represent the Hochschild complex of the operad . More precisely, the Hochschild cohomology ofwith coefficients in some M∈_() is the space ^⋆(;M):= ^__()(_^ , M).The above definition has full generality because when the base categoryis stable then we obtain an equivalence _() ≃(), which identifies _^ with ∈() itself.Now let A∈_() be an algebra over . It is important to note that A may also be regarded as an object of _^A the category of A-modules over . This latter notion provides canonical representations for A∈_(), just like the classical notions of bimodules over an associative algebra, modules over a group or a Lie algebra, etc. First, since we are mainly interested in the -algebra structure on A, it is obvious to exhibit the Quillen cohomology of A∈_() as the proper Quillen cohomology of A. Moreover, according to the logic as in the above paragraph, we will exhibit _A^∈_A_^A, i.e. the cotangent complex of A∈_^A, as the Hochschild complex of the -algebra A. Now the Hochschild cohomology of A with coefficients in some N∈_A_^A is the space^⋆(A;N):= ^__A_^A(_A^,N). 
Further discussion on the naturality of the operadic Hochschild cohomology proposed will be given in <ref>.§.§ Main statements We shall now summarize our main results, divided into three parts as follows.§.§.§ Formulations and comparison theorem It is extremely noteworthy that there exists a natural passage between infinitesimal -bimodules (resp. -bimodules) and A-modules over (resp. -algebras) performed by the adjunction (-) ∘_ A ⊣_A,- (see <ref>). Moreover, there is a commutative square of adjunctions between tangent categories:[row sep=3.5em, column sep =3.5em] 𝒯_() [r, shift left=1.5] [d, shift right=1.5, "(-)∘^_ A"']𝒯_() [l, shift left=1.5, "⊥"'] [d, shift left=1.5, "(-)∘^_ A"]_A_^A [r, shift right=1.5] [u, shift right=1.5, "⊣", "_A,-^"']_A_()[l, shift right=1.5, ""] [u, shift left=1.5, "⊢"', "_A,-^"]in which the vertical pairs are all induced by the adjunction (-)∘_ A ⊣_A,- (see Notation <ref>), and the horizontal pairs are given by the induction-restriction adjunctions. We will assume that the base categorycomes equipped with a compatible model structure. Under some mild assumptions on the triple (,,A), we obtain the following results.Let _C^n denote the C-collection given by ∅_ on all levels except that _C^n(c;c)=^n, i.e. the pointed n-sphere in , for every c∈ C. Moreover, we write ∘_C^∙∈_() for the prespectrum with(∘_C^∙)_n,n := ∘_C^n. This is in fact a suspension spectrum model for _^∈_(), i.e. the cotangent complex of ∈() (see Proposition <ref>). The following will play an active role. (<ref>, <ref>) There are weak equivalences_^∘^_ A ≃_A and_^∘^_ A ≃_A^ in _A_() and _A_^A respectively. Consequently, the cotangent complex _A ∈_A_() admits a suspension spectrum model: (∘_C^∙)∘^_ A ≃_A. Suppose further that the base categoryis stable. We have an extended commutative diagram[row sep=3.5em, column sep =3.5em] () [r, shift left=1.5, "≃"] [d, shift right=1.5, "(-)∘_ A"']𝒯_() [l, shift left=1.5, "⊥"'] [d, shift left=1.5, "(-)∘^_ A"] [r, shift left=1.5, "≃"] 𝒯_()[d, shift left=1.5, "(-)∘^_ A"] [l, shift left=1.5,"⊥"']_^A [r, shift right=1.5] [u, shift right=1.5, "⊣", "_A,-"']_A_^A[l, shift right=1.5, "", "≃" '] [u, shift left=1.5, "⊢"', "_A,-^"] [r, shift right=1.5]_A_()[u, shift left=1.5, "⊢"', "_A,-^"] [l, shift right=1.5, "", "≃" ']in which the horizontal pairs are all equivalences. We denote by _∈() with _(c_1,⋯,c_m;c) = (c_1,⋯,c_m;c)⊗_nΩ^n [(^n)^⊗ m×_1_𝒮^0 ]where the desuspension Ω(-) is taken in . The object _ is exactly the image of _^ through the identification 𝒯_()≃() (see Remark <ref>).There is a weak equivalence in _^A of the form _∘_ A ≃_A.Here we use the same notation to denote the image of _A ∈_A_() under the identification _A_()≃_^A. Consider the case =(k) with k being any commutative ring (see Examples <ref>). We can prove that the object _∈() is weakly equivalent to the infinitesimal composite product ∘_(1)_C (see <ref> and <ref> for notations). Furthermore, it can be shown that the object_∘_ A ≃ (∘_(1)_C)∘_ Ais a model for the module of Kähler differentials Ω_A∈_^A. Due to this, we may recreate a classical result in literature asserting that, in a differential graded setting, the cotangent complex of an operadic algebra is exactly equivalent to its module of Kähler differentials (cf., e.g. <cit.>). 
Detailed discussion regarding this topic will be included in a subsequent paper.The following comparison theorem proves that Hochschild and Quillen cohomologies of A can be encoded by the corresponding cohomologies of the operaditself.(<ref>, <ref>)Suppose given a fibrant object N∈_A_^A. (i) There is a natural weak equivalence ^⋆(A ; N)≃Ω^⋆( ; ^_A,N) between the Quillen cohomology of A with coefficients in N and the loop space of Quillen cohomology ofwith coefficients in ^_A,N∈𝒯_(). (ii) There is a natural weak equivalence ^⋆(A ; N)≃^⋆( ; ^_A,N) between the Hochschild cohomology of A with coefficients in N and Hochschild cohomology ofwith coefficients in ^_A,N.This theorem is proved using Proposition <ref>, and along with the assertion that under the identification _(𝒮) ≃_(), the cotangent complex _∈_(𝒮) is identified with _^[-1] ∈_() (see Proposition <ref>). When the base categoryis stable then we obtain weak equivalences that are similar to those above: ^⋆(A ; N)≃Ω^⋆( ; _A,N)and^⋆(A ; N)≃^⋆( ; _A,N)with the only difference being that N is now an object of _^A.§.§.§ Simplicial operads and algebras over themIn what follows, we will assume thatis the category _Δ of simplicial sets, and assume thatis a fibrant and Σ-cofibrant C-colored operad in _Δ. Recall from <cit.> that there is a chain of equivalences of ∞-categories _(_Δ)_∞≃__C(_Δ)_∞≃_()_∞≃_()_∞≃(() , )where () signifies the twisted arrow ∞-category ofanddenotes the ∞-category of spectra. Due to the above, both Quillen and Hochschild cohomologies ofcan be calculated in the framework of functors (). In the loc.cit, we have proved that _∈_(_Δ)_∞ is identified with _[-1] ∈(() , ) where the functor _ : () takes any operation μ∈ of arity m to _(μ) = 𝕊^⊕ m. Whilst the Hochschild complex _^∈_()_∞ is identified with 𝕊: () the constant functor on the sphere spectrum (see Proposition <ref>). Thus, after sending coefficients into (() , ), the two cohomologies ofare formulated as follows. Suppose given a functor: (). (i) (<cit.>) The Quillen cohomology ofwith coefficients inis computed by ^⋆( ; ) ≃_(() , )(_, [1]). (ii) (<ref>) Whilst the Hochschild cohomology ofwith coefficients inis given by ^⋆( ; ) ≃_(() , )(𝕊, ). In particular, the n'th Hochschild cohomology group is computed by ^n( ; ) ≅π_0_(() , )(𝕊, [n]) ≅π_-n lim. Next let A be a cofibrant -algebra classified by a map ℓ_A : _A of operads. The two cohomologies of A will take coefficients in _A_^A. For each levelwise Ω-spectrum N ∈_A_^A, we may associate to it a functor^_A,N : () .Explicitly, for each operation μ∈(c_1,⋯,c_m;c) regarded as an object of (), then ^_A,N(μ) is an Ω-spectrum such that^_A,N(μ)_k,k agrees with the fiber over ℓ_A(μ) of the simplicial map__Δ(A(c_1)×⋯× A(c_m) , N_k,k(c)) __Δ(A(c_1)×⋯× A(c_m) , A(c))that is induced by the structure map N_k,k(c) A(c) of N (see Construction <ref>).(<ref>) (i) The Quillen cohomology of A with coefficients in N is computed by ^⋆(A ; N) ≃Ω^⋆( ; ^_A,N) ≃_(() , )(_, ^_A,N). (ii) The Hochschild cohomology of A with coefficients in N is computed by ^⋆(A ; N) ≃^⋆( ; ^_A,N) ≃_(() , )(𝕊, ^_A,N). In particular, the n'th Hochschild cohomology group is simply given by ^n(A ; N) ≅π_-n lim^_A,N.§.§.§ Quillen principle for topological _n-algebrasWhile Quillen cohomology plays a central role in the study of deformation theory and obstruction theory, its actual computation is far from straightforward. 
In addressing this complexity, one approach is to establish a connection between Quillen cohomology and the notably more accessible Hochschild cohomology.This approach aligns directly with the Quillen principle that we have been developing. To be more precise, we will say that a given object of interest is subject to the Quillen principle if there exists a convenient fiber sequence relating the cotangent and Hochschild complexes of that object. The most basic example is the Quillen principle for associative algebras, as expressed by the fiber sequence (<ref>). Moreover, a variant of this principle for simplicial groups exists as well, according to <cit.>. (See also [<cit.>, 1.2] for more details).More generally, the Quillen principle for _n-algebras in stable categories was established by Francis <cit.> and Lurie <cit.>. Let us now recall their result. Letbe a stable presentable symmetric monoidal ∞-category whose monoidal product distributes over colimits, and let A be an _n-algebra in . Denote by _A^_n() the ∞-category of A-modules over _n. The stability of the base categoryimplies the existence of an equivalence of ∞-categories:_A__n() _A^_n(). (Francis <cit.>, Lurie <cit.>) There is a fiber sequence in _A^_n() of the form _A(1_) η_A A _A[n] in which _A(1_) signifies the free A-module over _n generated by the monoidal unit and the map η_A is classified by the unit of A.Note that A ∈_A^_n() represents the Hochschild complex of the _n-algebra A. Thus, the fiber sequence (<ref>) yields for each N∈_A^_n() a fiber sequence Ω^n ^⋆(A;N)^⋆(A;N)|N|in which |N|:= _(1_,N) represents the underlying space of N. We prove the topological-analogue of (<ref>) (i.e. when the base categoryis the ∞-category of spaces). The progression of the proof is outlined in the content below. Now suppose thatis a fibrant and Σ-cofibrant single-colored operad in _Δ. Moreover, we will assume that (0) ≃(1) ≃ *. Let us write 𝕀∈(1) for the identity operation ofregarded as an object of (), and write 𝕀_*[𝕊] for the right Kan extension of the embedding {𝕊} along the inclusion {𝕀}(). The following establishes a connection between the Quillen and Hochschild cohomologies of(see also Remark <ref>). (<ref>) There is a fiber sequence in ((),) of the form _ι𝕀_*[𝕊] ρ(Σ^∞_+(2))^∨ where (Σ^∞_+(2))^∨ refers to the constant functor () with value (Σ^∞_+(2))^∨, i.e. the dual spectrum of the suspension spectrum Σ^∞_+(2). Clearly the above statement holds for the little n-discs operad _n. But more is true about this operad. We let μ_0∈_n(0) denote the unique nullary operation regarded as an object of (_n), and let (μ_0)_![T] : (_n) signify, for some spectrum T, the left Kan extension of the embedding {T} along the inclusion {μ_0}(_n). (<ref>) There is a diagram of Cartesian squares in ((_n),): __n[r][d] (μ_0)_![𝕊[-n+1]] [r][d]𝕀_*[𝕊] [d] 0 [r]𝕊[-n+1] [r]𝕊⊕𝕊[-n+1]where as usual 𝕊 denotes the constant functor (_n) with value 𝕊. In the above diagram, the outer Cartesian square is due to Proposition <ref>, and along with the fact that _n(2) ≃^n-1; while the Cartesian square on the right follows from Proposition <ref>. In particular, the left square is also Cartesian, and thus, it allows us to recover an important result of <cit.> asserting the existence of a fiber sequence in ((_n),): (μ_0)_![𝕊]𝕊__n[n].According to Theorem <ref>, the above is exactly the Quillen principle for the topological operad _n. Moreover, combining this with the key proposition <ref>, we may formulate the Quillen principle for topological _n-algebras as follows. 
Let A be a cofibrant _n-algebra in _Δ. We let _A(*) denote the free A-module over _n generated by a singleton, and let η_A : _A(*)A be the map in __n^A classified by the unit of A. We have a canonical map Σ^∞_+(η_A) _A^ where Σ^∞_+(η_A) refers to the image of η_A through the functorΣ^∞_+ : (__n^A)_/A_A__n^A.(<ref>)There is a fiber sequence in _A__n^A of the form Σ^∞_+(η_A) _A^_A[n] where we use the same notation for the derived image of _A∈_A__n(_Δ) under the equivalence _A__n(_Δ) ≃_A__n^A. Consequently, for a given fibrant object N ∈_A__n^A, we obtain a fiber sequence connecting the two cohomologies Ω^n^⋆(A;N) ^⋆(A;N) ^_(__n^A)_/A(η_A, Ω^∞_+N). When passing to a stable base category(e.g. chain complexes and symmetric spectra), then we obtain for each A ∈__n() a fiber sequence in __n^A() that is exactly the same as (<ref>) (see Theorem <ref>). Moreover, when working in the framework of Lurie's∞-operads, we may also recover that fiber sequence of Francis-Lurie using the fiber sequence (<ref>). § BACKGROUND AND CONVENTIONSThis section is devoted to some preliminaries relevant to the theory of enrichedoperads and their associated modules, as well as the standard homotopy theories of such objects. We also revisit the tangent category formalism, from which we may obtain the central concepts of the paper including the spectral cotangent complex and Quillen cohomology.Additionally, we present some main results from our previous work concerning some operadic versions of the well known Dold-Kan correspondence. §.§ Enriched operads We will assume that (, ⊗, 1_) is a symmetric monoidal category such that the monoidal product distributes over colimits. By definition, a symmetric C-collection inconsists of a collectionM = {M(c_1,⋯,c_n;c) | c_i,c ∈ C , n⩾0}of objects insuch that for each sequence (c_1,⋯,c_n;c) and each permutation σ∈Σ_n, there is a map of the formσ^* : M(c_1,⋯,c_n;c)M(c_σ(1),⋯,c_σ(n);c) .Such maps establish a right action on M by Σ_n. We let _C() denotethe category of symmetric C-collections in .When C is a singleton then a symmetric C-collection will be called a Σ_*-object. More concretely, a Σ_*-object M consists of a collection {M(n)}_n≥0 such that each M(n) is a right Σ_n-object in . The category of Σ_*-objects inwill be denoted by Σ_*().The category _C() admits a monoidal structure given by the composite product-∘- : _C() ×_C() _C().We will denote by _C the monoidal unit which agrees with ∅_ (i.e. the initial object of ) on all the levels except that _C(c;c)=1_ for every c∈ C. A symmetric C-colored operad inis a monoid in the monoidal category (_C(),-∘-,_C). We will write _C() to denote the category of symmetric C-colored operads in . When C is a singleton, then we will write _*() to denote the corresponding category.In the above definitions, if we forget the action by symmetric groups, then we obtain the notions of nonsymmetric C-collections and nonsymmetric C-colored operads. Given our focus solely on the symmetric context, henceforth, we will exclude the term “symmetric” when referring to any object within _C() (or _C()). An -enriched categorycan be identified with an -enriched operad concentrated in arity 1, i.e. defined by letting (c;d) := _(c,d). On the other hand, every -enriched operad has an underlying category _1 such that __1(c,d) := (c;d). Moreover, these two constructions determine an adjunction *_C()_C()where _C() denotes the category of -enriched categories with C being the fixed set of objects. 
Consider the functor ι : that is characterized by the requirement that ι preserves coproducts and that ι sends every singleton to 1_. Since ι is a (strongly) symmetric monoidal functor, it sends each operad into an operad in . This allows us to define two typical operads being the associative operadand commutative operadas objects in _*(). More concretely, for each n∈ we have (n) = Σ_n1_ with the symmetric-action being induced by the right regular representation of Σ_n and with the composition induced by the concatenation of linear orders. Whilst the operadis defined by letting (n) = 1_ for every n∈, endowed with the trivial action by the symmetric groups; and such that the composition is induced by the isomorphisms 1_⊗⋯⊗ 1_ 1_. In line with the terminologies, the operad(resp. ) encodes the category of associative monoids (resp. commutative monoids) in . For each map α : CD of sets, there is a changing-colors functor α^* : _D() _C() defined by sending ∈_D() to α^* with α^*(c_1,⋯,c_n;c) := (α(c_1),⋯,α(c_n);α(c)) .Due to this, one can organize all the colored operads in , for arbitrary sets of colors, into a single category () by taking the Grothendieck construction() := ∫_C∈_C().We refer to () as the category of -enriched operads. For more details, an object of () is the choice of a pair (C,) with C∈ and ∈_C(). Moreover, a morphism (C,)(D,) consists of a map α : CD of sets and a map f :α^* in _C().§.§ Modules over an operadIn what follows, we will recall various types of modules over an operad. Letbe a C-colored operad in . We will write (), () and () respectively standing for the categories of left -modules, right -modules and -bimodules. These notions of modules come naturally when consideringas a monoid in the monoidal category of C-collections. The structure of right -modules is linear in the sense that the forgetful functor() _C()preserves all colimits. However, an analogue in general does not hold for left -modules, and thus, does not hold for -bimodules as well. This promotes the development of infinitesimal left modules (bimodules) over an operad (cf. <cit.>), essentially coming as the linearization of the former. We will denote by () (resp. ()) the category of infinitesimal -bimodules (resp. infinitesimal left -modules). We will revisit these two notions in <ref>.It is noteworthy that each of the categories (), (), () and () can be represented as the category of algebras over an enriched operad (or -valued functors on an enriched category). For more details about these, we refer the readers to <cit.>. Moreover, according to <cit.>, there exists a discrete operad, by which we denote O_C, that encodes the category of C-colored operads in . For the case C = {*}, we will write O_* standing for that.The main interest in the study of operads is the notion of algebras over an operad. By definition, a -algebra is simply a left -module concentrated in level 0. More explicitly, a -algebra A is a collection A={A(c)}_c∈ C of objects inequipped, for each (c_1,⋯,c_n;c), with an action map (c_1,⋯,c_n;c)⊗ A(c_1)⊗⋯⊗ A(c_n)A(c).These maps must satisfy the essential axioms of associativity, unitality and equivariance. We denote by _() the category of -algebras. Just like left modules, the structure of algebras overis not linear. In response, the notion of modules over a -algebra A essentially provides the representations for A, as well as the classical notions of bimodules over an associative ring, modules over a group or a Lie algebra, etc. 
An A-module overis a collectionM={M(c)}_c∈ C of objects inequipped, for each sequence (c_1,⋯,c_n;c), with an action map of the form (c_1,⋯,c_n;c) ⊗ ⊗ _i ∈{1,⋯,n}∖{k}A(c_i) ⊗ M(c_k)M(c) . These maps must satisfy the essential axioms of associativity, unitality and equivariance. The category of A-modules overwill be denoted by _^A.According to <cit.>, the notion above can be reformulated as follows. We write _C() for the category whose objects are the pairs ( , A) with ∈_C(𝒮) and A ∈_(), and such that a morphism ( , A)( , B) consists of a mapin _C(𝒮), and along with a map AB of -algebras. There exists an evident embedding δ : _C(𝒮) _C() sendingto the pair ( , _0) where _0 is the collection of nullary (=0-ary) operations ofregarded as an initial -algebra. The functor δ admits a left adjoint denoted : _C() _C(𝒮), and called the enveloping functor. The enveloping operad associated to apair ( , A) ∈_C() is ( , A) the image of ( , A) through the enveloping functor. Due to [Theorem 1.10, <cit.>], we have a canonical isomorphism _^A≅((,A)_1 , ) between the category of A-modules overand the category of -valued enriched functors on (,A)_1 the underlying category of (,A). As a standard notation, we will write _(A) := (,A)_1 and refer to it as the enveloping category of A. The set of objects of _(A) is C (since (,A) is a C-colored operad). In particular, whenis a single-colored operad then _(A) forms an associative monoid in . In this situation, that object is usually called the universal enveloping algebra of A (cf. e.g., <cit.>). The following is a fundamental result.Suppose thatis a single-colored operad. Then _(A) can be modeled by _A(1_) the free A-module overgenerated by 1_.As discussed above, the category _^A is identified with the category of left modules over _(A). Note that _(A) is the free left module over itself that is generated by 1_. Thus we obtain an identification _(A)≅_A(1_) of A-modules over . Note that when C contains more than one color, then _(A) cannot even be modeled as an object of (_(A), ). Thus the same statement as above no longer holds in the multi-colored setting.§.§ Operadic model structures We assume further thatis a symmetric monoidal model category (cf. <cit.>). Letbe a C-colored operad in . We will usually require the existence of a transferred model (or semi-model) structure on the category _(). This is based on the following result.(Spitzweck <cit.>, Fresse <cit.>) Suppose further that the model structure onis cofibrantly generated, that the monoidal unit 1_ is cofibrant, and thatis Σ-cofibrant. Then the category _() carries a cofibrantly generated semi-model structure transferred from the projective model structure on ^× C. In particular, with the same assumptions on , the transferred semi-model structure on the category _C() does exist. Additionally, with the same assumptions onand , then the categories () and () admit a transferred semi-model structure as well, while () and () inherit a transferred (full) model structure. These observations follow from the above theorem and Remark <ref>. We now recall the homotopy theory of -enriched operads, according to the work of Caviglia <cit.>. For ∈() an -enriched category, the homotopy category of , denoted (), is the ordinary category whose objects are the same as those ofand such that, for x,y∈(), we have _()(x,y) := _()(1_,_(x,y)) .By convention, the homotopy category of an operadis() := (_1). Moreover, a map f : in () is called a levelwise weak equivalence (resp. 
fibration, trivial fibration, etc) if for every sequence (c_1,⋯,c_n;c) of colors of , the induced map(c_1,⋯,c_n;c) →(f(c_1),⋯,f(c_n) ; f(c))is a weak equivalence (resp. fibration, trivial fibration, etc) in 𝒮.A map f : → in () is called a Dwyer-Kan equivalence if it is a levelwise weak equivalence and the induced functor(f):() → ()between homotopy categories is essentially surjective. The Dwyer-Kan model structure on () is the one whose weak equivalences are the Dwyer-Kan equivalences and whose trivial fibrations are the levelwise trivial fibrations surjective on colors.In practice, the Dwyer-Kan model structure usually coincides with the canonical model structure, as also presented in <cit.>. In particular, if this is the case then Dwyer-Kan fibrant objects in () are precisely the levelwise fibrant operads. Here are some base categories that we are concerned with in this paper. Let k be a (unital) commutative ring.(i) Firstly, we let _Δ denote the category of simplicial sets. This category comes equipped with the Cartesian monoidal structure and with the standard (Kan-Quillen) model structure. (ii) Another instance is the category of simplicial k-modules, denoted (k), endowed with the (degreewise) tensor product over k and with the standard model structure transferred from that on _Δ. (iii) We will denote by (k) (resp. _⩾0(k)) the category of dg k-modules (resp. connective dg k-modules), endowed with the usual tensor product and with the projective model structure. When 𝒮 = _Δ or (k), then for every operad 𝒫 the transferred homotopy theory on _() exists as a (full) model category. Moreover, the two model structures on () exist, and in fact coincide with each other.For the case 𝒮 = (k) or _⩾0(k), due to Theorem <ref> the transferred homotopy theory on _() exists as a semi-model category provided 𝒫 is Σ-cofibrant. Additionally, it can be shown that the Dwyer-Kan homotopy theory on () exists as a semi-model category as well. (This is in fact assembled from the Dwyer-Kan model structure on () and the transferred semi-model structures on the categories _C() for C∈.) In particular, in this situation the base category is not sufficient in the sense of [<cit.>, 3]. In spite of this, the results needed from the loc. cit. remain valid if we bring them into the framework of semi-model categories. §.§ Spectral cotangent complex and Quillen cohomology We recall briefly some fundamental notions relevant to the spectral cotangent complex and Quillen cohomology, according to the works of Harpaz-Nuiten-Prasma <cit.>. Nevertheless, our setting is a bit more flexible, in that we may define tangent categories of (semi) model categories of interest without requiring left properness. This enables us to cover a wider range of examples. We also refer the readers to [<cit.>, 2.5] for more details.Recall by definition that a (semi) model category M is said to be weakly pointed if it contains a weak zero object 0, i.e. 0 is both homotopy initial and homotopy terminal. When M is weakly pointed, we may define the suspension and desuspension of an object X∈(M) respectively as Σ(X) := 0 ∐^h_X 0 and Ω(X) := 0 ×^h_X 0 (homotopy pushout and homotopy pullback, respectively).Moreover, when passing to the homotopy category, these constructions determine an adjunctionΣ : *(M)(M): Ω. A (semi) model category M is said to be stable if the following equivalent conditions hold: (1) The underlying ∞-category M_∞ is stable in the sense of <cit.>. (2) M is weakly pointed and such that a square in M is homotopy coCartesian if and only if it is homotopy Cartesian.
(3) M is weakly pointed and the adjunction Σ⊣Ω on (M) is an adjoint equivalence.We now recall the procedure of taking stabilizations. Let M be a weakly pointed model category and let X be an (ℕ×ℕ)-diagram in M. A square of X of the form

X_n,n ⟶ X_n,n+1
  ↓           ↓
X_n+1,n ⟶ X_n+1,n+1

will be called a diagonal square.An (ℕ×ℕ)-diagram in M is called (1) a prespectrum if all its off-diagonal entries are weak zero objects in M, (2) an Ω-spectrum if it is a prespectrum and all its diagonal squares are homotopy Cartesian, (3) a suspension spectrum if it is a prespectrum and all its diagonal squares are homotopy coCartesian. A map f : X → Y in M^ℕ×ℕ is called a stable equivalence if for every Ω-spectrum Z the induced map ^_M^ℕ×ℕ_(Y,Z) →^_M^ℕ×ℕ_(X,Z) is a weak equivalence, where M^ℕ×ℕ_ signifies the projective model category of (ℕ×ℕ)-diagrams in M. We will assume further that M is a weakly pointed combinatorial model category such that the domains of the generating cofibrations are cofibrant. The stabilization of M, denoted (M), is the left Bousfield localization of M^ℕ×ℕ_ with the Ω-spectra as the local objects. More precisely, (M) is a cofibrantly generated semi-model category such that ∙ weak equivalences are the stable equivalences, and ∙ (generating) cofibrations are the same as those of M^ℕ×ℕ_. In particular, fibrant objects of (M) are precisely the levelwise fibrant Ω-spectra.The above definition is based on the main result of <cit.>, together with the fact that the Ω-spectra in M can be characterized as the local objects against a certain set of maps (cf. <cit.>, Lemma 2.1.6). Moreover, it suffices to require M to be a semi-model category with the same properties as above.By construction, there is a Quillen adjunction Σ^∞ : *M(M) : Ω^∞ where Ω^∞(X)=X_0,0 and Σ^∞(X) is the constant diagram with value X. For an object A∈(M), the tangent category to M at A is 𝒯_AM:=(M_A//A), the stabilization of M_A//A, i.e. the model category of objects over and under A. There is a Quillen adjunctionΣ^∞_+M_/A𝒯_AMΩ^∞_+ defined as the composed adjunction*A⊔ (-)M_/AM_A//A*Σ^∞𝒯_AMΩ^∞ .In more detail, the left adjoint Σ^∞_+ sends each B ∈M_/A to Σ^∞_+(B) = Σ^∞(A⟶ A⊔ B ⟶ A), while the right adjoint Ω^∞_+ sends X ∈𝒯_AM to Ω^∞_+(X) = (X_0,0 → A).Here is the central concept of the paper.For an object A∈(M), the cotangent complex of A is _A:=𝕃Σ^∞_+(A) ∈𝒯_AM,i.e. the derived suspension spectrum of A. Accordingly, the cotangent complex of A can be modeled by the constant spectrum Σ^∞(A⊔ A^) where A^ signifies a cofibrant resolution of A∈M. Furthermore, _A = Σ^∞(A⊔ A^) admits a suspension spectrum replacement determined by fixing A⊔ A^ as its value at the bidegree (0,0). In particular, the value at the bidegree (n,n) is given by Σ^n(A⊔ A^), the n-suspension of A⊔ A^∈M_A//A (see [<cit.>, Corollary 2.3.3]). In practice, it is usually convenient to exhibit that suspension spectrum as a model for _A. For a map f : A → B in M, the relative cotangent complex of f is _B/A:=[ 𝕃Σ^∞_+(f) →_B ], the homotopy cofiber of the map 𝕃Σ^∞_+(f) →_B in 𝒯_BM. When A is the initial object then _B/A is weakly equivalent to _B. Thus, on the one hand, the relative cotangent complex can be viewed as a natural extension of the standard notion of cotangent complex. On the other hand, for any map f : A → B, we can identify _B/A with the (absolute) cotangent complex of f considered as an object of M_A/ (see [<cit.>, Proposition 2.2.10]). Let X be a fibrant object of M.
The (spectral) Quillen cohomology of X with coefficients in a given object M∈𝒯_XM is the space ^⋆(X ; M) :=^_𝒯_XM (_X , M).Moreover, for each n∈ℤ, the n'th Quillen cohomology group of X with coefficients in M is given by^n(X; M) := π_0 ^_𝒯_XM (_X , M[n])where M[n] : = Σ^nM, the n-suspension of M in 𝒯_XM.We will regard ^⋆(X ; M) as a pointed space with the base point being the zero map _X → 0 → M. Note also that this base point represents the zero element of the group ^0(X; M).By adjunction, we have a weak equivalence ^_𝒯_XM (_X , M) ≃^_M_/X (X, Ω^∞_+(M)).This reveals how the spectral Quillen cohomology can be put into practical use. Moreover, it implies that Quillen cohomology is invariant under weak equivalences between fibrant objects. Nevertheless, note also that when M is in addition right proper then the model category M_/X has already the right homotopy type, and therefore, in this situation the requirement of fibrancy on X is no longer needed.§.§ Operadic Dold-Kan correspondence We recall from <cit.> some results concerning a generalization of the Dold-Kan correspondence to the operadic context. As is standard, we writeΓ : *≃_⩾0(k)_k : for the Dold-Kan correspondence, where k is a commutative ring and the right-hand functor refers to the normalization functor. An important fact is that the functors Γ and the normalization functor are no longer inverses of each other (or even adjoint) when descending to the categories of monoids. Thus, the Dold-Kan correspondence certainly fails when extended to the framework of algebras over an operad. In spite of this, the existence of a Dold-Kan correspondence for operadic algebras can still be realized from a homotopy-theoretic perspective. Concretely, the result is stated as follows. Let 𝒫 be a Σ-cofibrant operad in _k. Applying the normalization functor to 𝒫 levelwise produces an operad in _⩾0(k), denoted . (<cit.>, <cit.>) The functor _ : _(_k) →_(_⩾0(k)) given by applying the normalization functor levelwise is the right adjoint of a Quillen equivalence between semi-model categories.Several interesting corollaries can be derived from the above theorem, as discussed in what follows. Firstly, for each set C we may show that the functor _C : _C(_k) →_C(_⩾0(k)) (again induced by the normalization functor) is a right Quillen equivalence. Let us denote by Ł_C the left adjoint to _C. The collection of adjunctions {Ł_C ⊣_C| C ∈} can then be organized into a single adjunctionŁ : *(_⩾0(k))(_k) : .We consider these two categories as relative categories with weak equivalences being precisely the Dwyer-Kan equivalences. Another important consequence is as follows. [<cit.>, 3.2] The functor  above induces an equivalence(_k)_∞ → (_⩾0(k))_∞between the underlying ∞-categories with the inverse being induced by the functor Ł. However, the corollary of most interest in this paper is the following. [<cit.>, 3.1] Let 𝒫 be a levelwise cofibrant operad in _k. Then there is a Quillen equivalenceŁ^_ : *≃()() :^_in which the right adjoint is as usual defined by applying the normalization functor levelwise. §.§ More conventions and notations Let 𝒮 be a suitable symmetric monoidal model category, 𝒫 a C-colored operad in 𝒮, and let A be a 𝒫-algebra. To proceed, we will use the following notations and conventions.(i) We will say that the pair (, ) is sufficient (resp. stably sufficient) if the base category 𝒮 is sufficient (resp. stably sufficient) in the sense of [<cit.>, 3], and in addition, 𝒫 is Σ-cofibrant.(ii) Moreover, the pair (, ) is said to be abundant (resp. stably abundant) if the base category 𝒮 is abundant (resp.
stably abundant) in the sense of [<cit.>, 3] and such that 𝒫 is cofibrant and good in the sense of [<cit.>, 5.1].(iii) We will say that the triple (,, A) is sufficient (resp. stably sufficient) if the pair (, ) is sufficient (resp. stably sufficient) in the above sense, and in addition, A∈_() is cofibrant. (iv) Furthermore, (,, A) is abundant (resp. stably abundant) if the pair (, ) is abundant (resp. stably abundant) and A∈_() is cofibrant.The hypotheses presented above allow us to inherit some main results of <cit.> (see <ref> for more details). Therefore, readers may safely disregard the details of these hypotheses. (v) To avoid confusion, we will write ^ (resp. ^, ^ and ^iℓ) to denote 𝒫 in its role as an object of () (resp. (), () and ()). We also write A^ standing for A in its role as an object of _^A.(vi) We will refer to the different cotangent complexes of 𝒫 using the following notations: ∙ _∈_(): the cotangent complex of 𝒫 considered as an object of (). ∙ ^_∈__C(): the cotangent complex of 𝒫 considered as an object of _C().∙ _^∈_^(): the cotangent complex of 𝒫 when regarded as a bimodule over itself.∙ _^∈_^(): the cotangent complex of 𝒫 as an infinitesimal bimodule over itself.(vii) To denote the different cotangent complexes of A, we will use the following:∙ _A ∈_A_(): the cotangent complex of A considered as an algebra over 𝒫, and∙ _A^∈_A^_^A: the cotangent complex of A considered as an object of _^A.(viii) We will write^⋆(; -) and^⋆_(; -), respectively, for Quillen cohomology of 𝒫 when considered as an object of () and of _C(). Besides that, we will write ^⋆(A; -) to denote Quillen cohomology of A as a 𝒫-algebra.(ix) Moreover, we will refer to ^⋆(; -) as the (proper) Quillen cohomology of 𝒫, and refer to ^⋆_(; -) as the reduced Quillen cohomology of 𝒫. By notation the former is classified by_∈_(), while the latter is classified by ^_∈__C().(x) We will write_^ : _C() → () (resp._^iℓ : _C() → ())for the functor that assigns to each C-collection M the free infinitesimal 𝒫-bimodule (resp. free infinitesimal left 𝒫-module) generated by M.(xi) We will write_A : ^× C→_A^to denote the functor that assigns to each object X = {X(c)}_c∈ C∈^× C the free A-module over 𝒫 generated by X.(xii) We will denote by ℰ_C the C-collection that is ∅_𝒮 on all levels except that ℰ_C(c) = 1_𝒮 for every c∈ C. When C is a singleton, we will signify this using the notation ℰ_*.(xiii) Finally, for a (semi) model category M containing two objects X and Y, we will sometimes abbreviate the derived mapping space ^_M(X,Y) simply by ^(X,Y), as long as the category M is understood. § OPERADIC TANGENT CATEGORIES AND COTANGENT COMPLEX OF ENRICHED OPERADSWe recall from <cit.> some fundamental results concerning various operadic tangent categories and the cotangent complex of operads. Let 𝒮 be a symmetric monoidal model category, 𝒫 a C-colored operad in 𝒮, and let A be a 𝒫-algebra.(i) First, assume that the triple (,,A) is sufficient. The following theorem plays a key role in the investigation of operadic tangent categories.[Harpaz-Nuiten-Prasma <cit.>] There is a Quillen equivalence *≃_A^_^A_A_() induced by the adjunction *(_^A)_A^/(_())_A/ of induction-restriction functors. Moreover, when (,,A) is stably sufficient then there is a chain of Quillen equivalences *≃_^A*≃_A^_^A_A_()where the first adjunction is given by the composed Quillen equivalence*(-) ⊔ A^_^A(_^A)_A^//A^*Σ^∞_A^_^AΩ^∞(see <ref> for notations).(ii) A version of the above theorem for the various tangent categories at 𝒫 is as follows.<cit.> Assume that the pair (, ) is sufficient.
There is a chain of Quillen equivalences*≃_^()_^()*≃__C(𝒮)*≃_(𝒮)that is induced by the adjunctions of induction-restriction functors*()_^/()_^/*_C(𝒮)_/*(𝒮)_/.Furthermore, when (, ) is stably sufficient, we have a prolonged chain of Quillen equivalences:*≃()*≃_^()_^()*≃__C(𝒮)*≃_(𝒮)where the first adjunction is given by the composed Quillen equivalence*(-) ⊔^()()_^//^*Σ^∞_^()Ω^∞(see <ref> for notations). (iii) In what follows, we shall recall from <cit.> a description of the cotangent complex of 𝒫. The key proposition is as follows.[<cit.>, 5.2] Suppose that the pair (, ) is abundant. Under the right Quillen equivalence_(𝒮) →_^(), the cotangent complex _∈_() is identified with _^[-1] ∈_^().Next we recall how the object _^ can be computed effectively. Assume that 𝒫 is Σ-cofibrant and that 𝒮 is combinatorial. In this situation, for any cofibrant resolution (^)^→^ of ^∈(), the induced map (^)^⊔^→^⊔^is a weak equivalence. Thus we may exhibit the constant spectrum Σ^∞(^⊔^) ∈_^() as a model for _^. For convenience, we look for a suspension spectrum replacement for _^. To this end, we just need to compute Σ^n(^⊔^) for each n≥0, i.e. the n-suspension of ^⊔^∈()_^//^ (see Remark <ref>).For each n ⩾ 0, we write ^n:=Σ^n(1_𝒮⊔ 1_𝒮)∈𝒮, with the suspension Σ(-) taken in 𝒮_1_𝒮//1_𝒮, and refer to ^n as the pointed n-sphere in 𝒮. Moreover, we will write _C^n to denote the C-collection that is ∅_𝒮 on all levels except that _C^n(c;c)=^n for every c∈ C.We have that _C^0≅_C ⊔_C, and for every n≥0, we have _C^n≃Σ^n(_C ⊔_C) where the suspension Σ(-) is taken in _C()__C//_C.In [<cit.>, 5.3], we showed that ∘_C^0 is a model for the coproduct ^⊔^∈(), and furthermore, for every n≥0 we haveΣ^n(^⊔^) ≃∘_C^n.We have on each level that(∘_C^n)(c_1,⋯,c_m;c) = (c_1,⋯,c_m;c)⊗ (^n)^⊗ m.Let us denote by ∘_C^∙∈_() the suspension spectrum with (∘_C^∙)_n,n = ∘_C^n. (This agrees with the object _ in the loc. cit.) Thus we obtain the following.There is a weak equivalence Σ^∞(^⊔^) ≃∘_C^∙ in _(), which exhibits ∘_C^∙ as a suspension spectrum model for _^.Furthermore, we obtain the following. Assume that (, ) is sufficient. Then under the right Quillen equivalence_^() →_^(), the object ∘_C^∙ is identified with itself, ∘_C^∙, considered as a prespectrum of infinitesimal 𝒫-bimodules. Hence the latter is a model for the derived image of _^ in _^().Assume further that (, ) is stably sufficient, so that we obtain the chain of Quillen equivalences (<ref>). Then under the composed right Quillen equivalence∘ Ω^∞ : _^() → (), the prespectrum ∘_C^∙ is identified with _∈() which is given on each level by _(c_1,⋯,c_m;c) ≃(c_1,⋯,c_m;c)⊗ colim_n Ω^n [(^n)^⊗ m×_1_𝒮^0 ] where the desuspension Ω(-) is taken in . Thus in this situation, _ is a model for the derived image of _^ in (). In summary, the cotangent complex _∈_() is described as follows. Suppose that the pair (, ) is abundant. Then under the right Quillen equivalence _() →_^(), the cotangent complex _ is identified with (∘_C^∙)[-1]∈_^(). Moreover, when (, ) is stably abundant, we obtain that under the right Quillen equivalence _() → (), _∈_() is identified with _[-1]∈().An immediate consequence of this theorem is as follows. After sending coefficients into _^(), we have a weak equivalence ^⋆(;) ≃^(∘_C^∙,[1])for every ∈_^(). When (, ) is in addition stably abundant then we have^⋆(;M) ≃^(_,M[1])for every M∈().(iv) The various cotangent complexes of 𝒫 are related via a cofiber sequence as follows. We will write θ_: ^_→(_C) for the map in () classified by the unit map _C→.[<cit.>, 5.3] Suppose that the pair (, ) is abundant.
There is a cofiber sequence in _^() of the form ^_→Σ^∞_+(θ_) →∘_C^∙where we use the same notation to denote the derived image of ^_ under the Quillen equivalence __C(𝒮)≃_^(), and the middle term is the image of θ_ under the functorΣ^∞_+ : ()_/^→_^().§ MORE ABOUT INFINITESIMAL BIMODULES OVER AN OPERAD As before, we let 𝒫 be a fixed C-colored operad in 𝒮. Recall by definition that an infinitesimal left 𝒫-module is a C-collection M endowed with a map 𝒫∘_(1)M → M of C-collections, where "∘_(1)" signifies the infinitesimal composite product- ∘_(1) - : _C() ×_C() →_C()(see <cit.>, 6.1). The action map must satisfy the usual axioms of associativity and unitality. We let () denote the category of infinitesimal left 𝒫-modules.An infinitesimal 𝒫-bimodule is a C-collection M equipped with an infinitesimal left 𝒫-module structure, exhibited by a map 𝒫∘_(1)M → M, and with a right 𝒫-module structure, exhibited by a map M ∘𝒫→ M. Moreover, these two structures are subject to the usual compatibility axiom. (See [<cit.>, 2.2] for more details.)The data of an infinitesimal left 𝒫-module structure 𝒫∘_(1)M → M consist of Σ_*-equivariant maps of the form 𝒫(c_1,⋯,c_n;c) ⊗ M(d_1,⋯,d_m;c_i) → M(c_1,⋯,c_i-1,d_1,⋯,d_m,c_i+1,⋯,c_n;c), while the data of a right 𝒫-module structure M ∘𝒫→ M consist of Σ_*-equivariant maps of the form M(c_1,⋯,c_n;c) ⊗(d_1,1,⋯,d_1,k_1;c_1) ⊗⋯⊗(d_n,1,⋯,d_n,k_n;c_n) → M(d_1,1,⋯,d_1,k_1,⋯,d_n,1,⋯,d_n,k_n;c). As discussed in [<cit.>, 2.2], we have for each C-collection M that_^(M) ≅𝒫∘_(1)( M ∘𝒫)and _^iℓ(M) ≅𝒫∘_(1)M.Taking M = ℰ_C, we have _^iℓ(ℰ_C) ≅𝒫∘_(1)ℰ_C ≅𝒫∘_(1)( ℰ_C ∘𝒫) ≅_^(ℰ_C)(see <ref> for notations). For the case C ={*}, note that 𝒫∘_(1)ℰ_* agrees with the "shifted object" 𝒫[1] of [<cit.>, 17.3], which is given on each level by 𝒫[1](n)= 𝒫(n+1). Thus we may write_^iℓ(ℰ_*) ≅𝒫[1] ≅_^(ℰ_*).A basic fact is that 𝒫 is free (generated by _C) when regarded as a right module over itself. Nonetheless, 𝒫 is in general not a free object when living in () (or ()). A noteworthy exception is the case of the commutative operad. Indeed, according to the above remark, we obtain that the commutative operad is free (generated by ℰ_*) when considered as an object of () (or ()).In what follows, we shall recall from [<cit.>, 2.2] how infinitesimal 𝒫-bimodules can be represented as 𝒮-valued enriched functors. We will need the following notations and conventions. ∙ We will write  to denote the smallest skeleton of the category of finite sets, whose objects are 0:=∅ and m:={1,⋯,m} for m⩾1.∙ We write _* to denote the smallest skeleton of the category of finite pointed sets. The objects of _* will be written as ⟨ m ⟩ := {0,1,⋯,m}, with 0 as the basepoint, for every m⩾0.∙ A morphism f: ⟨ m ⟩→⟨ n ⟩ in _* is called an inert map if f^-1(i) is a singleton for every i∈{1,⋯,n}, and an active map if f^-1(0) = {0}.∙ We will write _*^ (resp. _*^) for the subcategory of _* consisting of all the inert maps (resp. active maps). There is an obvious embedding →_* that sends each m to ⟨ m ⟩. Moreover, we may show that the essential image of this functor coincides with _*^. It is known that the pair (_*^, _*^) forms a factorization system on _*. That is to say, _*^ and _*^ are both wide subcategories, and every morphism ⟨ m ⟩→⟨ n ⟩ in _* can be factored as a composite ⟨ m ⟩f⟨ p ⟩g⟨ n ⟩ with f inert and g active; moreover, this factorization is unique up to unique isomorphism. Dually, the pair ((_*^)^, (_*^)^) forms a factorization system on (_*)^.We construct three categories in 𝒮, denoted Ib^𝒫, R^𝒫 and Iℓ^𝒫, as follows.
Firstly, the three categories have the same objects, given by the C-sequences {(c_1,⋯,c_n;c) | c_i,c ∈ C , n⩾0}. ∙ The mapping spaces of Ib^𝒫 are defined as follows. For each map f : ⟨ m ⟩→⟨ n ⟩ in _*, we define^f_Ib^𝒫((c_1,⋯,c_n;c) , (d_1,⋯,d_m;d)) := 𝒫 (c,{d_j}_j∈ f^-1(0);d) ⊗⊗_i=1,⋯,n𝒫 ({d_j}_j∈ f^-1(i);c_i).Then we define_Ib^𝒫((c_1,⋯,c_n;c) , (d_1,⋯,d_m;d)) := _f ^f_Ib^𝒫((c_1,⋯,c_n;c) , (d_1,⋯,d_m;d) )where the coproduct ranges over f∈__*(⟨ m ⟩ , ⟨ n ⟩). The unit morphisms of Ib^𝒫 are defined via the unit operations of 𝒫, and the categorical structure maps are induced by the composition in 𝒫.∙ The mapping spaces of R^𝒫 are concentrated in objects of the form_R^𝒫((c_1,⋯,c_n;c) , (d_1,⋯,d_m;c)) := _f [⊗_i=1,⋯,n𝒫 ({d_j}_j∈ f^-1(i);c_i) ]where the coproduct ranges over the active maps f∈__*^(⟨ m ⟩ , ⟨ n ⟩). Observe that there is a map_R^𝒫( (c_1,⋯,c_n;c) , (d_1,⋯,d_m;c) ) →_Ib^𝒫((c_1,⋯,c_n;c) , (d_1,⋯,d_m;c))induced by the embedding _*^→_* and by inserting the unit operation 𝕀_c into the factor (c;c) of the right hand side. The categorical structure of R^𝒫 is then defined via the operad structure of 𝒫, so that R^𝒫 forms a subcategory of Ib^𝒫.∙ The mapping spaces of Iℓ^𝒫 are given by_Iℓ^𝒫((c_1,⋯,c_n;c) , (d_1,⋯,d_m;d)) := _f𝒫 (c,{d_j}_j∈ f^-1(0);d)where the coproduct ranges over the subset of __*^(⟨ m ⟩ , ⟨ n ⟩) consisting of those maps f with the property that the two colors c_i and d_f^-1(i) coincide for every i∈{1,⋯,n}. As above, there is an embedding Iℓ^𝒫→Ib^𝒫 induced by the canonical embedding _*^→_* and by inserting the identity operation 𝕀_c_i into the factor (c_i;c_i) for i=1,⋯,n. The following proposition explains our interest in the above constructions.There are natural isomorphisms of categories (i)(Ib^𝒫, ) ≅(), (ii)(R^𝒫, ) ≅(), and (iii)(Iℓ^𝒫, ) ≅().The first two isomorphisms are contained in [<cit.>, 2.2]. The third can be proved in the same manner. We provide additional examples and remarks regarding Constructions <ref>.It is clear that Ib^ is isomorphic to _*^, while Iℓ^ (resp. R^) is isomorphic to (_*^)^ (resp. (_*^)^). Here we made use of the functor ι (see Remark <ref>) to associate to each ordinary category the 𝒮-enriched version of that category.Let 𝒫 be such that 𝒫(c) = ∅_𝒮 for every color c. Then by construction, we have that ^f_Ib^𝒫((c_1,⋯,c_n;c) , (d_1,⋯,d_m;d)) = ∅_𝒮 whenever f is not a surjection. Thus, the mapping space _Ib^𝒫((c_1,⋯,c_n;c) , (d_1,⋯,d_m;d)) consists of only the summands that correspond to a surjective map ⟨ m ⟩→⟨ n ⟩. The same observation holds for Iℓ^ and R^. Let us now denote by  the nonunital commutative operad, which coincides with the commutative operad except that (0)=∅_𝒮. Accordingly, we can see that Ib^≅ (_*^)^ where _*^ denotes the subcategory of _* consisting of surjective maps. We have also that Iℓ^≅ (_*^, )^ and R^≅ (_*^, )^ in which _*^,:= _*^∩_*^ and _*^,:= _*^∩_*^. Consider the case where =. By construction, the category Ib^ has the same objects as _*. Moreover, the data of a morphism in _Ib^(⟨ n ⟩ , ⟨ m ⟩) consist of a map f : ⟨ m ⟩→⟨ n ⟩ in _*, and in addition, a tuple of permutations(σ_0,σ_1,⋯,σ_n) ∈Σ_k_0×Σ_k_1×⋯×Σ_k_n with k_i being the cardinality of f^-1(i). We can see that Ib^ is isomorphic to Γ()^, the opposite category of the category Γ() of <cit.>. Moreover, the significant functor : Γ()^ considered in the loc. cit. is nothing but a model for ^∈().From a set-theoretic perspective, a morphism in Ib^𝒫 can be represented as a tuple of operations of 𝒫 of the form (μ ; ν_1,⋯,ν_n).
In this way, the image of the embedding R^𝒫→Ib^𝒫 consists of those morphisms of the form (𝕀_c ; ν_1,⋯,ν_n), whilst the image of the embedding Iℓ^𝒫→Ib^𝒫 consists of those morphisms of the form (μ ; 𝕀_c_1,⋯,𝕀_c_n). We have seen from Example <ref> that the triple (Ib^, R^, Iℓ^) agrees with the triple (_*^, (_*^)^, (_*^)^). An immediate consequence is that the pair (R^, Iℓ^) forms a factorization system on Ib^ (see Remark <ref>). In fact, this phenomenon can be generalized to any operad that comes from an operad in sets, as demonstrated in the statement below.Suppose that 𝒫 comes from an operad in sets. Then the pair (R^, Iℓ^) forms a factorization system on Ib^.Clearly R^ and Iℓ^ are wide subcategories of Ib^. Let f̃ be a morphism in Ib^ that lies above a map f in _*. As discussed in Remark <ref>, f admits a factorization f = g∘ h with h∈_*^ and g ∈_*^, and this factorization is unique (up to unique isomorphism). For each such factorization, there exists a pair (g̃, h̃) ∈R^×Iℓ^ lying above the pair (g, h) and such that h̃∘g̃ = f̃. Moreover, the choice of (g̃, h̃) is (absolutely) unique, as described in Remark <ref>.The category Ib^ is the same as the category "𝒰()" of <cit.>, and moreover, the above factorization system coincides with that of Proposition 2.42 of the loc. cit. As in the latter, we may further obtain a factorization system on the twisted arrow category () (see <ref>) by taking the (covariant) unstraightening of the functors ^ : R^ and ^iℓ : Iℓ^. The situation becomes more interesting when passing to the framework of simplicial (or topological) operads, in which () is considered as an ∞-category.§ HOCHSCHILD AND QUILLEN COHOMOLOGIES OF OPERADIC ALGEBRAS In this section, we will assume that the base category 𝒮 is sufficient in the sense of [<cit.>, 3], and we let 𝒫 be a fixed C-colored operad in 𝒮. We shall first introduce the spectral Hochschild cohomology of enriched operads and operadic algebras, with the aim of generalizing classical Hochschild cohomology to a broader framework. Then we will discuss various endomorphism constructions, which describe the natural passage from operadic bimodules to operadic algebras, as well as the natural passage from operadic infinitesimal bimodules to modules over operadic algebras.The endomorphism constructions provide a mechanism by which the various cohomologies of 𝒫-algebras are governed by the corresponding cohomologies of the operad 𝒫 itself, as will be discussed in the last subsection.§.§ Spectral Hochschild cohomology Let us start with the notion of spectral Hochschild cohomology of operads. The spectral Hochschild cohomology of 𝒫 with coefficients in a given object M∈_^() is the space ^⋆(, M) := ^(_^, M). For each n∈ℤ, the n'th spectral Hochschild cohomology group of 𝒫 with coefficients in M is^n(, M) := π_0 ^ (_^ , M[n]). By definition, the spectral Hochschild cohomology of 𝒫 coincides with Quillen cohomology of 𝒫 considered as an infinitesimal bimodule over itself. To avoid confusion, we will refer to _^ as the Hochschild complex of 𝒫. Now let A∈_() be an algebra over 𝒫. The corresponding cohomology of A is defined as follows.The spectral Hochschild cohomology of A with coefficients in a given object N∈_A^_^A is the space ^⋆(A, N) := ^(_A^, N). For each n∈ℤ, the n'th spectral Hochschild cohomology group of A with coefficients in N is given by ^n(A, N) := π_0 ^(_A^, N[n]). Accordingly, the spectral Hochschild cohomology of A agrees with Quillen cohomology of A regarded as an object of _^A.
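Before proceeding, it may help to anchor these definitions in the most classical case (a standard fact, recalled here only for orientation; it is made precise in the examples below). When 𝒮 is the category of dg k-modules and 𝒫 is the associative operad, an A-module over 𝒫 is just an A-bimodule; and for an ordinary associative k-algebra A which is projective over k, the groups ^n(A, N) above recover the classical Hochschild cohomology groups, computed by the familiar cochain complexC^n(A;N) := _k(A^⊗ n, N) ,(dφ)(a_1,⋯,a_n+1) = a_1φ(a_2,⋯,a_n+1) + ∑_i=1^n (-1)^i φ(a_1,⋯,a_i a_i+1,⋯,a_n+1) + (-1)^n+1φ(a_1,⋯,a_n) a_n+1 .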
As above, we will also refer to _A^ as the Hochschild complex of A. From now on, we will usually refer to spectral Hochschild cohomology simply as Hochschild cohomology, except where confusion could arise.Recall from Remark <ref> that there exists an operad O_C such that the category of O_C-algebras is isomorphic to the category of C-colored operads in 𝒮. Thus we may consider 𝒫 as an object of _O_C(). It can be proved further that there is a categorical isomorphism _O_C^≅() between 𝒫-modules over O_C and infinitesimal 𝒫-bimodules, which identifies ^∈() with 𝒫 itself as an object of _O_C^. Accordingly, we obtain that Hochschild cohomology of the operad 𝒫 agrees with Hochschild cohomology of ∈_O_C(). (On the other hand, we can see that Quillen cohomology of ∈_O_C() coincides with the reduced Quillen cohomology of the operad 𝒫.)By abuse of notation, we will write ⊗𝕊∈_^() for the suspension spectrum with value on each level (c_1,⋯,c_m;c) and at the bidegree (n,n) being given by (⊗𝕊)_n,n(c_1,⋯,c_m;c) := (c_1,⋯,c_m;c)⊗^n. In the same fashion, we let A⊗𝕊∈_A^_^A denote the suspension spectrum with(A⊗𝕊)_n,n(c) := A(c)⊗^n for every n∈ℕ and c∈ C.Suppose that 𝒫 is levelwise cofibrant. Then ⊗ 𝕊 is a suspension spectrum model for the Hochschild complex _^. Likewise, when A is levelwise cofibrant then A⊗𝕊 is a suspension spectrum model for _A^. When 𝒫 is levelwise cofibrant then the coproduct ^⊔^∈() has already the right homotopy type. Thus we may exhibit the constant spectrum Σ^∞(^⊔^) as a model for _^∈_^(). Moreover, the n-suspension Σ^n(^⊔^)∈()_^//^ is given on each level by Σ^n(^⊔^)(c_1,⋯,c_m;c) ≃(c_1,⋯,c_m;c)⊗^n (see Notation <ref>). This follows from the fact that colimits of infinitesimal 𝒫-bimodules can be taken levelwise. We just showed that ⊗𝕊 is a suspension spectrum model for _^ (see also Remark <ref>). The second claim is very similar.Suppose that the pair (, ) is sufficient, so that we obtain Quillen equivalences*≃_^()_^()*≃__C(𝒮)*≃_(𝒮)(see <ref>). In particular, this allows us to calculate both Quillen and Hochschild cohomologies of 𝒫 within the same category _^(), and furthermore, within the same category () when (, ) is stably sufficient. Similarly, when the triple (,,A) is sufficient, then due to the Quillen equivalence *≃_A^_^A_A_(), we may also consider both Quillen and Hochschild cohomologies of A in the same framework of _A^_^A. Moreover, when (,,A) is stably sufficient then the two cohomologies can be brought into the same category _^A.In any base category 𝒮, Hochschild cohomology of the commutative operad is not interesting at all. This is because ^∈() is a free object generated by ℰ_* (see Example <ref>). An analogous observation holds for every commutative algebra A, due to the fact that A^∈_^A coincides with the free object generated by 1_𝒮.Assume that the pair (,) is stably sufficient. Then under the right Quillen equivalence ∘ Ω^∞ : _^() → (), the object _^ is simply identified with ^∈() (cf. [<cit.>, Corollary 2.2.4]). Therefore, after sending coefficients into (), the Hochschild cohomology of 𝒫 with coefficients in some M∈() is given by ^⋆( ; M) ≃^_()(^, M). In particular, in the case where 𝒮 = (k) and 𝒫 is a dg operad concentrated in arity 1 (i.e. 𝒫 is identified with a dg category), we recover the usual Hochschild cohomology of a dg category, which has been widely considered in the literature.In the same fashion, when the triple (,, A) is stably sufficient, then we obtain that the right Quillen equivalence ∘ Ω^∞ : _A^_^A →_^A identifies _A^∈_A^_^A with A∈_^A.
Thus in this situation, the Hochschild cohomology of A with coefficients in a given object N∈_^A is computed by ^⋆(A ; N) = ^__^A(A^, N). Consider the case where 𝒮 = (k) (see Examples <ref>) and 𝒫 is the associative operad. Then _^A agrees with the category of A-bimodules, and thus, in this situation we recover the classical Hochschild cohomology of dg associative algebras. The following example will be clarified in <ref>. In the case where 𝒮 = _Δ and 𝒫 is a simplicial operad which is fibrant, we have an equivalence of ∞-categories _^()_∞≃((), ) in which () is the twisted arrow ∞-category of 𝒫 and  denotes the ∞-category of spectra. Moreover, it can be shown that under this equivalence, _^∈_^() is identified with the constant functor on the sphere spectrum 𝕊∈. Thus after sending coefficients into ((), ), the Hochschild cohomology of 𝒫 with coefficients in a given functor :() → is simply classified by lim. In particular, when 𝒫 is concentrated in arity 1 (i.e. 𝒫 is identified with a simplicial category or an ∞-category), we obtain the usual cohomology of ∞-categories, which generalizes the Baues-Wirsching cohomology of small categories (cf. <cit.>).§.§ Endomorphism constructions It is known that the composite product - ∘ - : _C() ×_C() →_C()is left linear, i.e. for every A ∈_C() the functor (-) ∘ A : _C() →_C()preserves all colimits. Consider the case where A ∈^× C is regarded as a C-collection concentrated in level 0. Taking the composite product with A determines a functor(-) ∘ A : _C() →^× C.Of course, this functor admits a right adjoint, as described in the following construction. Let A={A(c)}_c∈ C and B={B(c)}_c∈ C be two objects of ^× C. The endomorphism object associated to the pair (A,B) is a C-collection denoted by _A,B, and defined by letting _A,B(c_1,⋯,c_n;c) := _(A(c_1)⊗⋯⊗ A(c_n),B(c)). The right Σ_n-action is naturally given by permuting the factors A(c_i). In particular, the endomorphism operad associated to A is given by _A:=_A,A. (See also <cit.>.) We have indeed the following result, which can be readily verified from the definitions. The functors (-) ∘ A and _A,- determine an adjunction (-) ∘ A : *_C()^× C : _A,-. One may obtain relative versions of (<ref>) by endowing the objects involved with additional structure, as discussed in what follows.Let us assume further that A comes equipped with a 𝒫-algebra structure classified by a map ℓ_A : →_A of operads. The relative composite product determines a functor(-)∘_ A : () →^× C.On the other hand, for any B ∈^× C, one observes that _A,B carries a canonical right module structure over _A. Thus, _A,B carries a right 𝒫-module structure given by the restriction along ℓ_A. Furthermore, when B comes equipped with the structure of a 𝒫-algebra then _A,B carries a canonical 𝒫-bimodule structure. The relative versions of (<ref>) are as follows.<cit.> The constructions (-)∘_ A and _A,- determine the adjunctions: (-)∘_ A : *()^× C : _A,-, and(-)∘_ A : *()_() : _A,- Another important adjunction, which sits between the two above, is given as follows. The constructions (-)∘_ A and _A,- determine an adjunction: (-)∘_ A : *()_^A : _A,-Let N∈_^A and M∈() be given. In order to see how _A,N comes equipped with a canonical infinitesimal 𝒫-bimodule structure, we may use Proposition <ref>, so that it suffices to represent _A,N as a functor Ib^𝒫→. On the other hand, note that the infinitesimal left 𝒫-module structure on M can be equivalently encoded by a map of the form ∘ (;M) → M where the left hand side is the sub-collection of ∘ (⊔ M) that is linear in "M" (see <cit.>, 6.1 and 12.6.14).
The above map induces in an obvious way a map of the form∘ (A;M∘_ A) ≅∘ (∘_ A;M∘_ A) → M∘_ A. This latter map encodes the induced A-module structure on M∘_ A (see also 12.3.1 of the loc. cit.). Finally, to verify that the obtained functors indeed form an adjoint pair, we may make use of the adjunction (<ref>), so that it will suffice to argue that the unit (resp. counit) of (<ref>) forms a map of infinitesimal 𝒫-bimodules (resp. A-modules) when descending to () (resp. _^A).We now describe an alternative approach to establishing the adjunctions (<ref>) and (<ref>). For simplicity, assume that 𝒫 is a single-colored operad. Recall from [<cit.>, 6.2] that the category () comes equipped with a symmetric monoidal structure whose monoidal product is given by the usual tensor product of underlying Σ_*-objects. The monoidal unit is nothing but ℰ_*. Observe then that the embedding δ : →(), which takes each X∈𝒮 to X itself concentrated in level 0, is symmetric monoidal. Let us now denote by ∈_*(()) the image of 𝒫 through the induced functor δ : _*() →_*(()). According to the loc. cit., there is an identification () ≅_(()). In particular, ^∈() itself is a -algebra. Consider now the adjunction (<ref>):(-)∘_ A : *() : _A,-. According to Lemma 6.2.1 of the loc. cit., the left adjoint above is symmetric monoidal. The induced functor (-)∘_ A : _*(()) →_*() sends  to nothing but 𝒫. So we obtain an induced adjunction(-)∘_ A : *_(())_() : _A,-, which coincides with (<ref>). Furthermore, since the left adjoint above sends ^∈_(()) to A∈_(), we obtain another adjunction(-)∘_ A : *_^^_^A : _A,-. This adjunction agrees with (<ref>), after applying the obvious identification _^^≅() between ^-modules over  and infinitesimal 𝒫-bimodules.A noteworthy consequence is as follows. For an object X∈^× C⊆_C(), there is an isomorphism _A(X) ≅ (𝒫∘_(1)X) ∘_ A of A-modules over 𝒫. In particular, when C = {*} we obtain that _A(1_𝒮) ≅𝒫[1]∘_ A (see <ref> for notations).Consider the following square of left adjoints

_C() ――(-) ∘ A――→ ^× C
  ↓ _^                  ↓ _A
() ――(-) ∘_ A――→ _^A

which is commutative since the associated square of right adjoints commutes. Combined with Remark <ref>, this yields for each X∈^× C⊆_C() a chain of isomorphisms in _^A: _A(X) ≅_A(X ∘ A) ≅_^(X)∘_ A ≅ (𝒫∘_(1)X) ∘_ A. For the case C={*}, we obtain that _A(1_𝒮) ≅ (𝒫∘_(1)ℰ_*)∘_ A ≅𝒫[1]∘_ A.The latter identification can also be found in [<cit.>, 4.3]. Due to this, one may obtain a description of _A(1_𝒮), which models the universal enveloping algebra of A (see Proposition <ref>), as in the loc. cit.An interesting example concerns the two operads 𝖫𝗂𝖾 and 𝖠𝗌𝗌 enriched over 𝒮 = (k) (see Examples <ref>). The canonical embedding 𝖫𝗂𝖾→𝖠𝗌𝗌 endows 𝖠𝗌𝗌 with an infinitesimal 𝖫𝗂𝖾-bimodule structure. Moreover, it can be shown that the map _𝖫𝗂𝖾^(ℰ_*) = 𝖫𝗂𝖾[1] →𝖠𝗌𝗌 classified by the unique nullary operation of 𝖠𝗌𝗌 is an isomorphism in (). Combined with Corollary <ref>, this yields for each 𝖫𝗂𝖾-algebra 𝔤 a chain of natural isomorphisms in ^_: 𝖠𝗌𝗌∘_𝖫𝗂𝖾𝔤≅_𝖫𝗂𝖾^(ℰ_*)∘_𝖫𝗂𝖾𝔤≅_𝔤(k) ≅_𝖫𝗂𝖾(𝔤). This allows us to recover a fundamental result asserting that the universal enveloping algebra of a Lie algebra 𝔤 can be naturally modeled by 𝖠𝗌𝗌∘_𝖫𝗂𝖾𝔤, i.e. the induction of 𝔤 along the map 𝖫𝗂𝖾→𝖠𝗌𝗌. To be able to do homotopy theory, we will need the following assertion. The adjunctions (<ref>), (<ref>) and (<ref>) are all Quillen adjunctions provided that A is levelwise cofibrant.
Clearly, with the assumption that A is levelwise cofibrant, the construction _A,- preserves fibrations and trivial fibrations, due to basic properties of a symmetric monoidal model category.By the identification ∘_ A ≅ A, we obtain further a commutative diagram of adjunctions:

()_^/ ⇄ ()_^/ ⇄ ()_^/
   ⇅            ⇅              ⇅
(^× C)_A/ ⇄ (_^A)_A^/ ⇄ (_())_A/

in which the vertical adjunctions are all induced by the adjunction (-)∘_ A ⊣_A,-, and the horizontal pairs are given by the induction-restriction adjunctions.§.§ Formulations and comparison theorems We are interested in the right square of (<ref>), from which we obtain a commutative square of Quillen adjunctions between tangent categories:

𝒯_^() ⇄ 𝒯_^()
    ⇅           ⇅
_A^_^A ⇄ _A_()

In light of this, we will show that the cotangent complex _A (resp. Hochschild complex _A^) can be represented via the cotangent complex _^ (resp. Hochschild complex _^) in a very natural fashion. We will use the same notation (-)∘^_ A ⊣^_A,- to signify the two vertical adjunctions of (<ref>): (-)∘^_ A : *𝒯_^()_A^_^A : ^_A,-, and (-)∘^_ A : *𝒯_^()_A_() : ^_A,-. By construction, for each M∈𝒯_^() (or 𝒯_^()), the spectrum object M∘^_ A is simply taken degreewise, i.e.(M∘^_ A)_m,n=M_m,n∘_ A.The case of ^_A,- is rather different. Namely, for each N∈_A_() (or _A^_^A), the spectrum object ^_A,N is given at each bidegree (m,n) as the pullback

(^_A,N)_m,n ⟶ _A,N_m,n
      ↓                ↓
     𝒫 ――ℓ_A――→ _A

where the map _A,N_m,n→_A (=_A,A) is induced by the structure map N_m,n→ A.The following plays a pivotal role in the study of various cohomologies of operadic algebras. Suppose that A is levelwise cofibrant. Then there are weak equivalences _^∘^_ A ≃_A and_^∘^_ A ≃_A^ in _A_() and _A^_^A respectively. We will prove the first weak equivalence. The other can then be verified in the same fashion. By definition, _^ is represented by a constant spectrum of the form Σ^∞(^⊔(^)^) ∈𝒯_^() in which (^)^ refers to any cofibrant model for ^∈(). We have a chain of weak equivalences _^∘^_ A ≃Σ^∞(^⊔(^)^)∘^_ A= Σ^∞((^⊔(^)^)∘_ A) ≃Σ^∞(A⊔ A^) where A^ := (^)^∘_ A represents a cofibrant model for A∈_(). As above, Σ^∞(A⊔ A^) is a model for _A. So we obtain a weak equivalence _^∘^_ A ≃_A as expected. According to <ref>, _^∈𝒯_^() admits a suspension spectrum replacement given by ∘_C^∙ with (∘_C^∙)_n,n=∘_C^n, provided that 𝒫 is Σ-cofibrant. Due to this, we can describe the cotangent complex _A as follows.Suppose that the triple (,, A) is sufficient. Then for every n∈ℕ we have a weak equivalence in _()_A//A: (∘_C^n)∘_ A ≃Σ^n(A ⊔ A)in which the right hand side is the n-suspension of A⊔ A ∈_()_A//A. Consequently, the cotangent complex _A ∈_A_() admits a suspension spectrum replacement:(∘_C^∙)∘^_ A ≃_A. As discussed in <ref>, we have a weak equivalence ∘_C^n≃Σ^n(^⊔^) in ()_^//^. By the assumption that 𝒫 is Σ-cofibrant and that A is cofibrant, we obtain that the relative composite product (∘_C^n)∘_ A has the right homotopy type (cf. <cit.>, 15.2).
Thus we obtain a chain of natural weak equivalences (∘_C^n)∘_ A ≃(Σ^n(^⊔^)) ∘_ A ≃Σ^n(A⊔ A) where the latter weak equivalence is due to the left Quillen functor (-)∘_ A : ()_//→_()_A//A. The second statement follows from the above, because the functor (-)∘^_ A is computed degreewise. (Of course, it can also be immediately derived from Proposition <ref>.)From a set-theoretic perspective, each element of Σ^n(A ⊔ A) can be represented by an operation μ∈𝒫 together with a tuple (a_1,⋯,a_k) ∈ A^× k (with k being the arity of μ) such that each of these is attached to a specific point on a distinguished n-sphere. Formally, such an element can be written as [ μ ; (τ_1,a_1),⋯, (τ_k,a_k) ]in which each τ_i ∈^n represents the location of a_i on the n-sphere it belongs to. Moreover, such elements are subject to an equivalence relation as follows: [μ ; (τ_1,a_1),⋯,(τ_i, ν·(b_1,⋯,b_l)) ,⋯, (τ_k,a_k)]∼ [ μ∘_i ν ; (τ_1,a_1),⋯,(τ_i, b_1),⋯,(τ_i, b_l) ,⋯, (τ_k,a_k) ]. Under the same assumption as in Proposition <ref>, we have further that under the right Quillen equivalence _A_() →_A _^A, the object (∘_C^∙)∘^_ A is identified with itself, (∘_C^∙)∘^_ A, considered as a prespectrum of the underlying A-modules. (This follows from [<cit.>, Corollary 2.4.8].) Thus, the latter is a model for the derived image of _A in _A _^A.Assume that the triple (,, A) is stably sufficient and consider the extended commutative diagram of Quillen adjunctions

() ⇄^≃ 𝒯_() ⇄^≃ 𝒯_()
  ⇅             ⇅             ⇅
_^A ⇄^≃ _A_^A ⇄^≃ _A_()

in which the horizontal pairs are all Quillen equivalences, the leftmost vertical adjunction is (-)∘_ A ⊣_A,-, and the remaining vertical adjunctions are (-)∘^_ A ⊣_A,-^. Recall from <ref> that under the right Quillen equivalence 𝒯_^() → (), the object ∘_C^∙∈𝒯_^() is identified with _∈(). Combined with the above remark and with the fact that _ is Σ-cofibrant, this yields a weak equivalence in _^A: _∘_ A ≃_A. Here we use the same notation to denote the derived image of _A ∈_A_() in _^A. In summary, we obtain the following conclusion.Suppose that the triple (,, A) is sufficient. The Quillen cohomology of A with coefficients in a given object N∈_A^_^A is computed by the formula ^⋆(A ; N) ≃^((∘_C^∙)∘^_ A, N ). When (,, A) is in addition stably sufficient, the Quillen cohomology of A with coefficients in some N∈_^A is computed by the formula ^⋆(A ; N) ≃^__^A(_∘_ A, N).In what follows, we shall provide some comparison theorems asserting that the Hochschild and Quillen cohomologies of A can be encoded by the corresponding cohomologies of 𝒫 itself. We will assume that the triple (,,A) is (at least) sufficient, so that the horizontal adjunctions of (<ref>) are all Quillen equivalences.Suppose we are given a fibrant object N∈_A^_^A.(i) There is a natural weak equivalence ^⋆(A ; N)≃^⋆(^ ; ^_A,N) between Quillen cohomology of A with coefficients in N and Quillen cohomology of ^∈() with coefficients in ^_A,N∈𝒯_^(). (ii) There is a natural weak equivalence ^⋆(A ; N)≃^⋆( ; ^_A,N)between the Hochschild cohomology of A with coefficients in N and the Hochschild cohomology of 𝒫 with coefficients in ^_A,N. After sending coefficients into 𝒯_^(), we have that ^⋆(^ ; ^_A,N) ≃^(∘_C^∙,^_A,N), while by Theorem <ref> we have ^⋆(A;N) ≃^((∘_C^∙)∘^_ A,N ).
Combined with the adjunction (-)∘^_ A⊣^_A,-, these indeed verify (i). The statement (ii) is verified by combining the adjunction (-)∘^_ A⊣^_A,- with Proposition <ref>. Now assume further that the triple (,,A) is abundant.We arrive at the main result as follows.For a fibrant object N∈_A^_^A, there is a natural weak equivalence ^⋆(A;N)≃Ω^⋆( ; ^_A,N) where the loop space on the right is taken at the zero element (see Remark <ref>). In particular, for every integer n we have an isomorphism ^n(A;N)≅^n+1(;^_A,N). We have in fact a chain of weak equivalences ^⋆(A ; N)≃^⋆(^ ; ^_A,N) ≃Ω^⋆( ; ^_A,N) where the first weak equivalence is due to Theorem <ref>(i), and the second weak equivalence follows from Proposition <ref>. The above theorem admits a stable analogue as follows. Assume that the triple (,,A) is stably abundant, so that the horizontal adjunctions of (<ref>) are all Quillen equivalences. Similarly to the above, we may obtain a chain of weak equivalences ^⋆(A ; N)≃^⋆(^ ; _A,N) ≃Ω^⋆( ; _A,N) in which N is now a fibrant object in _^A, and _A,N is simply an infinitesimal 𝒫-bimodule. When the triple (,,A) is stably sufficient, an analogue for Hochschild cohomology is now evident, as described via the following chain of weak equivalences: ^⋆(A;N) ≃^__^A(A,N) ≃^_()(,_A,N) ≃^⋆( ; _A,N) (see Examples <ref> and <ref>).§ SIMPLICIAL OPERADS AND ALGEBRAS OVER THEM We assume that the category _Δ of simplicial sets comes equipped with the Kan-Quillen model structure. We let 𝒫 be a fixed C-colored simplicial operad, living in the Dwyer-Kan homotopy theory on (_Δ) (see <ref>). We shall first recall from <cit.> the construction (), i.e. the twisted arrow ∞-category of 𝒫, and recall how the tangent categories at 𝒫 can be represented via that construction. Due to this, we may then formulate Hochschild and Quillen cohomologies of 𝒫, as well as cohomologies of 𝒫-algebras. In the last subsection, we shall formulate the Quillen principle for _n-operads, from which we obtain the Quillen principle for algebras over _n. §.§ Operadic twisted arrow ∞-categories and tangent categories We will assume that 𝒫 is fibrant (i.e. every space of operations of 𝒫 is a Kan complex) and Σ-cofibrant. Recall from <cit.> that the twisted arrow ∞-category () is given by the (covariant) unstraightening of the simplicial copresheaf^ : Ib^𝒫→_Δwhich encodes the datum of 𝒫 as an infinitesimal bimodule over itself (see <ref>). In particular, () is endowed with a left fibration () → (Ib^𝒫)where (-) denotes the simplicial nerve functor. Unwinding the definitions, the objects of () are precisely the operations of 𝒫 (i.e. the vertices of the spaces of operations of 𝒫). Let c := (c_1,⋯,c_n;c) and d := (d_1,⋯,d_m;d) be two C-sequences, and let μ∈(c) and ν∈(d) be two operations. The data of a morphism μ→ν in () consist of ∙ a map f:⟨ n ⟩→⟨ m ⟩ in _*, ∙ a tuple of operations α = (α_0, α_1,⋯,α_m) ∈^f_Ib^(c , d), and ∙ an edge p : ν→α^*(μ) in (d) where α^* : (c) → (d) is the map corresponding to α via the simplicial functor structure of ^ : Ib^→_Δ.Since the map () → (Ib^𝒫) is a left fibration between ∞-categories, it follows that a typical morphism (f,α,p) : μ→ν (as in the above remark) is an equivalence in () if and only if α∈^f_Ib^(c , d) is an equivalence in (Ib^𝒫). This leads to the following consequence. Let γ, γ' ∈(c) be two operations that are in the same path component of (c). We take an arbitrary edge q : γ→γ' in (c). This is part of a morphism (_⟨ n ⟩ , _c, q) : γ' →γ in () where, by construction, _c = (𝕀_c,𝕀_c_1,⋯, 𝕀_c_n) ∈^_⟨ n ⟩_Ib^(c , c) represents the identity morphism on c.
Of course, _c is an equivalence in the ∞-category (Ib^𝒫), and thus, (_⟨ n ⟩ , _c, q) is an equivalence in (). Consequently, if the space (c) is path connected then any two operations in (c) are equivalent as objects of (). On the other hand, two operations of 𝒫 that have different arities cannot be equivalent as objects of (), because of the fact that two C-sequences that do not share the same length cannot be equivalent as objects of (Ib^𝒫). (The latter can be readily verified from the definition.) The mapping spaces of () are given as follows. As above, we regard μ∈(c) and ν∈(d) as objects of ().<cit.> There is a canonical homotopy equivalence {ν}×^_(d)_Ib^(c , d) →_()(μ , ν)in which the map _Ib^(c , d) → (d) is given by the composed map _Ib^(c , d) →^__Δ(𝒫(c) , 𝒫(d)) ev_μ𝒫(d)with ev_μ being the evaluation at μ.The two most basic examples are:() ≃Δ and () ≃_*^. For each 1≤ n < ∞, we will denote by _n the simplicial version of the little n-discs operad, and denote by _∞ a fibrant and Σ-cofibrant model for the commutative operad. As in the loc. cit., we have an ∞-categorical filtration of the nerve of _*^:(Δ) ≃(_1) → (_2) →⋯→ (_∞) ≃(_*^) This sequence may start with (_0), which can be identified with the subcategory {[1] → [0]} of the simplex category Δ. Here we recall by definition that _0 is the single-colored operad such that _0(n) is a singleton when n≤ 1 and is empty otherwise, and by construction _0 encodes pointed simplicial sets.Recall that the space _n(k) is in particular path connected for every k≥0 and n≥2. According to Remarks <ref> and <ref>, we obtain that (_n) (with n≥2) admits a minimal skeleton given by the full subcategory generated by a collection {μ_k}_k≥0 of operations in which each μ_k ∈_n(k) is selected arbitrarily. Moreover, in spite of the fact that _1's spaces of operations are not path connected except for k≤1, we still have that any two operations in _1(k) are equivalent as objects of (_1) for every k≥0 (and as we have seen, (_1) is simply modeled by (Δ)). We will write  to denote the ∞-category of spectra, defined as the stabilization of the ∞-category of pointed simplicial sets. One of the main interests in the construction (-) is as follows. <cit.> There is an equivalence of ∞-categories _^()_∞≃(() , )Consequently, there is a chain of equivalences of ∞-categories: _(_Δ)_∞≃__C(_Δ)_∞≃_^()_∞≃_^()_∞≃(() , ) .Let M∈_^() be a prespectrum whose datum consists, for each m≥0, of a sequence of maps ^→ M_m,m→^. We will write M:() → for the functor corresponding to M. Unwinding the definitions, M is given at each operation μ∈(c) by taking M(μ)∈ to be a prespectrum with M(μ)_m,m≃{μ}×^_(c) M_m,m(c). Moreover, for a morphism μ→ν in () (with ν∈(d)) that lies above a morphism in _Ib^𝒫(c , d), we get a natural map M(μ)_m,m→M(ν)_m,m induced by the commutative square

M_m,m(c) ⟶ 𝒫(c)
    ↓           ↓
M_m,m(d) ⟶ 𝒫(d)

where the vertical maps are due to the infinitesimal 𝒫-bimodule structures on M_m,m and ^.When 𝒫=_1 (resp. _∞), the various tangent categories at this operad are equivalent to ((Δ) , ) (resp. ((_*^) , )). In particular, the identification __∞(_Δ)_∞≃((_*^) , ) proves that the stabilization of ∞-operads is equivalent to ((_*^) , ), because _∞ is a terminal object in the ∞-category (_Δ)_∞.§.§ Hochschild and Quillen cohomologies of simplicial operads and algebras over them As above, we will assume that 𝒫 is fibrant and Σ-cofibrant. According to Theorem <ref>, the various cotangent complexes and the Hochschild complex of 𝒫 can be represented as functors on ().
As in [<cit.>, 6.3], we let _ : () → denote the image of _^∈_^()_∞ through the identification _^()_∞≃(() , ). According to <ref>, _ coincides with the image of ∘_C^∙∈_^() through the equivalence (<ref>), and alternatively, _ agrees with the image of _[1] ∈_(_Δ)_∞ under the identification _(_Δ)_∞≃(() , ).In order to give an explicit description of _, we will make use of the following.Let f be a map between fibrant and Σ-cofibrant simplicial operads. Then the derived functor of the right adjoint f^* : _(_Δ) →_(_Δ) sends _ to _.It will suffice to show that the derived functor of the right adjoint f^* : _^() →_^() sends ∘_D^∙ to ∘_C^∙, where D refers to the set of colors of the target. Here by construction the above functor is induced by the right adjoint f^* : ()_^//^→()_^//^ that takes each M ∈()_^//^ to f^*(M) := ^×_(^)^* M^*, where the pullback is taken in () and M^* denotes the infinitesimal 𝒫-bimodule with M^*(c_1,⋯,c_n;c) := M(f(c_1),⋯,f(c_n);f(c)) (while the case of (^)^* is defined similarly). Unwinding the definitions, we just need to verify for each m≥0 the existence of a natural weak equivalence in () of the form ∘_C^m→^×^_(^)^* (∘_D^m)^* (see also [<cit.>, Corollary 2.4.8]). Moreover, after having fixed a Kan model for the simplicial m-sphere ^m, it suffices to show that the canonical map ∘_C^m→^×_(^)^* (∘_D^m)^* is an isomorphism. This is verified because we have on each level a Cartesian square of the form

𝒫(c_1,⋯,c_n;c)× (^m)^× n ⟶ (f(c_1),⋯,f(c_n);f(c))× (^m)^× n
         ↓                              ↓
𝒫(c_1,⋯,c_n;c) ⟶ (f(c_1),⋯,f(c_n);f(c))

The assertion above reflects one of the hallmarks of the cotangent complex of simplicial operads, or more generally, of operads enriched over a Cartesian monoidal category. We also warn the readers that, when working over a base category that is not Cartesian (e.g. dg modules over a commutative ring), the situation may differ significantly.Note that under the identification of Theorem <ref> the functor f^* : _(_Δ) →_(_Δ) corresponds to the functor (f)^* : (() , ) → (() , ) given by the restriction along (f) : () → (). Combined with the above proposition, this proves that the functor _ is equivalent to the composed functor () → () _.Now, without loss of generality we can assume that 𝒫 comes equipped with a map of operads →_∞. The following is an immediate consequence.The functor _ is equivalent to the composed functor () → (_∞) __∞. As presented in [<cit.>, 6.3], the functor __∞ : (_*^) → is given on objects by __∞(⟨ m ⟩) =𝕊^⊕ m (i.e. the m-fold coproduct of the sphere spectrum); and moreover, for each map f : ⟨ n ⟩→⟨ m ⟩ in _*, the structure map __∞(f) : 𝕊^⊕{1,⋯,m}→𝕊^⊕{1,⋯,n} is defined by, for each i∈{1,⋯,m}, copying the i'th summand to the summands of position j∈ f^-1(i) when this fiber is nonempty, or collapsing that summand to the zero spectrum otherwise. The functor __∞ can be equivalently defined by letting __∞(⟨ m ⟩) = [⟨ m ⟩ ,𝕊]_* where [- , -]_* refers to the powering of pointed simplicial sets over spectra, and ⟨ m ⟩ is considered as a (discrete) pointed simplicial set with base point 0. From this point of view, we may think of __∞ as the spectral version of Pirashvili's functor t : _*^→(k), with k being some commutative ring, defined by taking t(⟨ m ⟩) := [⟨ m ⟩ ,k]_*, i.e. the k-module of based maps ⟨ m ⟩→ k (where k has base point 0_k). This functor t has played a very active role in the literature, e.g. <cit.> and [<cit.>, chapter 13]. We shall revisit this remark later. Here is the fundamental theorem concerning the cotangent complex of simplicial operads.
<cit.> Under the equivalence _^()_∞≃(() , ), the cotangent complex _^∈_^()_∞ is identified with _ : () → which is given on objects by sending each operation μ∈𝒫 of arity m to _(μ) = 𝕊^⊕ m. Consequently, under the equivalence _(_Δ)_∞≃(() , ), the cotangent complex _∈_(_Δ)_∞ is identified with _[-1], the desuspension of _.An immediate consequence is as follows. After sending coefficients into (() , ), the Quillen cohomology of 𝒫 with coefficients in a given functor : () → is computed by ^⋆( ; ) ≃_(() , )(_[-1], ). Moreover, the n'th Quillen cohomology group of 𝒫 with coefficients in  is computed by ^n( ; ) ≅π_0_(() , )(_[-1], [n]) ≅π_0_(() , )(_, [n+1]). Using Corollary <ref>, we obtain a chain of weak equivalences ^⋆( ; ) ≃_(() , )(_[-1], ) ≃_((_∞) , )(__∞[-1], _*) ≃^⋆(_∞ ; _*)where _* ∈((_∞) , ) refers to the right Kan extension of : () → along the functor () → (_∞). This tells us that Quillen cohomology of every operad can be seen as a particular case of the Quillen cohomology of _∞.The statement below proves that Quillen cohomology of _∞ can be built up from Quillen cohomology of the operads _n. Let : (_∞) → be given and let _n denote the composed functor (_n) φ_n(_∞) → .For every integer k there is an exact sequence of abelian groups 0 → lim_n^1 ^k-1(_n ; _n) →^k(_∞ ; ) → lim_n ^k(_n ; _n) → 0where lim^1 denotes the first right derived functor of the limit-functor on towers of abelian groups.The filtration (<ref>) gives rise to a sequential limit of ∞-categories: ((_∞),) →⋯→ ((_2),) → ((_1),). Combined with Corollary <ref>, the sequence above yields a sequential limit of ∞-groupoids: [__∞, [k+1]]_^(_∞)→⋯→ [__2, _2[k+1]]_^(_2)→ [__1, _1[k+1]]_^(_1). (Here for brevity we write [-,-]_ to denote the mapping space in .) We consider each of the above spaces as a pointed space with the zero map as its base point. Now, the first Milnor exact sequence (see e.g. [<cit.>, 9.3]) determines an exact sequence of abelian groups 0 → lim_n^1 π_1[__n, _n[k+1]]_^(_n)→π_0[__∞, [k+1]]_^(_∞)→ lim_nπ_0[__n, _n[k+1]]_^(_n)→ 0. Applying the formulae given in Remark <ref>, we obtain the expected exact sequence (<ref>).Of course, an analogue of (<ref>) holds for any operad as long as we have a filtration of it. We may then refer to the obtained exact sequence as the Milnor exact sequence for Quillen cohomology of simplicial operads. In what follows, we discuss Hochschild cohomology of 𝒫. For the two statements below, it suffices that 𝒫 be fibrant.Under the equivalence _^()_∞≃(() , ), the Hochschild complex _^∈_^()_∞ is identified with 𝕊 : () →, i.e. the constant functor with value 𝕊∈.According to Proposition <ref>, the Hochschild complex _^ can be modeled by the suspension spectrum ×𝕊 with (×𝕊)_m,m(c_1,⋯,c_n;c) = (c_1,⋯,c_n;c) ×^m. We write ℋ_ : () → for the functor corresponding to _^. As in Proposition <ref>, we may show that the derived functor of the right adjoint __∞^(_∞) →_^() sends __∞^ to _^. Thus, as in Corollary <ref>, we obtain that ℋ_ is equivalent to the composed functor () → (_∞) ℋ__∞. Therefore, the proof will be completed after showing that ℋ__∞ : (_*^) → is equivalent to the constant functor with value 𝕊. For any object ⟨ n ⟩ of _*^, by construction ℋ__∞(⟨ n ⟩)∈ is the prespectrum with ℋ__∞(⟨ n ⟩)_m,m being given by ℋ__∞(⟨ n ⟩)_m,m≃{*}×^__∞(n)(_∞× 𝕊)_m,m(n) ≃ (_∞× 𝕊)_m,m(n) ≃^m(see Remark <ref>). Thus we obtain that ℋ__∞(⟨ n ⟩)≃𝕊. Clearly the functor structure maps of ℋ__∞ are all the identity on 𝕊. We have thus verified that ℋ__∞ is equivalent to 𝕊.
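As a quick sanity check of our own (granting the proposition above, together with the standard fact that the limit of a constant diagram over an ∞-category with weakly contractible classifying space is simply its value): Hochschild cohomology of 𝒫 with coefficients in the constant functor 𝕊 depends only on the weak homotopy type of (). For instance, for the associative operad we have () ≃Δ as recalled above, and the nerve of Δ is weakly contractible since Δ admits a terminal object; hence the n'th Hochschild cohomology group of the associative operad with coefficients in 𝕊 is isomorphic to π_-n𝕊, the stable homotopy groups of spheres (in particular, it vanishes for n > 0).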
Due to the above proposition, we may formulate Hochschild cohomology ofas follows.After sending coefficients into (() , ), the Hochschild cohomology ofwith coefficients in a given functor : () is computed by ^⋆( ; ) ≃_(() , )(𝕊, ). In particular, the n'th Hochschild cohomology group ofwith coefficients inis computed by^n( ; ) ≅π_0_(() , )(𝕊, [n]) ≅π_-n lim. Now, letAbe a cofibrant-algebra in_Δ, classified by a mapℓ_A : _Aof operads. In the remainder of this subsection, we shallformulate Hochschild and Quillen cohomologies ofAusing the comparison theorems obtained from<ref>. Due to the equivalence_A_() ≃_A^_^A(see Theorem <ref>), the two cohomologies ofAwill take coefficients in the same category_A^_^A. LetN∈_A^_^Abe a levelwise fibrantΩ-spectrum whose datum consists of for eachk≥0a sequence of mapsA^ N_k,k A^in_^Asuch that the structure mapN_k,k A^is a fibration, and in addition, the squaresN_k,k[r][d] A^[d] A^[r] N_k+1,k+1are all homotopy Cartesian. As discussed in Remark <ref>, the right Quillen functor ^_A,- : _A^_^A 𝒯_^() sends N to ^_A,N∈𝒯_^() which is a levelwise fibrant Ω-spectrum such that (^_A,N)_k,k is described via a Cartesian square of the form (^_A,N)_k,k[r][d]_A,N_k,k[d][r]^ℓ_A _A We will make use of Remark <ref> to describe the corresponding functor ^_A,N : () . Let c:=(c_1,⋯,c_n;c) be a C-sequence. ^_A,N is given on objects by sending each operation μ∈(c) to an Ω-spectrum ^_A,N(μ)∈ with ^_A,N(μ)_k,k = {μ}×_(c)(^_A,N)_k,k(c) ≅{ℓ_A(μ)}×__A(c)_A,N_k,k(c). More explicitly, ^_A,N(μ)_k,k agrees with the fiber over ℓ_A(μ) of the simplicial map __Δ(A(c_1)×⋯× A(c_n) , N_k,k(c)) __Δ(A(c_1)×⋯× A(c_n) , A(c)) that is induced by the structure map N_k,k(c) A(c) of N. By having a description of^_A,Nas above, and by combining Theorems <ref>, <ref> with the comparison theorems of<ref>, the two cohomologies ofAare formulated as follows. (i) The Quillen cohomology of A with coefficients in N is computed by ^⋆(A ; N) ≃^⋆(^ ; ^_A,N) ≃Ω^⋆( ; ^_A,N) ≃_(() , )(_, ^_A,N). (ii) The Hochschild cohomology of A with coefficients in N is computed by ^⋆(A ; N) ≃^⋆( ; ^_A,N) ≃_(() , )(𝕊, ^_A,N). In particular, the n'th Hochschild cohomology group of A with coefficients in N is given by^n(A ; N) ≅π_-n lim^_A,N.More examples concerning the obtained results will be given in the remainder of the paper. §.§ Quillen principle for _n-algebras By convention, a simplicial operad is said to be unitally homotopy connected if its spaces of nullary and unary (=1-ary) operations are all weakly contractible. Typical examples for this property include the operads_nfor0≤ n ≤∞.In this subsection, we will assume thatis a single-colored simplicial operad which is fibrant,Σ-cofibrant and unitally homotopy connected.For an ∞-categorycontaining an object x and for a spectrum object T∈, we will write x_![T] : (resp. x_*[T] :) to denote the left Kan extension (resp. right Kan extension) of the embedding {T} along the inclusion {x}. Unwinding the definitions, the functor x_![T] : sends an object y∈ to x_ ≃_(x,y)⊗ T where -⊗- refers to the copowering of simplicial sets over spectra. On other hand, the functor x_*[T] : sends y to x_*[T](y) ≃ [_(y,x), T] where [-,-] denotes the powering of simplicial sets over spectra. In particular, when T = 𝕊 then we have x_ ≃_(x,y)⊗𝕊≃Σ^∞_+_(x,y), and while x_*[𝕊](y) ≃ [_(y,x), 𝕊] ≃ (Σ^∞_+_(y,x))^∨, i.e. the dual spectrum of the suspension spectrum Σ^∞_+_(y,x).Suppose given a functor :. By adjunction, a morphism x_![T] (resp. x_*[T]) in (,) can be represented by a map T (x)(resp. (x)T) of spectra. 
The following provides a spectral version of an assertion obtained by Pirashvili [<cit.>,1.4] (see also Remark <ref>). Recall that (_∞) ≃(_*^), and hence, the objects of (_∞) are given by {ł m| m ≥ 0 }. Let T∈ be given. Since ł 0 $̊ is a terminal object, it implies thatł 0 _̊*[T] ≃T, i.e. the constant functor(_∞) with valueT. Whilst the functorł 1 _̊*[T] : (_∞) is given on objects był 1 _̊*[T](ł m )̊≃ [__*(ł 1 ,̊ł m )̊, T] ≃ T^⊕ (m+1).Now consider the caseT=𝕊. We find a fiber sequence in((_∞),)of the form__∞ιł 1 _̊*[𝕊] ł 0 _̊*[𝕊] ≃𝕊where the first map is classified by the identity𝕊 = __∞(ł 1 )̊𝕊, and also, the second map is classified by the identity𝕊 = ł 1 _̊*[𝕊](ł 0 )̊𝕊(see Remark <ref>). For this, it suffices to observe that for eachł m $̊ the map ι is given by the embedding __∞(ł m )̊≃𝕊^⊕ m𝕊^⊕ (m+1)≃ł 1 _̊*[𝕊](ł m )̊ whose image leaves out the summand 𝕊 corresponding to the constant map ł 1 ł m $̊ with value0. We shall now prove a generalized version of (<ref>). As usual, we write𝕀∈(1)standing for the identity operation, and we letμ_0 ∈(0)be a nullary operation selected arbitrarily. We will regard the two as objects of(). Recall by assumption thatis unitally homotopy connected. Therefore according to [<cit.>, 6.2], we obtain that μ_0 is a terminal object in (); and in fact, every nullary operation represents a terminal object in ().We will need to use the following lemma.For any operation μ∈(m) considered as an object of (), there is a weak equivalence of spaces _()(μ,𝕀) ≃(2) ⊔m in which m = {1,⋯,m} represents a discrete space of cardinality m. Firstly, sincehas a single color, we may identify objects of Ib^𝒫 with those of _*. We have in fact a chain of weak equivalences _()(μ,𝕀) ≃{𝕀}×^_(1)_Ib^(ł m ,̊ł 1 )̊≃_Ib^(ł m ,̊ł 1 )̊≃(2) ⊔min which the first weak equivalence is due to Proposition <ref>, while the second follows from the assumption that (1)≃ *; and moreover, the last weak equivalence is interpreted as follows. We have by construction that _Ib^(ł m ,̊ł 1 )̊ = f (k_0) ×(k_1) ×⋯×(k_m) where the coproduct ranges over those maps f∈__*(ł 1 ,̊ł m )̊ and each k_i∈ refers to the cardinality of the fiber f^-1(i). Observe now that the summand that corresponds to the constant map ł 1 ł m $̊ with value0is given by(2)×(0)^× m, whilst for everyj∈{1,⋯,m}the summand corresponding to the map[ξ^j : ł 1 ł m ,̊ 1 ↦ j]is simply given by(1)×(1) ×(0)^× (m-1). We complete the proof using the assumption that(0) ≃(1)≃ *.A generalization of (<ref>) is as follows. There is a fiber sequence in ((),) of the form _ι𝕀_*[𝕊] ρ(Σ^∞_+(2))^∨where (Σ^∞_+(2))^∨ refers to the constant functor () with value (Σ^∞_+(2))^∨ (see Remark <ref> for notation).Concretely, the map _ι𝕀_*[𝕊] is classified by the identity 𝕊 = _(𝕀) 𝕊, and the map ρ is given as follows. Observe first that a map 𝕀_*[𝕊] (Σ^∞_+(2))^∨ is completely determined by a map inof the form 𝕀_*[𝕊](μ_0)(Σ^∞_+(2))^∨, due to the fact that μ_0 is a terminal object in () (see Remark <ref>). Combining Remark <ref> with Lemma <ref>, we obtain for each operation μ∈(m) a chain of equivalences in :𝕀_*[𝕊](μ) ≃ [_()(μ,𝕀), 𝕊] ≃ [(2) ⊔m, 𝕊] ≃ (Σ^∞_+(2))^∨⊕𝕊^⊕ m. In particular, we obtain that 𝕀_*[𝕊](μ_0) ≃ (Σ^∞_+(2))^∨. Accordingly, we would take ρ to be the map classified by the identity on (Σ^∞_+(2))^∨. Now the sequence (<ref>) is given at each μ∈(m) by the sequence _(μ) [r][d]_= 𝕀_*[𝕊](μ) [r][d]^≃ (Σ^∞_+(2))^∨(μ) [d]^= 𝕊^⊕ m[r] (Σ^∞_+(2))^∨⊕𝕊^⊕ m[r] (Σ^∞_+(2))^∨ , which is clearly a fiber sequence in . 
The proof is therefore completed.The proposition above immediately leads to a consequence as follows.For a functor : (), there is a fiber sequence of mapping spaces_( (Σ^∞_+(2))^∨,lim) _((),)(𝕀_*[𝕊],) _((),)(_,) Due to Theorem <ref> and Remark <ref>, one may view the sequence (<ref>) as establishing a relation between Quillen and Hochschild cohomologies of operads in general. Furthermore, we will see that whenis given by the little n-discs operad, then the connection becomes much more convenient, as discussed in what follows. We now consider the case where = _nwithn<∞. Let us first outline the steps to establish Proposition <ref> below. As above, we write𝕀∈_n(1)for the identity operation, and writeμ_0∈_n(0)for the unique nullary operation of_n. Let us consider the functor(μ_0)_![𝕊] : (_n) .We letμ∈_n(k)be a fixedk-ary operation regarded as an object of(_n). Due to Proposition <ref>, we obtain that_(_n)(μ_0,μ) ≃{μ}×^__n(k)_n(k+1)where the map_n(k+1) _n(k)is given by_n(k+1)∋ν↦ν∘_1 μ_0(i.e. given by deleting the firstn-disc inνwhen consideringνas a rectilinear embedding ofk+1disjointn-discs in anothern-disc). Thus we obtain further a weak equivalence_(_n)(μ_0,μ) ≃k⋁^n-1where the latter space is thek-fold wedge sum of the(n-1)-sphere. Therefore, as in [<cit.>,6.3] we have that(μ_0)_≃ _(_n)(μ_0,μ) ⊗𝕊 ≃(k⋁^n-1) ⊗𝕊 ≃ 𝕊⊕(𝕊[n-1])^⊕ k.If we consider the functor(μ_0)_![𝕊[-n+1]] : (_n) instead, then we get that(μ_0)_![𝕊[-n+1]](μ)≃ Ω^n-1[ (μ_0)_ ]≃ 𝕊[-n+1] ⊕𝕊^⊕ k.We also regard the functor𝕀_*[𝕊] : (_n) . According to Lemma <ref>, and along with the fact that_n(2)≃^n-1, we obtain that𝕀_*[𝕊](μ)≃[_(_n)(μ,𝕀), 𝕊]≃[^n-1⊔ k, 𝕊]≃ [^n-1, 𝕊] ⊕ [k, 𝕊] ≃ (𝕊⊕𝕊[-n+1]) ⊕𝕊^⊕ k.Let us now consider the canonical morphism in((_n),):ε : (μ_0)_![𝕊[-n+1]] 𝕀_*[𝕊]that is classified by the inclusionε : 𝕊[-n+1] 𝕊⊕𝕊[-n+1] ≃𝕀_*[𝕊](μ_0). More explicitly, the morphismεis given at the operationμby an embedding of the form(μ_0)_![𝕊[-n+1]](μ)≃ 𝕊[-n+1] ⊕𝕊^⊕ k𝕊⊕𝕊[-n+1] ⊕𝕊^⊕ k ≃ 𝕀_*[𝕊](μ)whose image leaves out the distinguished summand“𝕊” that is contained in[^n-1 , 𝕊] ≃𝕊⊕𝕊[-n+1].The above analyses prove the following.There is a fiber sequence in ((_n),) of the form (μ_0)_![𝕊[-n+1]] ε𝕀_*[𝕊]π𝕊where as usual 𝕊 denotes the constant functor (_n) with value 𝕊, and the morphism π is canonically classified by the projection 𝕀_*[𝕊](μ_0) ≃𝕊⊕𝕊[-n+1] 𝕊.We recall that (_1) ≃(Δ). Therefore, in the case n=1, the sequence (<ref>) is a fiber sequence in ((Δ),) that takes the form [0]_![𝕊] ε [1]_*[𝕊]π𝕊. Let us give an explicit description of the morphism ε above, which is an interesting one. By definition, ε is classified by the inclusion ε : 𝕊𝕊⊕𝕊 = [1]_*[𝕊]([0]). We will let ε(𝕊) be the summand that corresponds to the map β^1 : [0][1] , 0 ↦ 1. Now the morphism ε is given at each object [k] of (Δ) by an embedding of the formε([k]) : _Δ([0],[k]) ⊗𝕊=𝕊^⊕ (k+1)𝕊^⊕ (k+2)= [_Δ([k],[1]), 𝕊].For each map [α^i : [0][k], 0 ↦ i], we write (α^i, 𝕊) standing for 𝕊 itself regarded as a summand of the left hand side indexed by α^i. Moreover, for a map f : [k][1], we write 𝕊^f to denote 𝕊 in its role as a summand of the right hand side that corresponds to f. Accordingly, ε([k]) is determined by copying each (α^i, 𝕊) to those summands 𝕊^f with f being such that f∘α^i = β^1 (i.e. such that f(i)=1). 
In particular, the image of ε([k]) leaves out only the summand 𝕊^_0 in which_0 signifies the constant map [k][1] with value 0.Now applying the fiber sequence (<ref>) to the casewhere = _n, we obtain a fiber sequence in((_n),)of the form__n𝕀_*[𝕊] (Σ^∞_+_n(2))^∨≃𝕊⊕𝕊[-n+1].Combining this with the fiber sequence (<ref>), we obtain a corollary as follows.There is a diagram of Cartesian squares in ((_n),) of the form __n[r][d] (μ_0)_![𝕊[-n+1]] [r][d]𝕀_*[𝕊] [d] 0 [r]𝕊[-n+1] [r]𝕊⊕𝕊[-n+1] where the last vertical map is induced by the identity 𝕀_*[𝕊](μ_0) ≃𝕊⊕𝕊[-n+1] 𝕊⊕𝕊[-n+1]. In particular, the left square of the above diagram is also Cartesian. It allows us to recover a result presented in [<cit.>,6.3] as follows.<cit.> There isa fiber sequence in ((_n),) of the form (μ_0)_![𝕊] 𝕊__n[n] As we have seen, the functor__nrepresents the cotangent complex of_n, and while𝕊represents the Hochschild complex of_n(see Proposition <ref>). In light of this, we will refer to the fiber sequence(<ref>) as the Quillen principle for the operad_n.In the remainder, we shall explain how the Quillen principle for_n-algebras can be formulated. We will need to use a lemma as follows. Letbe a simplicial operad as above. We will writeη_ : _^(ℰ_*) ^for the map in()that is classified byμ_0 ∈(0)(see<ref> for notations). Under the equivalence of ∞-categories ((),)≃_^()_∞, the map (μ_0)_![𝕊]𝕊 is identified with the canonical map Σ^∞_+(η_) _^ in which as usual Σ^∞_+(η_) denotes the image of η_ through the functor Σ^∞_+ : ()_/^_^().We let S := (_Δ)_∞ denote the ∞-category of spaces. We also write (μ_0)_! : S((),S) for the induction along the inclusion {μ_0}(). We have in fact a commutative diagram of the form S[r]^(μ_0)_! [d]_≃ ((),S) [r]^Σ^∞_+[d]^≃ ((),) [d]^≃ Σ_*(S)_/ℰ_*[r]__^ ()_∞_/^[r]_Σ^∞_+ _^()_∞in which the left vertical functor is the obvious equivalence sending each space X to the Σ_*-object given by X itself concentrated in level 0 and equipped with an evident map to ℰ_*, and while the middle vertical functor is the equivalence induced by the straightening-unstraightening constructions (cf. [<cit.>, 6.3]). Let us start with the terminal object Δ^0 ∈S. Clearly the image of Δ^0 through the composed horizontal functor is identified with (μ_0)_![𝕊] ∈((),). On the other hand, observe that the left vertical functor sends Δ^0 to nothing but _ℰ_*∈Σ_*(S)_/ℰ_*, whose image through _^ is exactly the map η_. Now, the commutativity of the diagram proves that (μ_0)_![𝕊] is identified with Σ^∞_+(η_) under the equivalence ((),)≃_^()_∞.Moreover, by Proposition <ref>, we obtained that 𝕊 is identified with the Hochschild complex _^. To complete the proof, we argue that the two maps (μ_0)_![𝕊]𝕊 and Σ^∞_+(η_) _^ are both classified by the operation μ_0. Combining the above lemma with the fiber sequence (<ref>), and along with Theorem <ref> we obtain a corollary as follows. There is a fiber sequence in __n^(_n) of the formΣ^∞_+(η__n) __n^__n[n+1]Here we use the same notation for the derived image of __n∈__n(_Δ) under the equivalence __n(_Δ) ≃__n^(_n).The fiber sequence (<ref>) is in fact the operadic source of (<ref>), and in particular, can be viewed as the standard version of the Quillen principle for _n. One might think of another proof of the existence of (<ref>) that is more direct; nevertheless, it may be much more complicated. We are now in position to formulate the Quillen principle for _n-algebras. LetAbe a cofibrant_n-algebra. 
We let_A(*)denote the freeA-module over_ngenerated by a singleton, and letη_A : _A(*)A^be the map in__n^Aclassified by the unit ofA. As usual, we have a canonical mapΣ^∞_+(η_A) _A^whereΣ^∞_+(η_A)refers to the image ofη_Athrough the functorΣ^∞_+ : (__n^A)_/A^_A^__n^A.There is a fiber sequence in _A^__n^A of the formΣ^∞_+(η_A) _A^_A[n]where we use the same notation for the derived image of _A∈_A__n(_Δ) under the equivalence _A__n(_Δ) ≃_A^__n^A. Equivalently, _A[n] is weakly equivalent to _A^/_A(*) the relative cotangent complex of η_A.First, note that the fiber sequence (<ref>) can be equivalently rewritten as Σ^∞_+(η__n) __n^__n^[n] where we use the same notation for the derived image of __n^∈__n^(_n) under the equivalence __n^(_n) ≃__n^(_n). Let us now apply the left Quillen functor (-)∘^_ A : 𝒯_^()_A^_^A (see Notation <ref>) to that fiber sequence. Then we may complete the proof by using Corollary <ref>, which proves that Σ^∞_+(η__n)∘^_ A ≃Σ^∞_+(η_A); and along with Proposition <ref>, from which we obtain the following weak equivalences:__n^∘^_ A≃_A^ and __n^[n] ∘^_ A ≃_A[n]. Consequently, for a given fibrant object N ∈_A^__n^A, we obtain a fiber sequence connecting the two cohomologies of A as follows: ^(_A[n],N) [r][d]_≃ ^(_A^,N) [r][d]^≃ ^(Σ^∞_+(η_A),N) [d]^≃ Ω^n^⋆(A;N) [r]^⋆(A;N) [r]^_(__n^A)_/A^(η_A, Ω^∞_+N) Moreover, when Ω^∞_+N is of the form Ω^∞_+N = [A^× MA^] for some M ∈__n^A, then by adjunction we obtain a chain of weak equivalences ^_(__n^A)_/A^(η_A, Ω^∞_+N)≃^___n^A(_A(*), M) ≃ M. To end this section, we provide a version of the Quillen principle for_n-algebras in a stable category. Suppose we are given a sufficient base categoryequipped with a weak monoidal Quillen adjunction*_Δin the sense of[<cit.>,3.2]. We will use the same notation for the-enriched version of the operad_n.Let A ∈__n() be a cofibrant_n-algebra in . There is a fiber sequence in _A^__n^A() of the form Σ^∞_+(η_A) _A^_A[n](i.e. the same as in Theorem <ref>). Whenis in addition strictly pointed stable, then there is a fiber sequence in __n^A() of the form _A(*) η_A A^_A[n]Clearly both the cotangent and Hochschild complexes of _n are preserved through the changing-bases functor _Δ. It implies that the same fiber sequence as in Corollary <ref> holds for the -enriched version of _n. Then the same argument as in the proof of Theorem <ref> can be applied to obtain the fiber sequence (<ref>). Now assume further thatis stable. In particular, the composed functor ∘ Ω^∞ : _A^__n^A() __n^A() is a right Quillen equivalence (see Theorem <ref>). Then we may obtain the fiber sequence (<ref>) by using [<cit.>, Corollary 2.2.4], which demonstrates that the derived image of the map Σ^∞_+(η_A) _A^ through the functor ∘ Ω^∞ is nothing but the map _A(*) η_A A^.The most basic example is the case whereis the category of symmetric spectra. Another noteworthy example is the case = (k) the category of dg modules over some commutative ring k. Nonetheless, it is a bit subtle that (k) does not come equipped with a left Quillen functor, that is lax monoidal, going from _Δ. Instead, we have a composed functor _Δk[-]_k_≥0(k) ι(k) in which k[-] refers to the free k-module functor, whiledenotes the normalization functor, and ι is the obvious embedding. This composed functor takes _n to its differential graded (dg) version. In the above composition, while k[-] and ι are left adjoints of weak monoidal Quillen adjunctions, is the only functor that does not satisfy the expected property. 
Despite this, the latter induces a right Quillen equivalence between operadic infinitesimal bimodules,as presented in <ref>. Due to this, we may show that an analogue of (<ref>) holds for the dg version of _n. Then we may deduce the existence of (<ref>) for dg _n-algebras, as in Theorem <ref>. § MORE EXAMPLES§.§ _∞-algebras and Leech ∞-categories Let us recall that__∞(_Δ)_∞≃((_*^),).We will use the same notation“” for the stable model category of spectra. Then the∞-category((_*^),)can be modeled by(_*^,)the projective model category of ordinary functors_*^. The cotangent complex__∞is now represented by__∞[-1] : _*^where the functor__∞is described in Remarks <ref> and <ref>. For a given functor : _*^, we have lim≃(ł 0 )̊ because ł 0 ∈̊_*^ represents a zero object. Accordingly, we will never be interested in Hochschild cohomology of _∞-algebras, as well as _∞ itself; because it does not carry noteworthy information of the object of interest. On the other hand, Quillen cohomology is significant as it will always be. Due to a similarity between__∞and Pirashvili's functort : _*^_kwherekis some commutative ring (see Remark <ref>), we obtained an important result as follows. Suppose given a functorT : _*^_k(also called a right Γ-module). By definition, thek'th stable cohomotopy group ofTisπ^kT := __*^^k(t,T) ≅π_0^_(_*^,(k))(t,T[k]).(<cit.>, 6.3) There is a natural isomorphism π^kT ≅^k-1 (_∞ ; T) between the k'th stable cohomotopy group of T and the (k-1)'th Quillen cohomology group of _∞ with coefficients in the induced functor T : _*^.The above proposition leads to the fact that Robinson's gamma cohomology (cf. <cit.>) can be naturally encompassed by Quillen cohomology of _∞, namely when the coefficient-functors come from right Γ-modules. For more illustration, we shall now describe an alternative approach to Quillen cohomology of (a particular class of)_∞-algebras. In the remainder, we will assume that_∞is a cofibrant model for the operad. Without loss of generality, we may assume further that_∞(0) = {*}.Suppose given a weak equivalence in__∞(_Δ):f : CAwhich exhibitsCas a cofibrant resolution forA. Moreover, we assume thatAcomes from a strictly commutative algebra, via the restriction along_∞. We have a key lemma as follows. Recall that there is a canonical mapη_A : _A(*)A^in__∞^Athat is classified by1_A. As in Proposition <ref>, we have_A(*) ≅__∞(A). We writeA^to denote the underlying monoid ofA. Thus we obtain a simplicial mapη_A : __∞(A)A^that goes between monoids.The map η_A : __∞(A)A^ is a natural weak equivalence of monoids in _Δ. Consequently, the composed map __∞(C) __∞(A)A^ is a weak equivalence of monoids as well.We first show that the map η_A : _A(*)A^ is a weak equivalence. As in <ref>, we write η__∞ : __∞^(ℰ_*) _∞^ for the map in (_∞) that is classified by the unique nullary operation of _∞. Note that η__∞ coincides with the canonical projection _∞[1] _∞ (see Corollary <ref>), which is clearly a weak equivalence. Thus η__∞ is a weak equivalence in (_∞), and moreover, it goes between cofibrant right _∞-modules (cf. <cit.>, Lemma 17.4.3). According to Corollary <ref> again, the map η_A agrees with the map η__∞∘__∞A :__∞^(ℰ_*)∘__∞A_∞^∘__∞A, which is a weak equivalence due to [<cit.>, Theorem 15.1.A]. We now verify that η_A : __∞(A)A^ is compatible with monoid structures. As in [<cit.>, 4.3], an element of __∞(A) can be represented by a formal tuple (μ;a_1,⋯,a_m) in which μ∈_∞(m+1) and a_i ∈ A are all of the same simplicial degree. 
The unit of __∞(A) is then represented by (𝕀__∞;), and the monoid product is defined by (μ;a_1,⋯,a_m) · (ν;b_1,⋯,b_n) := (μ∘_1ν ; b_1,⋯,b_n,a_1,⋯,a_m). Using the representation above, the map η_A is given by (μ;a_1,⋯,a_m) ↦μ·(1_A, a_1,⋯,a_m). But A comes from a strictly commutative algebra, so we may write μ·(1_A, a_1,⋯,a_m) = a_1 ⋯ a_m ∈ A^. Clearly η_A sends (𝕀__∞;) to 1_A, and moreover, it preserves products because the monoid product in A^ is strictly commutative.For the last claim, it suffices to argue that the map __∞(C) __∞(A) is a weak equivalence as well (see <cit.>, Theorem 17.4.B). The above lemma provides a version of a basic fact asserting that the universal enveloping algebra of a strictly commutative algebra A can be modeled by the underlying monoid of A itself.An advantage from Lemma <ref> is as follows. First, as in the first paragraph of the proof above, we have a weak equivalence__∞(C)C^in__∞^C. This induces a Quillen equivalence*≃___∞(C)__∞^C_C^__∞^C.Now assume further thatAis fibrant. Thus, the map__∞(C)A^induces another Quillen equivalence*≃___∞(C)(__∞(C))_A(A^)in whichAdenotesAitself regarded as a left module overA^. After having identified__∞^Cwith(__∞(C)), we obtain an equivalence of∞-categories _C^(__∞^C)_∞≃_A(A^)_∞which comes from a chain of Quillen equivalences. This is the main motivation for introducing Leech ∞-categories. First, as usual we may write(A^) ≃(𝔹A^, _Δ)in which𝔹A^denotes the simplicial category associated toA^.We will denote by (A) := (A) the covariant unstraightening of the functor A : 𝔹A^_Δ and refer to (A) the Leech ∞-category of A.By construction, (A) is an ∞-category equipped with a left fibration (A) (𝔹A^). Unwinding the definitions, objects of (A) are precisely the vertices of A. Moreover, a morphism ab in (A) consists of a pair (x,p) such that x is an object of (A) and p represents an edge in A of the form p : axb. For more details about the simplicial structure of (A), we refer the readers to [<cit.>, 6.1]. Additionally, the mapping space in (A) is given by _(A)(a,b)≃_b [A m_aA], i.e. the homotopy fiber over b of the multiplication by a. The proposed notion provides a natural generalization of the ordinary Leech categories of discrete commutative monoids (see <cit.>). The main interest of the author is to formulate a cohomology theory for those objects called Leech cohomology.Now, the same argument as in [<cit.>,6.3] can be applied to obtain an equivalence_A(A^)_∞≃((A), ).Combined with (<ref>) and Theorem <ref>, it yields a corollary as follows.There is a chain of equivalences of ∞-categories _C__∞(_Δ)_∞≃_C^(__∞^C)_∞≃((A), ).Again, this result provides a spectral version of an assertion obtained by Leech (see <cit.>, and see <cit.> also). Furthermore, due to Proposition <ref> the cotangent complex _C ∈_C__∞(_Δ)_∞ is identified with a prespectrum valued functor_C : (A) that is given, at each object a ∈(A), by _C(a)_n,n≃_a [(_∞∘^n)∘__∞CA]. Here ^n signifies the Σ_*-object that is given by the n-sphere itself concentrated in level 1. For more details, the map of interest is induced by the composed map (_∞∘^n)∘ C_∞∘CC f A where the first map is simply the natural projection. Accordingly, we obtain another model for Quillen cohomology of C ∈__∞(_Δ) as follows. For a functor : (A), the Quillen cohomology of C with coefficients in is computed by the formula ^⋆(C;)≃_((A), )(_C,). §.§ Twisted arrow ∞-categories of simplicial monoids In this subsection, we consider simplicial monoids, or alternatively, algebras over the associative operadin_Δ. 
Let us recall that_(_Δ)_∞≃((Δ),).The latter can be modeled by the projective model category of cosimplicial spectra(Δ,).Then the cotangent complex_is represented by the functor_[-1] : Δ, where_is given by the composed functorΔφ_1_*^__∞.The functorφ_1is given by the obvious oneΔ≃() () ≃_*^. More explicitly,φ_1is given on objects byφ_1([n]) = ł n $̊. Moreover, for a map f : [m][n], the structure map φ_1(f) : ł n ł m $̊ is defined by, for each1≤ j ≤ m, takingφ_1(f)^-1(j):= { i | f(j-1) < i ≤ f(j) }.Combining the above with Remark <ref>, we give a description of _ : Δ as follows. For each object [n] of Δ, we have _([n]) = 𝕊^⊕ {1,⋯,n}. We use the standard notations δ^i : [n][n+1] and σ^j : [n+1][n] for the generating maps of Δ. Then the map _(δ^i) : 𝕊^⊕ {1,⋯,n}𝕊^⊕ {1,⋯,n+1} is defined by _(δ^i)(𝕊^{k}) :=𝕊^{k}if 1 ≤ k ≤ i-1,𝕊^⊕ {k,k+1}if k = i, 𝕊^{k+1}otherwise. On the other hand, the map_(σ^j) : 𝕊^⊕ {1,⋯,n+1}𝕊^⊕ {1,⋯,n} is given by _(σ^j)(𝕊^{k}) :=𝕊^{k}if 1 ≤ k ≤ j, 0 if k = j+1, 𝕊^{k-1}otherwise.Of course, the Quillen principle forcan be immediately derived from (<ref>). Nevertheless, we shall provide an alternative proof of the following, which is more direct and of independent interest. There is a fiber sequence in (Δ,) of the form _ [0]_![𝕊] 𝕊First, by definition [0]_![𝕊] sends each map f : [n][m] in Δ to the map 𝕊^⊕ {0,⋯,n}𝕊^⊕ {0,⋯,m} defined by taking each summand 𝕊^{i} to 𝕊^{f(i)}. Nonetheless, we will need to use another model for [0]_![𝕊]. We will write sh : ΔΔ for the functor which sends each [n] to [n+1], and sends a map [n] f [m] to the map sh(f) : [n+1][m+1] such that sh(f) agrees with f on {0,⋯,n} and sh(f)(n+1)= m+1. Let us denote by _ ss^+ := _ ss∘ sh. By adjunction there exists a natural transformation θ : [0]_![𝕊] ^+_ ss which is given by the identity _𝕊 at degree 0. Unwinding the definitions, θ is given for each [n] by the map θ([n]): 𝕊^⊕ {0,⋯,n}𝕊^⊕ {1,⋯,n+1} that takes each summand 𝕊^{i} to 𝕊^⊕ {i+1,⋯,n+1} for i=0,⋯,n. Clearly θ([n]) is a stable equivalence, and hence, we deduce that θ : [0]_![𝕊] ^+_ ss is a weak equivalence. Consider the natural transformation γ : _Δ sh with γ([n]) being given by the injection δ^n+1 : [n][n+1]. Now, γ induces a natural transformation _ ss_ ss^+ such that for each [n] the map _ ss([n]) = 𝕊^⊕ {1,⋯,n}𝕊^⊕ {1,⋯,n+1} = ^+_ ss([n]) is given by the obvious inclusion. We complete the proof by arguing that there is a homotopy cofiber sequence in (Δ,) of the form _ ss_ ss^+ 𝕊. We shall now give an alternative approach to Hochschild and Quillen cohomologies of simplicial monoids. LetAbe a bifibrant monoid in_Δ. It is known that the universal enveloping algebra_^Ais equivalent to the productA × A^, so that we may write_^A≃(𝔹(A × A^), _Δ)where𝔹(A × A^)denotes the simplicial category associated to the monoidA × A^. By abuse of notation, we writeA^ :𝔹(A × A^) _Δfor the simplicial functor corresponding toA^∈_^A. We will denote by (A) := (A^) the covariant unstraightening of the functor A^ :𝔹(A × A^) _Δ and refer to (A) as the twisted arrow ∞-category ofA.We may identify A with an ∞-category that has a single object. Then (A) is equivalent to the (usual) twisted arrow ∞-category of that ∞-category. This observation clarifies the terminology we employed.By definition, (A) is an ∞-category that comes equipped with a left fibration (A) 𝔹(A × A^). Objects of (A) are precisely the vertices of A, while a morphism ab in (A) consists of a triple (x,y,f) in which x,y are objects of (A) and f is an edge in A of the form f : bxay. 
For more details, the mapping space in (A) is given by _(A)(a,b) ≃_b [A × A^a^* A], i.e. the homotopy fiber over b of the map a^* : A × A^ A, (x,y) ↦ xay. Now, a standard argument shows that there is an equivalence_A^(_^A)_∞≃((A), ).Combined with Theorem <ref>, this leads to a consequence as follows.There is a chain of equivalences of ∞-categories _A_(_Δ)_∞≃_A^(_^A)_∞≃((A), ). Next, due to Theorem <ref>, the Quillen principle for-algebras asserts the existence of a fiber sequence in_A^(_^A):Σ^∞_+(η_A) _A^_A[1]As mentioned above, we may identify a simplicial monoid with an ∞-category that has a single object. Therefore, the above fiber sequence can also be derived from the work of Harpaz-Nuiten-Prasma <cit.> (see Corollary 3.2.10 of the loc.cit). Let us now write_A : (A) for the functor corresponding to the cotangent complex_A ∈_A_(_Δ)_∞. Moreover, as in Lemma <ref>, we may show that under the equivalence_A^(_^A)_∞≃((A), ), the mapΣ^∞_+(η_A) _A^is identified to the canonical map(1_A)_![𝕊] 𝕊. Thus, mapping thesequence (<ref>) into((A), )yields a nice fiber sequence:(1_A)_![𝕊] 𝕊_A[1]In summary, we obtain another model for Hochschild cohomology ofAas follows. Suppose given a functor : (A). The Hochschild cohomology of A with coefficients inis computed by ^⋆(A ; )≃_((A), )(𝕊, ). In particular, for each integer n we have ^n(A ; )≃π_-nlim. Whilst the Quillen cohomology ^⋆(A ; ) can be readily described via ^⋆(A ; ), due to the fiber sequence (<ref>).§.§ Generalized Loday's functor and operadic functor cohomology Many attempts were given in literature to bring different cohomology theories for various types of algebras into the framework of functor cohomology. We shall extend this procedure to be able to encompass all types of operadic algebra. Our approach makes use of the construction(-)again. Let∈_C(_Δ)be a simplicialC-colored operad which is fibrant andΣ-cofibrant. As before, we letS = (_Δ)_∞denote the∞-category of spaces. According to<ref>, an infinitesimal-bimoduleMdetermines a functorM : (Ib^𝒫) S. Taking composition with the functor() (Ib^𝒫), we obtain a functor written asM^⋆ :() S.By construction,M^⋆sends each operationμ∈(c_1,⋯,c_n;c)to the spaceM^⋆(μ) := M(c_1,⋯,c_n;c). One of the main interests in this construction is as follows. There is a weak equivalence of spaceslim M^⋆≃^_()(^, M)We just need to verify the existence of a weak equivalence _((),S)(*, M^⋆)≃_((Ib^𝒫),S)(^, M) in which * denotes the constant functor with value a singleton. To this end, let us first consider the adjunction *((),S)((Ib^𝒫),S)given by the induction-restriction along () (Ib^𝒫). We shall complete the proof by showing that the induction of * is exactly equivalent to^. Indeed, observe first that, due to the straightening-unstraightening constructions, the above adjunction can be represented by the Quillen adjunction between covariant model categories *(_Δ^)_/()(_Δ^)_/(Ib^𝒫) induced by the map () (Ib^𝒫) (cf. <cit.>, 2.2.1). Moreover, under this identification, the functor * corresponds to the terminal object _()∈ (_Δ^)_/(), and while ^ : (Ib^𝒫) S corresponds to nothing but the map () (Ib^𝒫) regarded as an object of (_Δ^)_/(Ib^𝒫) due to the definition of (). We deduce the proposition using the fact that the left adjoint of (<ref>) sends _() to the map () (Ib^𝒫).Next, letbe a base category as in Theorem <ref>. Let us now writeℙfor the-enriched version of. Suppose we are given an infinitesimalℙ-bimoduleMwhich is levelwise fibrant. 
We will writeMto denote the image ofMthrough the induced functor(ℙ) ().Thus, as above we obtain a functorM ^⋆:() S.By construction,M ^⋆sends each operationμ∈(c_1,⋯,c_n;c)to the underlying space of the objectM(c_1,⋯,c_n;c) ∈. We will refer toM ^⋆as the-valued Loday's functor associated toM.A generalized version of Proposition <ref> is as follows.There is a weak equivalence of spaces limM ^⋆≃^_(ℙ)(ℙ^, M)We have in fact a chain of weak equivalences limM ^⋆≃^_()(^, M) ≃^_(ℙ)(ℙ^, M) in which the first weak equivalence is due to Proposition <ref>, and the second follows from the adjunction *()(ℙ). Whenis in addition stable then the right hand side of (<ref>) represents nothing but the Hochschild cohomology of ℙ with coefficients in M, as discussed in <ref>. This aligns perfectly with the essence of Loday's functor.In what follows, we shall describe some examples concerning the (<ref>). Let us consider the case where = (k)withkbeing any commutative ring. As in Examples <ref>, the dg operadℙis given by applying the following composed functor tolevelwise:_Δk[-](k) _≥0(k) ι(k).Because the situation is somewhat subtle (see Examples <ref>), we will need to do a bit more to obtain an analogue of (<ref>). Consider the following composed functor(ℙ) Ł^_k[](k[]) ()in whichŁ^_k[]is part of the Dold-Kan correspondence as presented in <ref>, while the second functor is simply the forgetful functor. LetKbe a levelwise cofibrant infinitesimalℙ-bimodule valued in_≥0(k). We will writeKto denote the image ofKthrough the composed functor (<ref>). Thus we have a functorK^⋆:() S.It is then not hard to show that the above functor sends each operationμ∈(c_1,⋯,c_n;c)toK^⋆(μ) := |K(c_1,⋯,c_n;c)|,i.e. the underlying space of the connective dgk-moduleK(c_1,⋯,c_n;c). The following is the dg version of (<ref>). There is a weak equivalence of spaces limK^⋆≃^_(ℙ)(ℙ^, K)We have in fact a chain ofweak equivalences ^_()(^, K) ≃^_(k[])(k[]^, Ł^_k[](K)) ≃^_(ℙ)(ℙ^, K). Indeed, the first weak equivalence is due to the adjunction *()(k[]), whilst the second follows from Corollary <ref>. Here we note that the functor Ł^_k[] preserves weak equivalences between levelwise cofibrant objects, and hence, Ł^_k[](K) has already the right homotopy type. The proof is completed by combining the above with Proposition <ref>. Here is the example that is expected. Let A∈_ℙ(_≥0(k)) be a levelwise cofibrant object. Recall from <ref> that we have a functor _A,- : _ℙ^A (ℙ). Thus for each A-module N we obtain a functor _A,N^⋆:() S. By construction, this functor sends each operation μ∈(c_1,⋯,c_n;c) to the space _A,N^⋆(μ) = ^__≥0(k)(A(c_1)⊗⋯⊗ A(c_n), N(c)). Moreover, due to Proposition <ref> and the adjunction (-)∘_ A ⊣_A,-, we have a chain of weak equivalences lim_A,N^⋆≃^_(ℙ)(ℙ^, _A,N) ≃^__ℙ^A(A,N). The latter represents Hochschild cohomology of A as in <ref>, and hence, we obtain thatlim_A,N^⋆≃^⋆(A ; N) The most basic examples are as follows. Consider the case where =. According to the above example, for a monoid A in _≥0(k) which is cofibrant as a chain complex and for an A-bimodule N, we have a cosimplicial space _A,N^⋆:(Δ) S,[m] ↦^__≥0(k)(A^⊗ m,N)whose totalization computes the (classical) Hochschild cohomology of A with coefficients in N. Next consider the case =. Let C be a commutative monoid in _≥0(k) which is cofibrant as a chain complex, and letN be a C-module. As above, we have a functor _C,N^⋆:(_*^) S, ł m ↦̊^__≥0(k)(C^⊗ m,N). This is in fact the original version of Loday's functor for cohomology (see <cit.>). 
Composed with the canonical functor φ_1 : Δ_*^ (see <ref>), the above functor yields a cosimplicial space _C,N^⋆∘φ_1 :(Δ) S.Then we have lim_C,N^⋆∘φ_1 ≃_^⋆(C ; N), i.e. the Hochschild cohomology of the underlying associative monoid of C with coefficients in N (considered as a symmetric bimodule). This is due to the fact that φ_1 corresponds to the canonical functor () (). In line with the above example, it is also interesting to consider for some n∈ the following composed functor (_n) φ_n(_*^) _C,N^⋆S where φ_n comes from the obvious functor (_n)(). According to Example <ref>, the limit of _C,N^⋆∘φ_n computes the Hochschild cohomology of the underlying _n-algebra of C with coefficients in N. Thus we writelim_C,N^⋆∘φ_n ≃__n^⋆(C ; N). Moreover, as shown in[<cit.>, 6.1], __n^⋆(C ; N) can be alternatively modeled by the totalization of the cosimplicial space(Δ)(^n)^∨(_*^) _C,N^⋆S where (^n)^∨ denotes the dual object of the pointed n-sphere considered as a functor ^n : Δ^_*. (Here we use the standard model for ^n which has only two nondegenerate simplices). This matches flawlessly with the above example because φ_1 and (^1)^∨ coincide with each other. We may now think of the cosimplicial space _C,N^⋆∘(^n)^∨ as a combinatorial model for _C,N^⋆∘φ_n, in the sense that the limits of the two are weakly equivalent. Clearly the cosimplicial space is notably more accessible, and itembodies the spirit of the higher order Hochschild cohomology initiated by Pirashvili (cf. <cit.>).amsplain99HoangT. Hoang, Quillen cohomology of enriched operads, preprint arXiv:2005.01198 (2020). YonatanCotangent Y. Harpaz, J. Nuiten and M. Prasma,The abstract cotangent complex and Quillen cohomology of enriched categories, Journal of Topology. 11, p. 752-798 (2018). Hoch G. Hochschild,On the cohomology groups of an associative algebra, Annals of Mathematics, second series, 46 (1) p. 58-67 (1945) Toen B. Toën,The homotopy theory of dg-categories and derived Morita theory, Invent. Math. 167 (3), p. 615-667 (2207) Quillen D. Quillen,Homotopical algebra, Lecture Notes in Mathematics. 43 (1967). Quillen2 D. Quillen,On the (co-)homology of commutative rings, Applications of Categorical Algebra, Proc. Sympos. Pure Math. 17, New York 1968, Amer. Math. Soc., Providence, R.I., p. 65–87 (1970). Lurieha J. Lurie,Higher algebra, preprint, available at author’s homepage <http://www.math.harvard.edu/ lurie/> (2011). YonatanBundle Y. Harpaz, J. Nuiten and M. Prasma,The tangent bundle of a model category, Theory and Applications of Categories. 34, p. 1039-1072 (2019). Loday J.-L. Loday and B. Vallette,Algebraic Operads, Grundlehren Math. Wiss. 346, Springer, Heidelberg (2012). Fresse1 B. Fresse,Modules over operads and functors,Springer Lecture Notes in Mathematics 1967 (2009). Francis J. Francis,The tangent complex and Hochschild cohomology of _n-rings, Compositio Mathematica. 149, p. 430-480 (2013). Vallette S. Merkulov and B. Vallette,Deformation theory of representations of prop(erad)s. II, J. Reine Angew. Math. 636, p. 1-52 (2009). Turchin G. Arone and V. Turchin,On the rational homology of high-dimensional analogues of spaces of long knots, Geometric and Topology. 18, p. 1261-1322 (2014). Julien J. Ducoulombier, B. Fresse and V. Turchin,Projective and Reedy model category structures for (infinitesimal) bimodules over and operad, Appl. Categ. Struct. 30, p. 825–920 (2022). GutiJ. J. Gutiérrez and R. M. Vogt,A model structure for coloured operads in symmetric spectra, Mathematische Zeitschrift. 270, p. 223-239, (2012). 
BergerMoerdijk C. Berger and I. Moerdijk,On the derived category of an algebra over an operad,Georgian Mathematical Journal. 16(1), p. 13-28 (2009). Hovey M. Hovey,Model categories, American Mathematical Soc. 63 (1999). Spitzweck M. Spitzweck,Operads, algebras and modules in general model categories, preprint arXiv:math/0101102, (2001). Caviglia G. Caviglia,A model structure for enriched coloured operads, available at author's homepage <https://www.math.ru.nl/ gcaviglia/> (2015). White M. Batanin and D. White,Left Bousfield localization without left properness, Journal of Pure and Applied Algebra, to appear, available as arXiv:2001.03764 (2020). Hoang1T. Hoang, Operadic Dold-Kan correspondence and mapping spaces between enriched operads, preprint, arXiv:2312.07906 (2023). WY D. White, D. Yau,Homotopical adjoint lifting theorem, Applied Categorical Structures. 27, p. 385-426 (2019) Yonatan Y. Harpaz, J. Nuiten and M. Prasma, Tangent categories of algebras over operads, The Israel Journal of Mathematics. 234, p. 691-742 (2019). Rich T. Pirashvili and B. Richter,Hochschild and cyclic homology via functor homology, K-theory 25 (1), p. 39–49 (2002). Burkin S. Burkin,Twisted arrow categories, operads and Segal conditions, Theory Appl. Categ. 38, no. 16 p. 595-660 (2022). BauWi H. J. Baues and G. Wirsching,Cohomology of small categories, Journal of pure and applied algebra 38.2-3, p. 187-211 (1985) Rezk C. Rezk,Spaces of algebra structures and cohomology of operads, PhD Thesis, Massachusetts Institute of Technology, (1996). PirashT. Pirashvili,Hodge Decomposition for higher order Hochschild Homology, Ann. Sci. Ecole Norm. Sup. (4) 33, p. 151-179 (2000) Lodaycyc J.-L. Loday,Cyclic homology, Grundlehren Math.Wiss. 301, Springer (1998). BK A. K. Bousfield and D. M. Kan,Homotopy limits, completions and localizations, Lecture Notes in Mathematics. 304, Springer-Verlag, Berlin-New York, (1972). Schwede S. Schwede and B. Shipley,Equivalences of monoidal model categories, Algebraic and Geometric Topology. 3, p. 287-334 (2003). Robinson1 A. Robinson,Gamma homology, Lie representations and _∞ multiplications Invent. Math. 152, no. 2, p. 331-348 (2003). Leech J. Leech, ℋ-coextensions of monoids, Memoirs of the American Mathematical Association. 157, p. 1-66 (1975) Agrawalla B. Agrawalla, N. Khlaif and H. MillerHarrison homology and the Quillen cohomology of commutative monoids, preprint arXiv:2211.01536 (2022) Luriehtt J. Lurie,Higher topos theory, Annals of Mathematics Studies. 170, Princeton University Press (2009). Loday1 J.-L. Loday, Opérations sur l’homologie des algébres commutatives, Invent. Math. 96, no. 1, p. 205-230 (1989) Ginot G. Ginot, T. Tradler, and M. ZeinalianHigher Hochschild cohomology, Brane topology and centralizers of _n-algebra maps, preprint arXiv:1205.7056 (2012) | http://arxiv.org/abs/2312.16504v1 | {
"authors": [
"Hoang Truong"
],
"categories": [
"math.AT",
"math.GT",
"55P42, 18M60, 18N60, 18M75"
],
"primary_category": "math.AT",
"published": "20231227101616",
"title": "Hochschild and cotangent complexes of operadic algebras"
} |
Department of Physics, Tokyo Institute of Technology, Meguro, Tokyo 152-8551, JapanDepartment of Physics, Tokyo Institute of Technology, Meguro, Tokyo 152-8551, JapanDepartment of Physics, Tokyo Institute of Technology, Meguro, Tokyo 152-8551, JapanDepartment of Physics, Tokyo Institute of Technology, Meguro, Tokyo 152-8551, Japan [email protected] Department of Physics, Tokyo Institute of Technology, Meguro, Tokyo 152-8551, Japan Quantum Research Center for Chirality, Institute for Molecular Science, Okazaki, Aichi 444-8585, JapanWe performed time-resolved pump–probe measurements using rare-earth iron garnet Gd3/2Yb1/2BiFe5O12 as a two-sublattice ferrimagnet. We measured the initial phases of the magnetic resonance modes below and above the magnetization compensation temperature to clarify the sublattice selectivity of the inverse Faraday effect in ferrimagnets. A comparison of the time evolution of magnetization estimated using the equations of motion revealed that the inverse Faraday effect occurring in ferrimagnetic materials has sublattice selectivity. This is in striking contrast to antiferromagnets, in which the inverse Faraday effect acts on each sublattice identically. The initial phase analysis can be applied to other ferrimagnets with compensation temperatures. Sublattice-selective inverse Faraday effect in ferrimagnetic rare-earth iron garnet Takuya Satoh January 14, 2024 =================================================================================== The ultrafast control of magnetic materials using light pulses has attracted considerable interest over the years. In magnetic insulators, the inverse Faraday effect (IFE) and Faraday effect (FE) have been applied to excite and detect magnetization dynamics, respectively.<cit.> The IFE, which is the reverse of the magneto-optical FE, was theoretically proposed by Pitaevskii<cit.> and Pershan <cit.> and demonstrated by van der Ziel et al.<cit.> Ultrafast magnetization control has been demonstrated in weak ferromagnets, such as DyFeO3<cit.> and FeBO3,<cit.> and pure antiferromagnets, such as NiO,<cit.> via the IFE using femtosecond light pulses. Here, an effective magnetic field pulse 𝐇_IFE was induced by a circularly polarized light pulse. The direction of 𝐇_IFE is determined by the helicity of the circularly polarized light. The impulsive 𝐇_IFE acts on the sublattice magnetization of transition metal ions (e.g., Fe^3+ ions in DyFeO3 and FeBO3 and Ni^2+ ions in NiO), followed by precessionas free induction decay. In this theory, the magnitude and direction of 𝐇_IFE acting on each sublattice are identical when 𝐇_IFE is perpendicular to the sublattice magnetizations.<cit.>Ferrimagnets consist of magnetic sublattices aligned in opposite directions with different magnetic ions of different magnitudes, such that the vector sum is not zero. Thus, ferrimagnets exhibit a net magnetization in the ground state. Strong antiferromagnetic exchange interactions occur between the two magnetic sublattices. Accordingly, the magnetic resonance in ferrimagnets has two modes: the ferromagnetic resonance (FMR) mode (GHz range), which is similar to that in ferromagnets, and the exchange resonance mode (sub-THz range), which is unique to ferrimagnets.<cit.> Furthermore, when the temperature dependence of the sublattice magnetization differs, the net magnetization can vanish at the magnetization compensation temperature T_M. 
Magnetization dynamics in ferrimagnets have been actively studied in the field of ferrimagnetic spintronics, which combines controllability, similar to ferromagnetism, with ultrafastness, similar to antiferromagnetism.<cit.> Ultrafast IFE using circularly polarized light pulses has been reported in ferrimagnets.<cit.> However, the magnetic field and temperature dependence of the initial phase were not explicitly discussed; moreover, whether the IFE generated 𝐇_IFE in the opposite or same direction for each sublattice has not been studied. We attempt to clarify this issue by measuring the initial phase of the coherently excited resonance modes using a two-sublattice rare-earth (RE) iron garnet. The bismuth-doped RE iron garnet Gd3/2Yb1/2BiFe5O12 (GdYb-BIG) is a ferrimagnet with a Curie temperature of 573 K and T_M = 96 K, which was obtained from the temperature dependence of magnetization and the sign change of Faraday rotation.<cit.> The exchange interaction between Fe ions in the tetragonal and octahedral sites is much stronger than that between Fe and RE ions. Therefore, in the sub-THz frequency range, one can regard the two magnetic sublattices to be composed of Fe magnetization 𝐌^Fe and RE magnetization 𝐌^RE (Ref. <cit.>).We used a GdYb-BIG single crystal with a (111) plane orientation and a thickness of 140 μm, which was grown by the liquid-phase epitaxy method.As shown in Fig. <ref>, we performed time-resolved pump–probe measurements in transmission geometry. The polarization of pump pulse (wavelength 1300 nm, pulse energy 4 μJ, repetition rate 500 Hz, pulse width 60 fs, spot diameter 280 μm) was circular (σ^±) to excite the magnetic resonance modes via the IFE. The Faraday rotation angle of the linearly polarized probe pulse (wavelength 800 nm, pulse energy less than 10 nJ, repetition rate 1 kHz, pulse width 60 fs, spot diameter 80 μm) is mainly sensitive to the out-of-plane component M_z^Fe over the entire temperature range.<cit.> We applied an external in-plane magnetic field H_0 = 2 kOe in the positive and negative directions, which was sufficient to align 𝐌^Fe and 𝐌^RE in the plane, resulting in a single-domain structure below 75 K and above 130 K. Thus, we measured the dependences of the magnetization dynamics on the helicity (σ^±) of the circular pump polarization, the direction of the external magnetic field (± H_0), and the temperature (T ≷ T_M). The direction of 𝐇_IFE can be controlled by σ^±; the directions of 𝐌^Fe and 𝐌^RE can be controlled by ± H_0; and the relative magnitudes of 𝐌^Fe and 𝐌^RE can be controlled by T.Figures <ref>(a)–<ref>(d) and <ref>(a)–<ref>(d) show the Faraday rotation of the probe as a function of delay t at T=300 K (>T_M) and 60 K (<T_M), respectively, in the external magnetic fields +H_0 [(a): t ≤ 20 ps, (c): t ≤ 300 ps] and -H_0 [(b): t ≤ 20 ps, (d): t ≤ 300 ps], together with fitting with damped sinusoidal functions sin(ω_HF t)exp(-α_HFω_HF t) and sin(ω_LF t)exp(-α_LFω_LF t). At 300 K, a high-frequency (HF) mode at 403 GHz and a low-frequency (LF) mode at 5.8 GHz were observed.At 60 K, an HF mode at 222 GHz and an LF mode at 10.7 GHz were observed. The HF and LF modes were attributed to the exchange resonance and spatially propagating FMR (magnetostatic) modes, respectively.<cit.> From Figs. <ref>(a)–<ref>(d) and <ref>(a)–<ref>(d), we observe that the initial phases of the sinusoidal functions [sin(ω t) or -sin(ω t)] of the LF and HF modes do not depend on the temperature or the direction of the external field. 
In contrast, the pump helicity changed the phases of both modes by 180^∘. The results are summarized in Table <ref>. The peak at approximately 6–7 ps in Figs. <ref>(a) and <ref>(b) is due to the reflection of the pump pulse from the second face of the sample.<cit.>It should be noted that for T<T_M, measurements were made at 40 and 50 K in addition to 60 K, with qualitatively the same results. For T<40 K, the analysis is complicated by the contribution of Yb ions. For T_M<T, measurements were performed at 140–300 K and similar results were obtained. For 75 K<T<130 K, the applied field was not sufficient to align the magnetization in-plane, and the two modes could not be excited. To understand the initial phases, two cases can be considered for the sublattice selectivity of the IFE. The directions of 𝐇_IFE^Fe acting on 𝐌^Fe and 𝐇_IFE^RE acting on 𝐌^RE are opposite (Case 1 in Fig. <ref>) and the same (Case 2 in Fig. <ref>), respectively.In each case, 𝐌^Fe and 𝐌^RE rotate instantaneously according to the following equations of motion under the impulsive actions of 𝐇_IFE^Fe and 𝐇_IFE^RE, respectively.d𝐌^Fe/dt =-γ𝐌^Fe×𝐇_IFE^Fe,d𝐌^RE/dt =-γ𝐌^RE×𝐇_IFE^RE.Here, γ is the gyromagnetic ratio, which is equal for Gd and Fe ions. Even if they were different, the qualitative argument in this paper would still hold. Equations (1) and (2) are valid only during laser-pulse excitation, where the interactions between the sublattices, magnetic anisotropy field, external magnetic field, and damping can be neglected.<cit.> After the pulses of 𝐇_IFE^Fe and 𝐇_IFE^RE disappear, 𝐌^Fe and 𝐌^RE continue to rotate as a superposition of the counterclockwise LF and clockwise HF modes around the H_0 axis, which determine the initial phases of the two modes in Figs. <ref>(a)–<ref>(d) and <ref>(a)–<ref>(d). Let the two sublattice magnetizations be 𝐌_1 and 𝐌_2, where |𝐌_1|>|𝐌_2|. 𝐌_1 points toward the external magnetic field. At 300 K, 𝐌_1 = 𝐌^Fe and 𝐌_2 = 𝐌^RE. At 60 K, 𝐌_1=𝐌^RE and 𝐌_2 = 𝐌^Fe.In Case 1, let a and b be the in-plane displacements of 𝐌_1 and 𝐌_2 in the LF mode, respectively. The ratio a/b can be approximated as |𝐌_1|/|𝐌_2|. In the HF mode, the in-plane displacements of 𝐌_1 and 𝐌_2 are regarded as identical and are denoted by c.<cit.> The in-plane displacements of 𝐌_1 and 𝐌_2 can be represented by the superposition of the LF and HF modes if a ≥ c≥ b holds true. In special cases where 𝐇_IFE is generated only in 𝐌_1 or 𝐌_2, a > b = c or a = c > b hold true, respectively. The dependences of the time evolution of M_z^Fe on the pump helicity, the direction of the external magnetic field, and the temperature were consistent with the experimental results, as presented in Table <ref>.In Case 2, the time evolution of M_z^Fe differs depending on whether the dominant motion is in the LF mode (Case 2a) or HF mode (Case 2b), because the sense of rotation is opposite between the two modes. In Case 2a [Fig. <ref>(a)], the M_z^Fe for the LF mode can be determined, but not that for the HF mode. In Case 2b [Fig. <ref>(b)], the M_z^Fe for the HF mode can be determined, but not that for the LF mode. The time evolutions of M_z^Fe are presented in Tables <ref> and <ref> for Cases 2a and 2b, respectively; however, they do not match the experimental results in Table <ref>.We conclude that the IFE inGdYb-BIG operates in opposite directions for each sublattice (Case 1). In special cases, 𝐇_IFE is generated only in 𝐌^Fe or 𝐌^RE. 
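To make the distinction between the two cases concrete at the level of Equations (1) and (2), the instantaneous torques can be evaluated numerically. The following minimal sketch (our own illustrative Python fragment; the magnitudes and axis choices are assumptions, with the sublattice magnetizations in-plane and the impulsive field perpendicular to them) shows that antiparallel effective fields acting on antiparallel sublattices produce parallel initial torques, whereas a common field produces antiparallel torques:

import numpy as np

gamma = 1.0                          # gyromagnetic ratio (arbitrary units)
M_fe = np.array([1.0, 0.0, 0.0])     # Fe sublattice along +H_0 (illustrative magnitude)
M_re = np.array([-0.6, 0.0, 0.0])    # RE sublattice antiparallel and smaller
H_ife = np.array([0.0, 0.0, 1.0])    # impulsive effective field, perpendicular to both

def torque(M, H):
    # dM/dt = -gamma * (M x H) during the pulse, Eqs. (1) and (2)
    return -gamma * np.cross(M, H)

# Case 1: antiparallel fields -> parallel torques (common in-plane tilt)
print(torque(M_fe, H_ife), torque(M_re, -H_ife))
# Case 2: identical fields -> antiparallel torques
print(torque(M_fe, H_ife), torque(M_re, H_ife))

The common initial tilt of Case 1 is what selects a single set of initial phases for the LF and HF modes, consistent with the observations summarized in Table <ref>.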
This sublattice-opposed action of the IFE is consistent with the sublattice selectivity of the FE, in which the signs of the contributions of Fe and RE ions to the FE are opposite and the Fe ion's contribution is dominant in the near-infrared range.<cit.> This is in contrast to antiferromagnets, in which the magnitude and direction of 𝐇_IFE acting on each sublattice are identical.<cit.> The initial phase analysis can be applied to other ferrimagnets by measuring below and above the compensation temperatures. A clarification of the sublattice selectivity of the IFE is expected to promote the development of devices that utilize ferrimagnetic materials and magneto-optical effects, such as the study of magnetization reversal using the IFE.<cit.> We are grateful to Kouki Mikuni for his technical assistance. This study was financially supported by the Japan Society for the Promotion of Science KAKENHI (grant Nos. JP19H01828, JP19H05618, JP21H01032, JP22H01154, and JP22K14588), the Frontier Photonic Sciences Project (NINS grant Nos. 01212002 and 01213004), and OML Project (NINS grant No. OML012301) of the National Institutes of Natural Sciences (NINS), and MEXT Initiative to Establish NeXt-generation Novel Integrated Circuits CenterS (X-NICS) (grant No. JPJ011438). | http://arxiv.org/abs/2312.16553v1 | {
"authors": [
"Toshiki Hiraoka",
"Ryo Kainuma",
"Keita Matsumoto",
"Kihiro T. Yamada",
"Takuya Satoh"
],
"categories": [
"cond-mat.mtrl-sci"
],
"primary_category": "cond-mat.mtrl-sci",
"published": "20231227123639",
"title": "Sublattice-selective inverse Faraday effect in ferrimagnetic rare-earth iron garnet"
} |
[email protected] Instituto de Física, Universidade Federal da Bahia, Campus Universitário de Ondina, 40170-115, Salvador, BA, Brazil [email protected] Centro das Ciências Exatas e das Tecnologias, Universidade Federal do Oeste da Bahia, 47808-021, Barreiras, BA, Brazil [email protected] Instituto de Física, Universidade Federal da Bahia, Campus Universitário de Ondina, 40170-115, Salvador, BA, Brazil [email protected] Instituto de Física, Universidade Federal do Rio de Janeiro, Rio de Janeiro, 21941-972, RJ, Brazil [email protected] Instituto de Física, Universidade Federal do Rio de Janeiro, Rio de Janeiro, 21941-972, RJ, Brazil In this work, we use the Shannon informational entropies to study an electron confined in a double quantum dot. We obtain the values of the entropies S_r, S_p and S_t as a function of the parameters A_2 and k of the harmonic-Gaussian symmetric double quantum well potential function. We conjecture that the entropy S_r maps the degeneracy of states when we vary A_2 and also is an indicator of the level of decoupling/coupling of the double quantum dot. We study the quantities S_r and S_p as measures of delocalization/localization of the probability distribution. Furthermore, we analyze the behaviors of the quantities S_p and S_t as a function of A_2 and k. Finally, we carried out an energy analysis and, when possible, compared our results with work published in the literature.Keywords: Shannon Informational Entropies; Double Quantum Dot; Harmonic-Gaussian Symmetric Double Quantum Well Potential Function.Electron Confinement study in a double quantum dot by means of Shannon Entropy Information. Ginette Jalbert January 14, 2024 =========================================================================================== § INTRODUCTION Quantum systems under confinement conditions present notable properties and are studied over a wide range of situations <cit.>. From the theoretical point of view, one finds different approaches and methods to treat these systems, by computing the electronic structure of atoms, ions and molecules under the confinement of a phenomenological external potential in the presence (or not) of external fields. For instance, analytical approximations for two electrons in the presence of a uniform magnetic field under the influence of a harmonic confinement potential representing a single quantum dot (QD) <cit.>, or a quartic one corresponding to a double QD <cit.>; in this last case the presence of an additional laser field is taken into account through the electron effective mass <cit.>; or different numerical methods of calculation related to the concern with the accuracy of describing the electron-electron interaction in (artificial) atoms, molecules and nanostructures, such as the Hartree approximation <cit.>, the Hartree-Fock computation <cit.> or the full configuration interaction method (Full CI) <cit.>, among others. Historically, the study of confined quantum systems began with the study of the electronic wave function of an atom, or ion, inside a box whose walls could be partially penetrable or impenetrable, and whose description led to the use of different phenomenological potentials <cit.>. In the case of QDs, the choice of potential profiles has usually involved a harmonic profile <cit.>, or an exponential one, to take into account the finite size of the confining potential well <cit.>; or a combination of both <cit.>.
Recently, we have analyzed the behavior of two electrons in a double QD with different confinement profiles, and under the influence of an external magnetic field, motivated by fundamental logic operations of quantum gates <cit.>. On the other hand, the comprehension of the properties of confined quantum systems is related to the choice of which physical quantities are computed and analysed; in the case of QDs one finds, for instance, the computation of linear and nonlinear absorption coefficients, refractive index, and harmonics generation susceptibilities <cit.>, as well as exchange coupling, electron density function and electronic spatial variance <cit.>. Although the mathematical basis of information theory was established a long time ago <cit.>, only recently has informational entropy been used as an alternative in the study of the properties of confined quantum systems <cit.>, and in particular QDs <cit.>. In this work we analyze how Shannon's informational entropy can contribute to the understanding of the confinement of an electron within a symmetric double parabolic-Gaussian quantum dot already used by E. Kasapoglu et al. <cit.> Throughout this paper we use atomic units and Cartesian coordinate axes. The present paper is organized as follows. In Section 2 our theoretical approach is discussed: in Sec. 2.1 the concepts and methodology adopted in this work are presented, in particular the phenomenological confinement potential, whose width, height and coupling are adjusted by different parameters; and in Sec. 2.2 the entropy quantities are defined for the sake of completeness. Section 3 is likewise divided into Secs. 3.1 and 3.2, where energy and entropies, respectively, are studied as functions of the parameters that rule the potential's height and coupling. § MODEL AND FORMULATION This section presents the concepts and methodology of the calculations used in this work. In particular, Subsection 2.1 is dedicated to the presentation of the system formed by an electron confined in a double quantum dot as the physical problem of interest, and in Subsection 2.2 the informational quantities S_r, S_p and S_t are defined. §.§ System of interest §.§.§ Hamiltonian In the present work we study a system formed by an electron confined in a double quantum dot, whose Hamiltonian is Ĥ = -∇⃗^2/(2m_c) + V̂(x,y,z), where m_c is the effective electronic mass and the confinement potential function is given by V̂(x,y,z) = V̂_DQD(x) + V̂_HO(y) + V̂_HO(z). The potential function V̂_DQD(x) is defined by a harmonic-Gaussian symmetric double quantum well function, so that V̂_DQD(x) = V_0 [ A_1 x^2/k^2 + A_2 e^-(x/k)^2], with A_1>0 and A_2>A_1, where V_0 is the depth of the well and k is the parameter that sets the width of the confinement region. The parameter A_1 adjusts the well to the width of the barrier, and A_2 sets the height of the internal barrier, thereby adjusting the coupling/decoupling between the wells. The potential functions V̂_HO(y) and V̂_HO(z) are defined by harmonic functions, that is, V̂_HO(y) = (1/2) m_c ω^2_y y^2 and V̂_HO(z) = (1/2) m_c ω^2_z z^2, where the angular frequencies ω_y and ω_z set the confinement strength. In Fig. <ref> we present the graph of the confinement potential function, V̂(x,y,0), for A_1 = 0.240, A_2 = 5.000, k = 377.945 a.u., V_0 = 0.00839 a.u., m_c = 0.067 a.u. and ω_y = 1.000 a.u.
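Before turning to the figure, we note that the potential surface can be reproduced directly from the expressions above. The following minimal sketch (in Python; the function names are ours and the parameters are the values just quoted) evaluates V̂_DQD(x) and the positions ± k√(ln(A_2/A_1)) of its two minima, which are also the centers X_μ used for the basis functions below:

import numpy as np

# parameter values quoted for the figure (atomic units)
V0, A1, A2, k = 0.00839, 0.240, 5.000, 377.945
m_c, omega_y = 0.067, 1.000

def V_DQD(x):
    # harmonic-Gaussian symmetric double quantum well
    return V0 * (A1 * x**2 / k**2 + A2 * np.exp(-(x / k)**2))

def V(x, y, z=0.0):
    # total confinement potential, with omega_z = omega_y
    return V_DQD(x) + 0.5 * m_c * omega_y**2 * (y**2 + z**2)

x_min = k * np.sqrt(np.log(A2 / A1))      # the two minima sit at +/- x_min
print(x_min, V_DQD(x_min), V_DQD(0.0))    # well depth vs. internal barrier height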
In this figure, we observe the form of the potential function that confines the electron in the double quantum dot, including the infinite barriers of confinement and the internal barrier that regulates the decoupling/coupling between the two wells. We are interested here in studying the influence of the structure of the double quantum dot on the properties of the system, more precisely, when we vary the parameters A_2 and k of the potential function V̂_DQD(x). Thus, to avoid excitation in the ŷ and ẑ directions, we fix the spatial confinement along these directions by setting the potential functions (<ref>) and (<ref>) with ω_y = 1.000 a.u. and ω_z = 1.000 a.u. In Fig. <ref> we present graphs with the general behavior of the potential function V̂_DQD(x) when: (a) we vary the parameter A_2 with fixed k, A_1 and V_0 and (b) we change the values of k with fixed A_2, A_1 and V_0. From graph (a) we see that the increase in A_2 values increases the level of decoupling between the two wells. Additionally, when the values of A_1 and A_2 are very close we have approximately a single well in V_DQD(x). According to graph (b), the increase in k values increases the width of the confinement barrier. The minimum values of V_DQD(x) are changed according to variations in A_2 or k. For a deeper understanding of the behavior of the potential function V_DQD(x) we present in Fig. S1 of the supplementary material the situations in which we vary the parameter A_1 with k, A_2 and V_0 fixed and in which we change V_0 with A_1, A_2 and k fixed.§.§.§ Wave functions and probability densities Our problem is to solve the Schrödinger equation Ĥψ(x,y,z) = Eψ(x,y,z) adopting the Hamiltonian (<ref>). Here, we write the wave function of an electron ψ(x,y,z) = ψ_x(x)ψ_y(y)ψ_z(z) in terms of basis functions of the Cartesian anisotropic Gaussian orbital (c-aniGTO) type centered at R⃗=(X,0,0), that is, ψ_x(x) = ∑_μ N_μ C_μ (x-X_μ)^n_x exp[-α_μ(x-X_μ)^2], ψ_y(y) = N_y y^n_y exp[-α_y y^2] and ψ_z(z) = N_z z^n_z exp[-α_z z^2], where N_μ, N_y and N_z are the normalization constants, C_μ are the molecular orbital coefficients obtained by diagonalization methods and X_μ is defined at ± k√(ln(A_2/A_1)) (the minimum values of V̂_DQD(x)). The integers n_x, n_y and n_z allow the classification of orbitals, for example, the types s-, p-, d-, ... correspond to n = n_x + n_y + n_z = 0, 1, 2, ..., respectively. In Eq. (<ref>), as we did in previous articles (Refs. <cit.>), we have considered two types of exponents in x: the first Gaussian exponent α_x has been obtained variationally, that is, by minimizing the energy functional in x, and the second is proportional to the first, α = (3/2)α_x. In turn, in Eqs. <ref> and <ref>, n_y and n_z were taken equal to 0 to avoid excitation in the directions y and z. Furthermore, taking α_y = α_z = α = (1/2) m_c ω_y and N_y = N_z = N, we have ψ_y(y) = N exp[-α y^2], ψ_z(z) = N exp[-α z^2]. The wave function ψ(p_x, p_y, p_z), in momentum space, has been obtained through a Fourier transform applied to ψ(x,y,z), so that we get ψ(p_x, p_y, p_z) = ψ_x(p_x) ψ_y(p_y) ψ_z(p_z), where ψ_x(p_x) = (1/√(2π)) ∑_μ N_μ C_μ e^{-p_x^2/(4α_μ) - i p_x X_μ} × ∑_{k=0, k even}^{n_μ} C(n_μ,k) (p_x/(2iα_μ))^{n_μ-k} Γ((k+1)/2)/α_μ^{(k+1)/2}, with C(n_μ,k) the binomial coefficient, ψ_y(p_y) = (N/√(2α)) exp(-p_y^2/(4α)), and ψ_z(p_z) = (N/√(2α)) exp(-p_z^2/(4α)). The probability density in the position space is defined as usual as ρ(x,y,z) = ρ_x(x)ρ_y(y)ρ_z(z) = |ψ_x(x)|^2 |ψ_y(y)|^2 |ψ_z(z)|^2, and using the Eqs.
(<ref>), (<ref>) and (<ref>), it yields: ρ_x(x) = |ψ_x(x)|^2 = ∑_μν N_μ N_ν C_μ^∗ C_ν (x-X_μ)^n_μ (x-X_ν)^n_ν × exp[-α_μ(x-X_μ)^2 - α_ν(x-X_ν)^2], ρ_y(y) = |ψ_y(y)|^2 = N^2 exp(-2α y^2), ρ_z(z) = |ψ_z(z)|^2 = N^2 exp(-2α z^2). Normalizing the densities ρ_y(y) and ρ_z(z) to unity we find N^2 = √(2α/π). The probability density in momentum space is defined as γ(p_x, p_y, p_z) = γ_x(p_x)γ_y(p_y)γ_z(p_z) = |ψ_x(p_x)|^2 |ψ_y(p_y)|^2 |ψ_z(p_z)|^2, where γ_x(p_x) = |ψ_x(p_x)|^2 = (1/(2π)) ∑_μν N_μ N_ν C_μ^∗ C_ν e^{-(p_x^2/4)(1/α_μ + 1/α_ν) + i p_x(X_μ - X_ν)} ∑_{k=0, k even}^{n_μ} ∑_{ℓ=0, ℓ even}^{n_ν} i^{n_μ+n_ν-k-ℓ} (-1)^{n_ν-ℓ} C(n_μ,k) C(n_ν,ℓ) × (p_x/(2α_μ))^{n_μ-k} (p_x/(2α_ν))^{n_ν-ℓ} [Γ((k+1)/2)/α_μ^{(k+1)/2}] [Γ((ℓ+1)/2)/α_ν^{(ℓ+1)/2}], γ_y(p_y) = |ψ_y(p_y)|^2 = (1/√(2απ)) exp(-p_y^2/(2α)), and γ_z(p_z) = |ψ_z(p_z)|^2 = (1/√(2απ)) exp(-p_z^2/(2α)). We present the details of the determination of γ_x(p_x) in the supplementary material. §.§ Shannon informational entropies In the context of atomic and molecular physics, the Shannon informational entropies in the space of positions, S_r, and momenta, S_p, can be written as <cit.> S_r = -∫ρ(x,y,z) ln(ρ(x,y,z)) d^3r and S_p = -∫γ(p_x,p_y,p_z) ln(γ(p_x,p_y,p_z)) d^3p. The probability densities ρ(x,y,z) and γ(p_x,p_y,p_z) are defined as in Eqs. (<ref>) and (<ref>). Adopting ρ(x,y,z) normalized to unity, the S_r entropy can be written as S_r = S_x + S_y + S_z, where S_x = -∫ρ_x(x) ln(ρ_x(x)) dx, S_y = -∫ρ_y(y) ln(ρ_y(y)) dy, and S_z = -∫ρ_z(z) ln(ρ_z(z)) dz. Analogously, using γ(p_x,p_y,p_z) normalized to unity, the S_p entropy becomes S_p = S_p_x + S_p_y + S_p_z, where S_p_x = -∫γ_x(p_x) ln(γ_x(p_x)) dp_x, S_p_y = -∫γ_y(p_y) ln(γ_y(p_y)) dp_y, and S_p_z = -∫γ_z(p_z) ln(γ_z(p_z)) dp_z. The quantities S_r and S_p are interpreted as measures of delocalization or localization of the probability distribution <cit.>. We determine the entropies S_y and S_z analytically by replacing Eqs. (<ref>) and (<ref>) in Eqs. (<ref>) and (<ref>), respectively, so that S_y = S_z = -(1/2) ln(2α/π) + 1/2. Substituting these results into Eq. (<ref>) we have S_r = S_x - ln(2α/π) + 1. S_x is calculated numerically using the density (<ref>) in Eq. (<ref>). Similarly, we obtain the values of S_p_y and S_p_z by substituting Eqs. (<ref>) and (<ref>) in Eqs. (<ref>) and (<ref>), so that S_p_y = S_p_z = 1/2 + (1/2) ln(2πα). Substituting these results into Eq. (<ref>) one gets S_p = S_p_x + ln(2πα) + 1. S_p_x is calculated numerically using the density (<ref>) in Eq. (<ref>). The sum S_t is composed of the addition of the quantities S_r and S_p, which, in turn, gives rise to the entropic uncertainty relation <cit.> S_t = S_r + S_p = -∫ρ(x,y,z) γ(p_x,p_y,p_z) ln[ρ(x,y,z) γ(p_x,p_y,p_z)] d^3r d^3p ≥ 3(1 + ln π). The value of S_t is thus bounded from below by the relation (<ref>). From the entropic uncertainty relation we can derive the Kennard uncertainty relation. More specifically, adding Eqs. (<ref>) and (<ref>) we obtain S_t = S_x + S_p_x + 2[ln(π) + 1] ≥ 3(1 + ln π). Note that this last expression does not depend on α. Shannon entropies are dimensionless quantities from the point of view of physics. However, subtleties surround this issue since, in principle, we have quantities that carry physical dimensions in the argument of the logarithmic function. For a more detailed discussion on this topic see Refs. <cit.>. § ANALYSIS AND DISCUSSION In this Section, the energy and the entropic quantities S_r, S_p and S_t determined by Eqs. (<ref>), (<ref>) and (<ref>) are discussed as a function of the parameter A_2 and, afterwards, as a function of k.
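Since S_x and S_p_x are obtained by numerical quadrature of the one-dimensional integrals above, it may help to sketch that step. The short Python routine below is an illustration only: the grid and the Gaussian test density are placeholders for the actual ρ_x(x), and the result is checked against the analytical Gaussian entropy S = (1/2)[1 + ln(2πσ^2)]:

import numpy as np

def shannon_entropy_1d(grid, rho):
    # S = -Int rho*ln(rho) dx by trapezoidal quadrature; rho must be
    # normalized to unity on the grid. Points with rho -> 0 contribute 0,
    # since rho*ln(rho) -> 0 in that limit.
    rho = np.asarray(rho, dtype=float)
    mask = rho > 1e-300
    integrand = np.zeros_like(rho)
    integrand[mask] = rho[mask] * np.log(rho[mask])
    return -np.trapz(integrand, grid)

# Check against the analytical result for a normalized Gaussian density
sigma = 1.0
x = np.linspace(-20.0, 20.0, 40001)
rho = np.exp(-x**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
print(shannon_entropy_1d(x, rho), 0.5 * (1.0 + np.log(2 * np.pi * sigma**2)))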
In the A_2 scan we have kept A_1, k and V_0 fixed; in the k scan, A_1, A_2 and V_0. The specific fixed values of the parameters in question, besides the values of m_c and V_0, are based on Ref. <cit.>. The calculations in our study are performed in atomic units (a.u.). In order to compare some of our results with those previously published in the literature we adopt in this section the parameter k in nanometers (nm) and, more specifically, we highlight the energetic contribution along the x-axis in meV. The optimized wave function was expanded into the following basis functions: on the x axis we employ orbitals of the type 2s2p2d2f2g (in total 10 functions located in each well) and on the y and z axes, 1s type orbitals. In cases where states are degenerate, symmetrization and antisymmetrization were performed. §.§ Energy analysis We present in Fig. <ref> the energy curves E_n (contribution along the x-axis) as a function of the parameter A_2 ranging from 0.240 to 5.000 for the first six quantum states (n = 0 - 5) with A_1 = 0.200, k = 20.000 nm and V_0 = 228.00 meV. The inset details the energy curves for A_2 ranging from 0.240 to 1.050, where states initially non-degenerate become degenerate two by two as A_2 increases. In Table S1 of the supplementary material the energy values as a function of A_2 can be found. According to Table S1, the degeneracy for states n = 0 and n = 1 appears in the interval 1.200 ≤ A_2 ≤ 1.400, and at A_2 = 1.300 we have E_0 = E_1 = 146.24146 meV. The degeneracies in n = 2 and n = 3 originate at values of 1.400 ≤ A_2 ≤ 1.600, and at A_2 = 1.500 we find E_2 = E_3 = 184.54417 meV. Finally, the degeneracies in n = 4 and n = 5 begin between 1.800 ≤ A_2 ≤ 2.000, and at A_2 = 1.900 we have E_4 = E_5 = 230.14020 meV. Conversely, we observe in the inset of Fig. <ref> that the decrease in the values of A_2 leaves the system with non-degenerate states. In Fig. 4a we present the probability density curves in position space, ρ_x(x), as a function of x for the ground state and different values of A_2. In the curves of ρ_x(x) for A_2 = 0.240 and 0.400 the state is not degenerate; in this case, the electron can be found in either or both wells of the function V_DQD(x), and even above the internal barrier. In the curves of ρ_x(x) for A_2 = 1.300 and 1.400 the state is degenerate and the electron can be found in only one of the wells of V_DQD(x). For completeness, in Fig. 4b we present the probability density curves in momentum space, γ_x(p_x), as a function of p_x for the ground state for A_2 = 0.240, 0.400, 1.300 and 1.400. In the main graph of Fig. <ref> we present the energy curves E_n (contribution along the x-axis) for the first six quantum states (n = 0 - 5) as a function of the parameter k varying from 0.500 nm to 30.000 nm with A_1 = 0.400, A_2 = 2.000 and V_0 = 228.00 meV. Furthermore, we indicate in the insets (a) the energy curves with the parameter k varying from 0.500 nm to 3.500 nm and (b) the energy curves with k varying from 11.000 nm to 30.000 nm. In Table S2 of the supplementary material the values obtained for the energies as a function of k can be found. We observe in the main graph of Fig. <ref> and inset (b) that with the increase in the values of k the energies merge two by two into one, that is, E_0 = E_1, E_2 = E_3, E_4 = E_5.
By inset (a), with the decrease in the values of k and the increase in the effects of confinement, we identify the appearance of non-degenerate states; besides, there is a considerable increase in the values of E_n. In the graphs of Fig. <ref> we present the probability density curves for the ground state in position and momentum space, ρ_x(x) (Fig. <ref>a) and γ_x(p_x) (Fig. <ref>b), respectively, for different values of k. In both cases, ρ_x(x) and γ_x(p_x), for the value of k = 0.500 nm the ground state is non-degenerate; otherwise, for k = 17.000 nm, 19.000 nm and 30.000 nm the state is degenerate. In this way, we perceive changes in the shape of the probability distributions when the values of k do or do not imply degeneracy in the energies. As long as comparison has been possible, we have obtained a good agreement with the values found in Ref. <cit.> regarding the energy as a function of A_2 and k. As in the present work, Duque et al. have also used matrix diagonalization methods; however, they have adopted a wave function expanded in terms of orthonormal trigonometric functions.§.§ Informational Analysis In the main graph of Fig. <ref> we present the curves of Shannon entropy in the space of positions for the first six quantum states (n = 0 - 5), S^n_r, as a function of the parameter A_2 ranging from 0.240 to 5.000 with A_1 = 0.400 and V_0 = 228.00 meV (k kept fixed). The inset details the curves of S^n_r in the range of A_2 = 0.240 to 1.100. In Table S3 of the supplementary material we provide the values of S^n_r as a function of A_2. According to Fig. <ref> we notice that the values of the Shannon entropy curves S^n_r get closer two by two as A_2 increases, i.e., S^0_r → S^1_r, S^2_r → S^3_r and S^4_r → S^5_r. In Table S3, we identified that the degeneracy of S^n_r for states n=0 and n=1 appears in the values of A_2 comprised between 1.300 ≤ A_2 ≤ 1.500, and at A_2 = 1.400 we have S^0_r = S^1_r = 11.35961. The degeneracy in S^n_r for states n=2 and n=3 becomes evident in the interval 1.400 ≤ A_2 ≤ 1.600, and at A_2 = 1.500 we find S^2_r = S^3_r = 11.63816. Finally, although at A_2 = 1.750 we already have S^4_r = S^5_r = 11.78527, the degeneracy in S^n_r for n=4 and n=5 appears in the values between 1.800 ≤ A_2 ≤ 2.000, and at A_2 = 1.900 we have the identical value of S^4_r = S^5_r = 11.77304. In Table <ref> we present the regions of A_2 where the degeneracies in energy and entropy S^n_r for states n=0-5 originate. For states n=0 and n=1, the range of values of A_2 where this occurs coincides reasonably with the range of values of A_2 where the degeneracy in S^n_r originates. For states n=2 and n=3, and also for n=4 and n=5, the sets of values of A_2 where the degeneracy in energies and in S_r originate are identical. In this way, we conjecture that the Shannon entropy in the space of positions, S^n_r, can successfully map the degeneracy of states when we vary the values of A_2 in the double quantum dot studied. By analyzing the inset of Fig. <ref> we see that as A_2 decreases, the values of S^n_r increase until they reach a maximum value and, from that point on, they decrease again. The oscillations of the values of S^n_r can be justified by taking into account that the information entropies reflect a measure of the delocalization/localization of ρ_x(x). For example, for the ground state, according to Table S3, we highlight that: at A_2 = 1.300 we have S_r = 11.37033, at A_2 = 0.400 we have a maximum value of S_r = 11.71876 and, finally, at A_2 = 0.240 we find S_r = 11.36528. These values of S^n_r agree with Fig.
<ref>(a), since the delocalization of the green curve is smaller than that of the red curve, which, in turn, is greater than that of the black curve. Similar analyses can be undertaken for other states. Taking the state n=0 in Fig. <ref>, starting from A_2 = 0.240, where S^0_r = 11.36528, the values of S^0_r increase up to the maximum value of S^0_r = 11.71876 at A_2 = 0.400. From this point of maximum entropy, the values of S^0_r decrease, passing through A_2 = 1.300 where S_r = 11.37033. We observe in Fig. <ref>(a) that it is precisely at A_2 = 0.400 that the internal barrier of V_DQD(x) begins to influence the decoupling between the two wells. Still, from Fig. <ref>(a), at A_2 = 1.300 the internal barrier of the function V_DQD(x) is quite consolidated, strongly favoring the decoupling between the two wells of the function. In this way, we conjecture that the information entropy, through S^0_r, is an indicator of the level of decoupling/coupling of the double quantum dot studied. In the graph of Fig. <ref> we present for the first six quantum states (n = 0 - 5) the Shannon entropy curves in the space of positions, S^n_r, as a function of the parameter k varying from 0.500 nm to 30.000 nm with A_1 = 0.400, A_2 = 2.000 and V_0 = 228.00 meV. In the inset we highlight S^n_r in the range from k = 0.500 nm to 5.000 nm. In Table S4 of the supplementary material we provide the values of S^n_r as a function of k. According to the main graph of Fig. <ref>, when k tends to infinity degeneracy arises in the values of S_r such that S^0_r = S^1_r, S^2_r = S^3_r and S^4_r = S^5_r. An increase in the values of k widens the barriers of the potential function V_DQD(x), reducing the effects of confinement. In this situation, the uncertainty in determining the location of the electron increases and, consequently, we identify an increase in the values of S_r. In fact, as k increases the delocalization in ρ_x(x) increases according to Fig. <ref>(a). On the other hand, we observe in the inset of Fig. <ref> that as k decreases there is a break in the degeneracy of S_r. Furthermore, the decrease in k generates a narrowing in the barriers of the potential function V_DQD(x). In this case, there is an increase in the confinement situation and, consequently, a decrease in the uncertainty in the location of the electron, causing the S_r values to decrease. Here, the delocalization in ρ_x(x) decreases according to Fig. <ref>(a). In Fig. <ref> we display for the first six quantum states (n = 0 - 5) the Shannon entropy curves in momentum space, S^n_p, as a function of the parameter A_2 varying from 0.240 to 5.000 with A_1 = 0.400 and V_0 = 228.00 meV (k kept fixed). In Table S5 of the supplementary material we provide the values of S^n_p as a function of A_2. From Fig. <ref> and Table S5 we see that, as A_2 increases, S^0_p → S^1_p. We did not identify degeneracy in S^2-5_p. In general, with the decrease in A_2 the values of S^n_p also decrease until they reach a minimum value, then increase again. Similar to what we did for S^n_r, the oscillatory behavior of the values of S^n_p can be explained based on the curves of γ_x(p_x) in Fig. <ref>. In Fig. <ref> we present for the first six quantum states (n = 0 - 5) the Shannon entropy curves in momentum space, S^n_p, as a function of the parameter k varying from 0.500 nm up to 30.000 nm with A_1 = 0.400, A_2 = 2.000 and V_0 = 228.00 meV. In Table S6 of the supplementary material we have the values of S^n_p as a function of k. According to Fig.
<ref> and Table S6, when the values of k increase we have degeneracy in S^0_p and S^1_p. We did not identify degeneracy in S^2-5_p. On the other hand, when k decreases the values of S^n_p increase. The behavior of the curve for n = 4 shows intriguing oscillations. All values of S^n_p obtained in this work are negative, as can be seen in Tables S5 and S6 of the supplementary material. This result has an explanation in the quantum context <cit.>: when the limits of confinement are very small, the probability density becomes large and ρ(x,y,z) > 1. In this situation, -ρ(x,y,z) ln(ρ(x,y,z)) < 0 and so S_p (or S_r) can be negative. The original work by Shannon <cit.> also indicates the possibility of obtaining negative values for informational entropy when working with continuous distributions. In Fig. <ref> we present for the first six quantum states (n = 0 - 5) the entropic sum curves S^n_t as a function of the parameter A_2 varying from 0.240 to 5.000 with A_1 = 0.400 and V_0 = 228.00 meV (k kept fixed). In Table S7 of the supplementary material we provide the values of S^n_t as a function of A_2. From Fig. <ref> and Table S7, as A_2 increases we have S^0_t → S^1_t. We did not identify degeneracy in S^2-5_t. Furthermore, when A_2 tends to infinity S^n_t tends to constant values. More specifically, the value S^0_t = 6.44814 for A_2 = 0.240 and A_1 = 0.200 (see the green curve in Fig. <ref>) is approximately three times the entropy sum of the one-dimensional harmonic oscillator in the ground state presented in Ref. <cit.>. We identify in Fig. <ref> oscillations in the S^n_t curves with the occurrence of maximum and minimum values for states n = 2 - 5. An elegant explanation for the extreme values of the entropic sum is presented in Ref. <cit.>: in general, the derivative of S_t with respect to A_2 is given by ∂S_t/∂A_2 = ∂S_r/∂A_2 + ∂S_p/∂A_2, and the extreme points in the curves occur when ∂S_t/∂A_2 = 0, that is, when the absolute values |∂S_r/∂A_2| and |∂S_p/∂A_2| are equal. Fig. <ref> displays for the first six quantum states (n = 0 - 5) the entropic sum curves S^n_t as a function of the parameter k ranging from 0.500 nm to 30.000 nm with A_1 = 0.400, A_2 = 2.000 and V_0 = 228.00 meV. In Table S8 of the supplementary material we provide the values of S^n_t as a function of k. As shown in Fig. <ref>, as k grows we see a degeneracy of S^0_t and S^1_t. For S^2-5_t we did not identify degeneracy. When k tends to infinity, the values of S^n_t tend to constant values. With the decrease in the values of k, oscillations appear in the curves of S^2-5_t; such oscillations can be analyzed in a similar way to that presented previously, that is, taking into account the derivative of S_t with respect to k. The values of S_r and S_p, whether as a function of A_2 or k, are compatible with the entropic uncertainty relation. Furthermore, all values of S_t obtained in this work (see Tables S7 and S8) are above the minimum value of 6.43419 that the entropic sum can assume according to relation (<ref>).§ CONCLUSION We have studied the electronic confinement in a double quantum dot using Shannon informational entropies.
The confinement potential V̂(x,y,z) has been described phenomenologically by using a 3D harmonic-Gaussian function representing a double quantum dot symmetric in the x direction, and with a harmonic profile in the y and z directions. In particular, we have varied the parameters A_2 and k, which are related to the height and the width of the confinement potential internal barrier, respectively. We have initially established the energetic contribution along the x direction for the first six quantum states of the system (n = 0 - 5). We have analyzed the values of the parameter A_2 for which the energy values correspond to degenerate and non-degenerate states. Regarding the k parameter, we have highlighted the considerable increase in energy values when this quantity tends to zero, increasing the confinement effects on the electron. As long as comparison was possible, we have obtained a good agreement with the values of energy as a function of A_2 and k found in the literature. We have obtained the entropy values S^n_r as a function of A_2 and k for the quantum states n = 0 - 5. In the first situation, we conjecture that the entropy S^n_r successfully maps the degeneracy of states when we vary the coupling parameter A_2. We also conclude that information entropy, through S^0_r, is an indicator of the level of decoupling/coupling of the double quantum dot. Furthermore, taking into account that informational entropies are used as a measure of delocalization/localization of ρ_x(x), we justify the fluctuations in the values of S^n_r as a function of A_2 and present the study of the values of S^n_r as a function of k. In addition to the informational analysis, we have determined the values of S^n_p and S^n_t as functions of A_2 and k. In this treatment, by analyzing trends and the derivative of S^n_t, we focused on general aspects of the behavior of the values obtained. Additionally, we conclude that all values obtained for S^n_t respect the entropic uncertainty relation. In future work we shall delve deeper into the physical explanations of the behavior of the values of S^n_p and S^n_t as a function of A_2 and k. Finally, from another perspective, we shall also use Shannon's informational entropies to analyze an electron confined in a double quantum dot, however, this time subjected to external fields. Supporting Information Supporting Information is available from the authors. Acknowledgements The authors acknowledge Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) and Coordenação de Aperfeiçoamento Pessoal de Nível Superior (CAPES) for the financial support. Conflict of Interest The authors declare no conflict of interest. | http://arxiv.org/abs/2312.16093v1 | {
"authors": [
"Wallas S. Nascimento",
"Angelo M. Maniero",
"Frederico V. Prudente",
"Carlos R. de Carvalho",
"Ginette Jalbert"
],
"categories": [
"physics.atom-ph"
],
"primary_category": "physics.atom-ph",
"published": "20231226153600",
"title": "Electron Confinement study in a double quantum dot by means of Shannon Entropy Information"
} |
| http://arxiv.org/abs/2312.16076v1 | {
"authors": [
"Amrita Mandal",
"Ujjwal Sen"
],
"categories": [
"quant-ph"
],
"primary_category": "quant-ph",
"published": "20231226145335",
"title": "Stronger resilience to disorder in 2D quantum walks than in 1D"
} |
Autonomous Driving using Residual Sensor Fusion and Deep Reinforcement Learning Amin Jalal Aghdasian, Amirhossein Heydarian Ardakani, Kianoush Aqabakee, Farzaneh Abdollahi [email protected], [email protected], [email protected], [email protected] Department of Electrical Engineering, Amirkabir University of Technology (Tehran Polytechnic), Tehran, Iran January 14, 2024 ======================================================================================================================================================================================================================================================================================================================= This paper proposes a novel approach by integrating sensor fusion with deep reinforcement learning, specifically the Soft Actor-Critic (SAC) algorithm, to develop an optimal control policy for self-driving cars. Our system employs a two-branch fusion method for vehicle image and tracking sensor data, leveraging the strengths of residual structures and identity mapping to enhance agent training. Through comprehensive comparisons, we demonstrate the efficacy of information fusion and establish the superiority of our selected algorithm over alternative approaches. Our work advances the field of autonomous driving and demonstrates the potential of reinforcement learning in enabling intelligent vehicle decision-making.Autonomous Driving, Sensor Fusion, Deep Reinforcement Learning, Soft Actor-Critic, CARLA § INTRODUCTION In recent years, Autonomous Driving (AD) has become a tangible reality, and as this technology progresses, it is imperative to ensure the ongoing safety and reliability of this transportation mode. With the potential to significantly decrease accidents stemming from human error, AD emerges as a promising solution for enhancing road safety. According to data from the National Highway Traffic Safety Administration (NHTSA), a staggering 94% of all traffic accidents can be attributed to human error or its associated factors. AD systems come equipped with advanced safety features like collision avoidance systems and sophisticated sensors capable of preemptively averting accidents. These safety mechanisms consistently monitor the vehicle surroundings and can react much more swiftly than a human driver when confronted with unexpected situations <cit.>. AD relies on a variety of control strategies, including classic controllers such as PID <cit.>, MPC <cit.>, LQR <cit.>, and Artificial Intelligence (AI) controllers such as imitation learning based control <cit.> and RL based control. PID control is simple but requires tuning, while MPC handles complex scenarios. LQR control optimizes a cost function efficiently, but it requires accurate knowledge of system dynamics and may need to be redesigned for significant changes <cit.>. Model-free approaches offer distinct advantages over model-based methods in various contexts. They shine in complex and dynamic environments where precise modeling is challenging, providing adaptability to uncertain or evolving dynamics. Their ease of implementation and reduced need for manual tuning make them appealing for practical applications. Model-free reinforcement learning, in particular, stands out for its ability to continuously learn and adapt, achieving impressive performance across diverse domains. These methods are well-suited for scenarios where modeling is impractical or resource-intensive and excel in handling novel or unforeseen situations without the constraints of fixed assumptions.
Imitation learning utilizes expert demonstrations but may encounter errors when faced with unseen events or situations. In contrast, RL control offers adaptability, continuous learning, and the ability to handle diverse conditions <cit.>.Deep reinforcement learning (DRL) represents a subset of machine learning methodologies in which an agent acquires decision-making capabilities through interactions with its environment <cit.>. Within AD, RL has found applications in developing driving policies <cit.>. These algorithms have successfully tackled complex problems such as Markov Decision Problems (MDPs). However, as highlighted earlier, it can be challenging for the algorithm to learn from all possible states and determine the optimal driving path. <cit.> employs a Deep Q-Network (DQN) and Long Short-Term Memory (LSTM) architecture, along with a novel observation input, to enable autonomous agents to navigate intricate scenarios autonomously, and <cit.> employs DQN and Deep Deterministic Policy Gradient (DDPG) algorithms to train autonomous vehicle control models within a realistic simulator. DQN and DDPG achieve the desired control objectives, with DDPG demonstrating superior performance since it is suitable for continuous tasks. In <cit.>, two approaches were developed: pre-training Soft Actor-Critic (SAC) with Learning from Demonstrations (LfD) and an online combination of SAC, LfD, and Learning from Interventions (LfI).Given recent advances in DRL algorithms for self-driving cars, their performance can be further enhanced by incorporating more sophisticated methods for data utilization in the network structures.Sensor fusion enables AD systems to perceive and understand the surrounding environment accurately. This technology integrates data from diverse sensors, aligning them to generate different representations of the vehicle's surroundings <cit.>. This fusion of information not only enhances the robustness and reliability of the perception system in various driving scenarios <cit.> but also helps in mitigating the limitations of individual sensors, thus improving the overall performance of autonomous vehicles <cit.>. In the context of RL, sensor fusion becomes even more crucial. RL-based control systems learn to make decisions by interacting with the environment, and the quality of these decisions depends on the quality of the perceived information <cit.>. With the integrated and comprehensive environmental models provided by sensor fusion, RL algorithms can better understand the state of the environment, resulting in more accurate predictions and safer decision-making <cit.>. Moreover, incorporating multi-modal fusion into DRL frameworks has demonstrated significant potential in enhancing decision-making and reliability in complex environments <cit.>. This study presents a novel vehicular control methodology that leverages sensor data fusion and reinforcement learning. Our approach incorporates a sensor fusion module consisting of a dual-branch architecture, effectively combining information from the vehicle image and tracking sensors. We employed residual blocks within this module to enhance feature extraction and representation learning. By integrating the Soft Actor-Critic (SAC) algorithm with the sensor fusion technique, our proposed method unifies the outputs of image sensors with the vehicle speed and position, resulting in an optimal structure for generating control commands and faster convergence.
To evaluate the effectiveness of our algorithm, we conducted comprehensive training and validation procedures using the CARLA simulation platform. This platform offers a realistic environment to assess our algorithm's capabilities and simulate real-world driving scenarios. Furthermore, this paper includes an extensive comparative analysis, highlighting the potential advantages of our approach compared to existing state-of-the-art methods in the field. The paper is structured into five sections. Section 1 provides an introduction, offering insights into autonomous driving approaches. Section 2 delves into the background information. Section 3 outlines the proposed methodology, highlighting the essential aspects. The results and discussion section, Section 4, presents the research findings. Finally, Section 5, the conclusion, summarizes the paper and suggests potential future research directions.§ SOFT ACTOR-CRITIC (SAC) <CIT.> Soft Actor-Critic (SAC) represents an off-policy DRL algorithm that delivers efficient learning while preserving the advantages associated with entropy maximization and stability <cit.>. SAC bridges stochastic policy optimization and DDPG <cit.> methods. While not a direct successor to Twin Delayed DDPG <cit.>, it utilizes the clipped double-Q technique and benefits from target policy smoothing due to its inherent stochasticity. SAC emphasizes entropy regularization, wherein the policy seeks a balance between expected return and policy randomness. This approach enhances exploration, aiding later learning stages, and guards against premature convergence to suboptimal solutions.To delve into SAC, we must initially introduce the context of entropy-regularized reinforcement learning. In this framework, value functions are defined by slightly different equations. The entropy H is determined as follows:H(P) = E_x ∼ P[-log P(x)]where x is a random variable with a probability mass or density function P. In entropy-regularized reinforcement learning, the agent receives an additional reward at each time step proportionate to the policy entropy for that specific time step. This adjustment transforms the RL goal, which is the reward maximization problem, into:π^* = max_π[E_τ∼π[∑_t=0^∞γ^t (R(s_t, a_t, s_t+1) + α H(π(·| s_t)))]] In (<ref>), a transition is denoted by (s_t, a_t, r_t, s_t+1, a_t+1), including state, action, and reward. α represents the trade-off coefficient, γ is the discount factor and π(·| s_t) is the policy in state s_t. We consider an infinite-horizon discounted setting. Under this assumption, we introduce modified value functions. Specifically, the action-state value function Q^π is adapted to incorporate entropy bonuses at each time step: Q^π(s_t,a_t) = E [R(s_t,a_t,s_t+1) + γ(Q^π(s_t+1,a_t+1) + α H(π(·| s_t+1)))] = E [R(s_t,a_t,s_t+1) + γ(Q^π(s_t+1,a_t+1) - α log(π(a_t+1| s_t+1)))]This approach considers two parameterized soft Q-functions, Q(s_t, a_t), and a policy represented as π(a_t|s_t). Each component relies on specific parameter sets, denoted as ϕ_1, ϕ_2, and θ, respectively. For instance, the Q-functions can be modeled as expressive neural networks. Actions are resampled as ã_t+1∼π (·| s_t+1) under the current policy instead of being drawn from the replay buffer. Therefore, it is essential to distinguish newly sampled actions.
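To make the resampling step concrete, the sketch below shows a reparameterized Gaussian policy head in PyTorch. The tanh squashing and its log-probability correction are common SAC implementation details assumed here for illustration; they are not spelled out in the text:

import torch
from torch.distributions import Normal

def resample_action(mu, log_std):
    # a~ ~ pi_theta(.|s): reparameterized draw, differentiable w.r.t. theta
    dist = Normal(mu, log_std.exp())
    u = dist.rsample()
    a = torch.tanh(u)  # bound the action, e.g. steering in [-1, 1]
    # log pi(a|s) with the change-of-variables correction for tanh
    log_prob = dist.log_prob(u) - torch.log(1.0 - a.pow(2) + 1e-6)
    return a, log_prob.sum(dim=-1, keepdim=True)

# The resampled pair (a~, log pi(a~|s)) feeds both the soft target
# r + gamma * (Q(s', a~') - alpha * log pi(a~'|s')) and the policy loss.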
The parameters of the soft Q-function can then undergo training aimed at minimizing the soft Bellman residual:J_Q(ϕ_i) = E_(s_t,a_t) ∼ D[1/2(Q_ϕ_i(s_t,a_t) - Q̂(s_t,a_t))^2]where the target Q is represented as:Q̂(s_t,a_t) = r_t + γ (Q(s_t+1,ã_t+1) - α log π(ã_t+1| s_t+1))Equation (<ref>) can be optimized using gradient descent:∇̂_ϕ J_Q(ϕ) = ∇_ϕ Q_ϕ(s_t,a_t)(Q_ϕ(s_t,a_t) - Q̂(s_t,a_t)) The policy parameters can be acquired by minimizing the Kullback-Leibler divergence <cit.>. By reparameterizing the expectation, we can now define the policy objective as: J_π (θ) = E_s_t ∼ D[ logπ_θ (ã_θ (s_t)| s_t) - Q_ϕ(s_t,ã_θ (s_t))] Then, we will approximate the gradient of (<ref>) as follows:∇̂_θ J_π(θ) = ∇_θlog(π_θ(ã_t | s_t))+ (∇_ã_tlog(π_θ(ã_t | s_t)) - ∇_ã_t Q(s_t,ã_t)) ∇_θã_θ(s_t) Through iterative interactions and data collection, the Q-function and policy networks will reach a state of convergence, which enables the agent to obtain the maximum reward in each episode.§ METHODOLOGY§.§ Problem Formulation The autonomous driving problem can be delineated into several primary objectives. These objectives include arriving at the designated destination while ensuring collision avoidance and navigation within the road lane with minimal error. Each state s_t encompasses the data acquired from the car front camera and the tracking sensor. These inputs are combined in a structured manner, as described in the subsequent section, to generate the desired control command. Car navigation relies on two controls: longitudinal and lateral. Consequently, the DRL agent produces two control commands: acceleration and steering angle. The car acceleration range is [0,1] and the steering range is [-1,1]. Furthermore, the episode return R_t for the problem is established by (<ref>), ensuring the car navigates between the designated boundaries towards the specified destination. R_t = ∑_t ( | v_t cos(ϕ_t)| - | v_t sin(ϕ_t)| - | v_t| | d_t| ) where the angle ϕ_t represents the car orientation relative to the road, the cosine term denotes the speed aligned with the road length, and the sine term represents the speed perpendicular to the road. By formulating the return function as (<ref>), the RL agent maximizes the velocity component along the road through the first term while suppressing the lateral component through the second. Furthermore, the third term aims to minimize vehicle deviation from the center of the lane, as indicated by the distance d_t. Fig. <ref> visually represents the reward parameters of the vehicle on the road.The episode terminates in the event of a collision or the car deviating from its designated lane, resulting in a reward of -200. Conversely, upon reaching the goal destination, a reward of 100 is recorded for the episode. Consequently, the agent aims to navigate to the destination without collisions to maximize the overall return.§.§ Sensor Data Fusion The fusion module within our RL agent enhances its decision-making capabilities and overall performance by integrating various data sources and unifying information.This module plays a vital role in both the actor and critic components, effectively improving car navigation by identifying the optimal approach to information fusion. A visual representation of the fusion module structure is presented in Fig. <ref>.The fusion module within the system employs a dual-branch approach for integrating the information derived from the image and the tracking sensors.
The image branch encompasses multiple residual blocks and a fully connected layer. The residual block extracts image features using multiple convolutional layers ℱ_W_i and a shortcut path (<ref>). In order to ensure compatibility of dimensions, a projection W_s is additionally employed using a convolution layer along the shortcut path. Incorporating skip connections within these residual blocks facilitates the direct propagation of information, thereby mitigating the vanishing gradient problem <cit.>. Moreover, a fully connected layer transforms the features extracted from the image into the desired dimensional representation.features = ℱ_W_i(image) + W_s image In the other branch, the data obtained from the tracking sensor undergoes resizing via a fully connected layer. The fused information is transmitted throughout the remaining network structure by establishing a connection between these two branches. §.§ Proposed Method ArchitectureOur proposed method represents a combination of sensor fusion and the SAC architecture for controlling autonomous vehicles. SAC is chosen in this research due to its vital attributes of stability, exploration, and robustness <cit.>. The fusion structure proposed in the previous section is implemented at the outset of each function approximator (Q-network 1, Q-network 2, and the actor). This involves concatenating camera and tracking sensor data features, utilizing them as input to the model while carefully considering the importance of each data component in each function approximator to achieve optimal results. As illustrated in Fig. <ref>, the actor model is trained based on the approximated action-value function, and after a sufficient number of training episodes, we can attain the desired level of performance. The reason for merging multiple inputs with the proposed fusion structure is to extract features at the earlier layers and combine these representations at the subsequent layers to create a more comprehensive understanding of the environment. Having a greater variety of sensor types implies increased data dimensions collected from the environment. Higher dimensionality provides a richer source of information for making ultimate decisions. Nevertheless, selecting the most concise data representation that exhibits the highest correlation with the desired output remains critical. We used data concatenation and mapped it to a reduced-dimensional space in subsequent layers with the aforementioned specifications. However, this approach presents some challenges. We encounter issues such as vanishing gradients and slow convergence during the feature extraction and selection processes. To navigate these challenges effectively, we added the residual structures and adopted the SAC algorithm. The pseudocode of the proposed approach is shown in Alg. <ref>. § RESULTS AND DISCUSSION We selected CARLA <cit.>, an open-source simulator designed for AD research, for this study. CARLA provides a Python library that facilitates interaction with the environment, enabling dynamic spawning and removal of cars and objects within the simulated environment. To assess the efficacy of the proposed method against alternative algorithms, the SAC algorithm was subjected to three distinct training scenarios in CARLA. These scenarios encompassed using the sensor fusion block, relying solely on camera imagery, and relying solely on tracking sensor data under similar conditions.
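For reference, a minimal PyTorch sketch of the dual-branch fusion module of Section 3.2 is given below. The 100- and 16-feature widths match the training setup reported in the next subsection, while the channel counts, block depth, and tracking-input size are illustrative assumptions, since the paper does not specify them:

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    # Conv block F_W with a 1x1 projection shortcut W_s, as in Eq. (10)
    def __init__(self, c_in, c_out, stride=2):
        super().__init__()
        self.f = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, stride, 1), nn.ReLU(),
            nn.Conv2d(c_out, c_out, 3, 1, 1),
        )
        self.proj = nn.Conv2d(c_in, c_out, 1, stride)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.f(x) + self.proj(x))

class FusionModule(nn.Module):
    # Image branch -> 100 features, tracking branch -> 16 features,
    # concatenated into the 116-dimensional fused representation
    def __init__(self, track_in=4):
        super().__init__()
        self.img_branch = nn.Sequential(
            ResidualBlock(3, 16), ResidualBlock(16, 32), ResidualBlock(32, 64),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(), nn.Linear(64 * 16, 100),
        )
        self.track_branch = nn.Linear(track_in, 16)

    def forward(self, image, track):
        return torch.cat([self.img_branch(image), self.track_branch(track)], dim=-1)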
Furthermore, we employed the DDPG algorithm used in previous studies, following a similar training procedure, to establish the system's superiority compared to previous works <cit.>. §.§ Training We designed three different training configurations to assess the impact of sensor fusion on the RL agent performance. In the first configuration, we utilized sensor fusion, combining both image and tracking sensor data, to inform the agent's decision-making process. In the second configuration, we isolated the use of only image data, while in the third configuration, we relied solely on tracking sensor data for decision-making. These configurations were designed to highlight the versatility and adaptability of our algorithm under varying data input scenarios. The same training process has been repeated for the DDPG algorithm to compare the performance of the proposed algorithm with previous works <cit.>.To create episodic tasks for training, we randomly choose two points on the town roadway and create a path from the start point to the end point. Fig. <ref> shows an example of a random path generated on the map. By training on different town maps and weather conditions, our DRL agent becomes more general and robust, capable of handling various driving tasks and uncertainties.The camera image is mapped to 100 features using the proposed residual structure. Then, a fully connected layer maps the tracking sensor data to 16 features to be concatenated with the image features. In the Q-networks, we have 116 input features for our agent; by adding the two actions, throttle and steering wheel angle, 118 features will be delivered. We trained the agent for 5000 episodes with the Adam optimizer and a learning rate of 0.0001. Also, the replay buffer size is set to 1e6 to cover a wide range of exploration. The average reward is calculated in each episode, as mentioned before. The training and testing of the proposed approach were performed on hardware with a Core i9 5 GHz CPU and an Nvidia GeForce RTX 3080 Ti GPU, with 32 GB of RAM. As shown in Fig. <ref>, the proposed fusion structure leads to faster convergence and higher average reward with minor deviation in both the SAC and DDPG algorithms compared to using data from only one sensor. Also, SAC can unleash the potential of the proposed structure better than DDPG due to its use of the entropy term, which enhances optimization, and its better exploration strategy. §.§ Evaluation The validation of the proposed approaches employs the Root Mean Square Error (RMSE) metric, designed to evaluate the error between each algorithm and the ground truth provided by the CARLA Simulator. This approach ensures that the performance analysis of the algorithms adheres to a uniform criterion.In order to compare the trained agents, 25 different routes are designated on the map. The agents are navigated along these routes, facilitating the calculation of the RMSE between the actual route and an ideal route derived from the interpolation of the waypoints. Table <ref> shows the RMSE yielded when the agent navigates the route for 25 iterations. Additionally, the table reports the maximum and minimum error recorded along the route, as well as the error standard deviation (std) of the agent driving from the initial to the terminal point. Our evaluation results demonstrate the significant improvement achieved through sensor fusion.
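The route-error metric can be sketched as follows in Python; linear interpolation of the waypoints and nearest-point matching are our assumptions, since the exact interpolation scheme is not specified:

import numpy as np

def route_rmse(driven_xy, waypoints_xy, n_ref=1000):
    # RMSE between the driven trajectory and an ideal route obtained by
    # (assumed) linear interpolation of the waypoints; each driven point
    # is matched to its nearest reference point.
    t = np.linspace(0.0, 1.0, n_ref)
    s = np.linspace(0.0, 1.0, len(waypoints_xy))
    ref = np.column_stack([np.interp(t, s, waypoints_xy[:, i]) for i in (0, 1)])
    d = np.linalg.norm(driven_xy[:, None, :] - ref[None, :, :], axis=-1).min(axis=1)
    return float(np.sqrt(np.mean(d**2)))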
We observed a remarkable reduction in the overall error when employing sensor fusion, indicating that the agent made more informed and successful decisions.On the other hand, employing solely the image as the agent input shows superior performance compared to relying on the tracking sensor. This disparity arises from the agent's ability to determine road segments and anticipate forthcoming actions, particularly during turning maneuvers. Conversely, the information derived from the tracking sensor only provides the agent with the present state of the vehicle. A comparison between SAC and DDPG also shows the relatively lower error achieved by the SAC agent. This advantage comes from the effective employment of a stochastic policy, careful consideration of the entropy function, and an improved exploration process, all aimed at attaining the optimal policy. These findings underscore the crucial role of sensor fusion in enhancing the overall performance and robustness of our RL algorithm, making it a valuable approach for a wide range of applications where both image and tracking sensor data are available.§ CONCLUSION In this study, we have introduced an intelligent approach for car navigation by integrating sensor fusion into deep reinforcement learning. To enhance the agent's learning performance and ascertain an optimal information integration method, we developed a fusion block that merges data acquired from the vehicle camera and tracking sensor. Our proposed methodology was evaluated within the CARLA simulation platform, which provides realistic simulations and offers high diversity and repeatability in autonomous driving.To substantiate the efficacy of our method in contrast to prior research, we conducted a comprehensive evaluation across three distinct scenarios, encompassing tracking sensor data, image data, and the fusion of both. The results obtained from these evaluations underscore the significant impact of employing sensor fusion along with the SAC algorithm to enhance the performance of DRL algorithms in car navigation. Future work for this research includes the real-world implementation of the algorithm, addressing uncertainties inherent in practical settings, and increasing the variety of sensors employed in the navigation system. These explorations hold the promise of advancing the state-of-the-art in autonomous vehicle navigation. | http://arxiv.org/abs/2312.16620v1 | {
"authors": [
"Amin Jalal Aghdasian",
"Amirhossein Heydarian Ardakani",
"Kianoush Aqabakee",
"Farzaneh Abdollahi"
],
"categories": [
"eess.SY",
"cs.SY"
],
"primary_category": "eess.SY",
"published": "20231227155938",
"title": "Autonomous Driving using Residual Sensor Fusion and Deep Reinforcement Learning"
} |